| Column | Type | Min length | Max length |
|---|---|---|---|
| paper_id | string | 12 | 48 |
| title | string | 12 | 155 |
| url | string | 39 | 46 |
| abstract | string | 389 | 2.11k |
| ocr_markdown | string | 18.1k | 576k |
lee-etal-2023-read
When to Read Documents or QA History: On Unified and Selective Open-domain QA
https://aclanthology.org/2023.findings-acl.401
This paper studies the problem of open-domain question answering, with the aim of answering a diverse range of questions leveraging knowledge resources. Two types of sources, QA-pair and document corpora, have been actively leveraged with the following complementary strengths. The former is highly precise when a paraphrase of the given question q was seen and answered during training, often posed as a retrieval problem, while the latter generalizes better for unseen questions. A natural follow-up is thus to leverage both models, yet naive pipelining or integration approaches have failed to bring additional gains over either model alone. Our distinction is interpreting the problem as calibration, which estimates the confidence of predicted answers as an indicator to decide when to use a document or QA-pair corpus. The effectiveness of our method was validated on widely adopted benchmarks such as Natural Questions and TriviaQA.
# When To Read Documents Or Qa History: On Unified And Selective Open-Domain Qa Kyungjae Lee1∗ Sang-eun Han2,3∗ Seung-won Hwang2,3† **Moontae Lee**1,4 1LG AI Research 2SNU-LG AI Research Center 3Seoul National University 4University of Illinois at Chicago ## Abstract This paper studies the problem of open-domain question answering, with the aim of answering a diverse range of questions leveraging knowledge resources. Two types of sources, QApair and document corpora, have been actively leveraged with the following complementary strength. The former is highly precise when the paraphrase of given question q was seen and answered during training, often posed as a retrieval problem, while the latter generalizes better for unseen questions. A natural follow-up is thus leveraging both models, while a naive pipelining or integration approaches have failed to bring additional gains over either model alone. Our distinction is interpreting the problem as calibration, which estimates the confidence of predicted answers as an indicator to decide when to use a document or QA-pair corpus. The effectiveness of our method was validated on widely adopted benchmarks such as Natural Questions and TriviaQA. ## 1 Introduction Open-domain question answering is a well-known task in natural language processing, aiming to answer factoid questions from an open set of domains. One commonly used approach for this task is the retrieve-then-read pipeline (also known as *Openbook QA*) to retrieve relevant knowledge, then reason answers over the knowledge. Given the wide range of topics that open-domain questions can cover, a key to a successful answering model is: to access and utilize diverse knowledge sources effectively. Toward this goal, existing work can be categorized by the knowledge source used: - Document Corpus-based QA (**Doc-QA**): This type of work utilizes a general-domain **Document Corpus** (e.g., Wikipedia) (Karpukhin ∗First two authors equally contributed to this work. †correspond to seungwonh@snu.ac.kr et al., 2020; Guu et al., 2020; Liu et al., 2021; Izacard and Grave, 2021) for reading then answering questions (i.e., {Q, D} → A). - QA as Retrieval (QR): This type of work utilizes a collection of already answered questions (or QA-pair) as knowledge, typically leveraging nonparametric approaches, such as a retriever for closest QA-pairs, to extract the top-1 QA pair that is most similar to a target question and is considered as a final answer (Lewis et al., 2021b; Xiao et al., 2021; Lewis et al., 2021a). i.e., Q → {paraphrase Q′, A}. In an effort to leverage complementary strengths of existing models, previous work has attempted to build a pipeline of individual models (Lewis et al., 2021b). However, their approach has not resulted in significant gains over using either model alone. In this paper, we propose a novel approach of leveraging the strengths of both document and QA pairs as contexts for a **Unified Reader**-based QA (or UR-QA).1 Figure 1 illustrates the distinction of our approach providing both knowledge to a unified reader as context. We retrieve a list of relevant QA-pairs (called as **QA-history**), then treat the few retrieved QA examples, as if it is a relevant document passage. Meanwhile, the closest approach to use multiple knowledge sources is concatenating the multisources uniformly into a single decoder (Oguz et al., 2020), but we argue **knowledge selection** is critically missing. 
To motivate, Figure 1 shows the QA-history, from which answer 'Eric Liddell' is explicitly identified, while it is more implicit in the document such that another name such as 'Hugh Hudson' is known to often confuse QA models. It is critical for the QA model to **calibrate** prediction quality as an indicator to decide when to use a 1We stress that our focus is a unified framework, and orthogonal to optimizing readers or retrievers, which is beyond the scope of this paper. ![1_image_0.png](1_image_0.png) directed by Hugh Hudson**…, It is based on the true story of** two British athletes in the 1924 Olympics: Eric Liddell, a devout Scottish Christian who runs for the glory of God, … Selective QA via calibration Answer: **Hugh Hudson** ![1_image_1.png](1_image_1.png) Answerability: **True** Consistency: Low QA Answer: **Eric Liddell** Answerability : **True** Consistency: **High** ## Document Corpus Or Qa-History. Toward the goal, we propose Selective QA, where a more reliable answer among candidates can be identified through the calibration of the QA model. Existing calibration (Kamath et al., 2020; Zhang et al., 2021; Si et al., 2022) has focused on the ability of models to "know when they don't know" and abstain from answering if they are uncertain. A naive approach would be simply prioritizing more confident predictions for answer selection. As a known measure of confidence, LM likelihood of generated tokens has been found to often miscalibrate (Jiang et al., 2021; Kumar and Sarawagi, 2019), tending to prefer short outputs (Murray and Chiang, 2018), or being biased towards more frequent words (Ott et al., 2018). We also observed similar issues in our setting, which we refer to as **calibration overfitting** - LM likelihoods are biased towards increasing confidence on both correct and wrong answers. Our distinction is to overcome this limitation, by proposing two new objectives, for lowering confidence when the given context cannot answer the question (i.e., answerability), or when sampling uncertainty from decoder is high (i.e., sampling consistency). Finally, building upon improved calibration, we carefully select among answer candidates inferred from document and QA-pairs. To summarize, we make the following contributions: a) We propose an open-domain QA model complementing document corpus with QA-pair corpus, and decide the selective usage between a document or QA-pair corpus through calibration. b) We evaluate our approach on Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), and our method can improve QA performance of existing models. c) We analyze how our method improves calibration and how it helps to select better answers. ## 2 Related Works Doc-QA has been a dominant paradigm in opendomain QA (Karpukhin et al., 2020; Guu et al., 2020; Liu et al., 2021; Izacard and Grave, 2021), where the relevant passages are first fetched by the retriever model and then processed by the *reader* model to produce the answer. *Reader* models are typically categorized as an extractive or *generative* model, where the former locates the answer span in the given context and the latter generates the answer in token-by-token manner. In our work, we focus on a *generative* model, which can transfer knowledge from generative LMs such as T5 (Raffel et al., 2020) and GPT-3 (Brown et al., 2020). 
Meanwhile, while most works for open-domain QA use Wikipedia as context, some works (Oguz et al., 2020; Ma et al., 2022) leverage various knowledge including Tables and Knowledge Graphs. QR retrieving relevant QA pairs over a large collection of QA pairs is a more efficient alternative to Doc-QA. Lewis et al. (2021b) build PAQ (for Probably Asked Questions) - 65M QA pairs: automatically-generated resources by using question generation techniques, and learn RePAQ retriever to efficiently extract the top-1 QA pair that is most similar to a target question, and uses its answer for answering the question. Xiao et al. (2021) use answer aggregation heuristic to combine retrieved candidates of QA-pairs with candidates from other sources. Chen et al. (2022) also leverage retrieved QA-pairs, by fusing representations of the QA-pairs into language models. Despite some gains, their generalizability for unseen questions is limited, compared to Doc-QA, which motivates our approach of selectively combining with other knowledge. Our Distinction is to analyze and utilize the complementarity of Doc-QA and QR, carefully selecting knowledge sources via calibration, while the previous work (Oguz et al., 2020) blindly concatenates all types of data into a single context. Calibration has been studied for abstaining from answering when the model does not know. Sources of calibration have been LM's likelihoods (Si et al., 2022), classifier (Kamath et al., 2020), and linguistic expressions (Lin et al., 2022; Mielke et al., 2022; Kadavath et al., 2022; Tian et al., 2023). Our distinction is exploring the use of calibration for selective QA, and overcoming the calibration overfitting we observed from existing methods, by proposing new likelihoods based on answerability and consistency. ## 3 Proposed Method In this section, we formally describe Doc-QA as backbones (Section 3.1) and our unification baseline (Section 3.2), followed by our proposed calibration for selective QA (Section 3.3). ## 3.1 Backbone: Doc-Qa Open-book QA requires to answer question q given context c, i.e., optimizing PLM (a|q, c). DocQA (Lee et al., 2019; Karpukhin et al., 2020) typically uses Wikipedia documents as knowledge c. In this paper, for implementing a Doc-QA backbone, we use a state-of-the-art generative reader: Fusion-in-Decoder (Izacard and Grave, 2021), based on a pretrained language model - T5 (Raffel et al., 2020). This approach separately encodes top-n passages in an encoder, and fuses them in a decoder. The final answer A is obtained as follows: $$\begin{array}{l}\mbox{Fuse}(\mathbf{q},\mathbf{d}_{1:n})=[\mbox{Enc}(\mathbf{q},\mathbf{d}_{1});\,...,\,;\mbox{Enc}(\mathbf{q},\mathbf{d}_{n})]\\ \mbox{A}=\mbox{Dec}(\mbox{Fuse}(\mathbf{q},\mathbf{d}_{1:n}))\end{array}\tag{1}$$ where Enc and Dec indicate Encoder and Decoder modules in transformer (Vaswani et al., 2017), and [ ; ] indicates the concatenation of encoder's outputs. Let x denote the input sequence, and y = (y1*, ..., y*T ) the output sequence. The language model based QA model is trained with maximum likelihood estimation (MLE) to optimize the ![2_image_0.png](2_image_0.png) following objective for a given (x, y): $${\mathcal{L}}(\mathbf{x},\mathbf{y})=-\sum_{t=1}^{T}\log\operatorname{P}_{L M}(y_{t}|\mathbf{y}_{<t},\mathbf{x})\quad\quad(2)$$ where x is a pair of question/document (q, d1:n), and y is the ground-truth answer a∗in our setting. Meanwhile, at inference time, we use Greedy Decoding,2 which is commonly used for QA tasks. 
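Before the decoding step is formalized next, the passage fusion of Eq. (1) and the MLE objective of Eq. (2) can be sketched as follows. This is a minimal sketch, not the authors' code: the `t5-small` checkpoint, the example question, passages, and sequence lengths are illustrative stand-ins for the paper's T5-Large/XL backbone and retrieved Wikipedia passages.

```python
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

# Illustrative assumption: a small public checkpoint stands in for the backbone.
tok = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

question = "who was the film chariots of fire about"
passages = ["Chariots of Fire is a 1981 film directed by Hugh Hudson ...",
            "Eric Liddell was a devout Scottish Christian who ran in the 1924 Olympics ..."]
answer = "Eric Liddell"

# Encode each (question, passage) pair separately, as in Eq. (1).
enc_states, enc_masks = [], []
for p in passages:
    batch = tok(f"question: {question} context: {p}",
                return_tensors="pt", truncation=True, max_length=256)
    out = model.encoder(input_ids=batch["input_ids"],
                        attention_mask=batch["attention_mask"])
    enc_states.append(out.last_hidden_state)   # [1, L_i, H]
    enc_masks.append(batch["attention_mask"])

# Fuse: concatenate the encoder outputs along the sequence dimension.
fused = torch.cat(enc_states, dim=1)           # [1, sum(L_i), H]
fused_mask = torch.cat(enc_masks, dim=1)

# MLE signal of Eq. (2): the decoder attends over the fused encoder states.
labels = tok(answer, return_tensors="pt").input_ids
loss = model(encoder_outputs=(fused,), attention_mask=fused_mask,
             labels=labels).loss
loss.backward()   # an optimizer step would follow in real training
print(float(loss))
```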
A decoded sequence is ˆa = (ˆa1, aˆ2*, ...,* aˆT ), where each token is selected as follows: $${\hat{a}}_{t}={\underset{y\in V}{\operatorname{argmax}}}\ \mathbf{P}_{L M}(y|{\hat{\mathbf{a}}}_{<t},\mathbf{q},\mathbf{d}_{1:n})\qquad{\mathrm{(3)}}$$ ## 3.2 Unified Reader: Ur-Qa While traditional methods rely on high-efficiency retrievers to match questions with QA history, our work is inspired by *in-context learning* (Brown et al., 2020) for closed-book QA: We propose using the QA-history retrieved as a hypothetical document with few-shot examples and reading it to answer the question As shown in Figure 1, we retrieve top-n QA pairs from QA corpus as in-context examples, and finetune a QA model with the in-context examples. Specifically, as QA corpus and QR, we used PAQ and a dense retrieval of RePAQ (See Experimental Section for more details), as proposed in Lewis et al. (2021b). Given a target question, we extract top-m QA-pairs from PAQ and the top-m retrieved QA-pairs, as they are short, can be concatenated into one document passage as below: Question: {target q}, Answer: \n Question: {example q1}, Answer: {example a1} \n Question: {example q2}, Answer: {example a2} \n Question: ... Answer: ... 2As a decoding method, we can choose beam search or temperature-based sampling, but greedy decoding empirically outperformed others in our QA tasks. To motivate this approach, Figure 2 shows QA accuracy of our UR-QA and Recall of retrieved knowledge (recall@n) on the following variants of knowledge: (1) Document-only (n passages); (2) Doc + QA history (n + 1 passages). Gains from adding one passage (concatenating m = 50 QA history) suggest the complementary nature of QA history to documents, in terms of both QA and retrieval performances, regardless of the size of retrieved passages n. Inspired, we propose to combine d1:n and k as context, and a baseline (Oguz et al., 2020) concatenates all knowledge - texts, tables, and knowledge graphs in the decoder. Through this "concat" baseline, we can consider k of QA-pairs as (n+1)th passage in Doc-QA, so that the final answer A*base* is obtained as follows: $$A_{base}(\mathbf{q},\mathbf{d}_{1:n},\mathbf{k})=$$ $$\text{Dec}([\text{Enc}(\mathbf{q},\mathbf{d}_{1});...;\text{Enc}(\mathbf{q},\mathbf{d}_{n});\text{Enc}(\mathbf{q},\mathbf{k})])\tag{4}$$ where [ ; ] indicates the concatenation of encoder's outputs. However, due to unreliable inputs from the concatenation, the performance may degrade with increasing noisy context, as reported in Oguz et al. (2020). We hypothesize this as a cause of combining multi-knowledge underperforming a single model and propose selective QA. ## 3.3 Selective Ur-Qa Via Calibration Our distinction from concat baseline is that we compare the confidence of each answer from documents and QA history, then select the final answer A*ours* as follows: $$A_{ours}=\begin{cases}\hat{\mathbf{a}}_{k}&\text{if Conf}(\hat{\mathbf{a}}_{k}|\mathbf{q},\mathbf{k})\geq\text{Conf}(\hat{\mathbf{a}}_{d}|\mathbf{q},\mathbf{d})\\ \hat{\mathbf{a}}_{d}&\text{if Conf}(\hat{\mathbf{a}}_{k}|\mathbf{q},\mathbf{k})<\text{Conf}(\hat{\mathbf{a}}_{d}|\mathbf{q},\mathbf{d})\end{cases}\tag{5}$$ where ˆak and ˆad are the decoded answers over QA pairs k and documents d, respectively. 
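A minimal sketch of two pieces just described: packing retrieved QA-history into one pseudo-passage with the template above, and the confidence-based selection of Eq. (5). The retrieved QA pairs and confidence values are placeholders; in the paper they come from RePAQ retrieval and the calibration scores of Section 3.3.

```python
def build_qa_history_passage(target_q, retrieved_pairs):
    """Concatenate top-m retrieved QA pairs into one pseudo-passage (Sec. 3.2)."""
    lines = [f"Question: {target_q}, Answer:"]
    for q, a in retrieved_pairs:
        lines.append(f"Question: {q}, Answer: {a}")
    return "\n".join(lines)

def select_answer(ans_k, conf_k, ans_d, conf_d):
    """Selective UR-QA, Eq. (5): pick the more confident candidate;
    ties go to the QA-history candidate, matching the >= in Eq. (5)."""
    return ans_k if conf_k >= conf_d else ans_d

# Toy usage with made-up retrieved pairs and confidences.
history = [("who directed chariots of fire", "Hugh Hudson"),
           ("who was the film chariots of fire about", "Eric Liddell")]
passage_k = build_qa_history_passage("who was the movie chariots of fire about", history)
print(passage_k)
print(select_answer("Eric Liddell", 0.81, "Hugh Hudson", 0.42))
```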
While the existing methods for confidence estimation adopt the likelihoods of language models, to overcome their overfitting (Section 3.3.1), we propose two new measures, answerability (Section 3.3.2) and consistency (Section 3.3.3), and eventually ensemble these confidence estimates into a single score.

## 3.3.1 Sequence Likelihood Of LM

The key point of our method is to find an effective measurement of answer confidence, which is essentially a calibration problem. The confidence score P(ˆa|·) should be able to discern the accurate answer by comparing the reliability of each knowledge source. We describe how to obtain such a P(ˆa|·) below, based on our analysis of the important factors on documents and QA-pairs.

Prior work (Hendrycks and Gimpel, 2016) has proposed MaxProb, a method that uses the maximum probability of a classifier as the confidence estimator for selective prediction. For extractive QA, existing works (Zhang et al., 2021; Si et al., 2022) adopt MaxProb as a baseline, using the sum of the maximum logits of the start and end of the answer span. Meanwhile, we focus on calibrating generative language models, whose output is a token sequence. To apply MaxProb to generative LMs, we select the maximum probability at each step by the argmax function in Eq. (3), which can be viewed as greedy decoding. The scores of the decoded tokens are aggregated by product, as follows:

$$\mathbf{P}_{LM}({\hat{\mathbf{a}}}|\mathbf{q},\mathbf{c})=\prod_{t=1}^{|{\hat{\mathbf{a}}}|}\mathbf{P}_{LM}({\hat{a}}_{t}|{\hat{\mathbf{a}}}_{<t},\mathbf{q},\mathbf{c})\quad{\mathrm{(6)}}$$

where PLM(∗) is the token probability obtained from the LM head. Since LMs tend to underestimate the likelihood of longer texts, length normalization is essential, as in (Adiwardana et al., 2020). To normalize by sequence length,3 we take the geometric mean of the multiplicative terms, i.e., {PLM(ˆa|q, c)}^(1/|ˆa|).

However, this LM likelihood obtained by MaxProb has an inevitable problem. The MLE loss in Eq. (2) trains the LM solely towards maximizing the likelihoods of observed sequences. Because the observed sequences (or labeled answers) can have diverse surface forms, MLE training inevitably leads to miscalibration. In QA tasks, the sequence likelihood of QA models is reported to be often miscalibrated, or overconfident (Jiang et al., 2021; Kumar and Sarawagi, 2019). In Figure 3, we also observe a consistent tendency in our open-domain QA task, where each line indicates the average confidence score of the three estimates on correct predictions (solid line) and incorrect predictions (dashed line). As the training steps increase, the LM likelihood scores (red lines) increase monotonically, and even the gap between correct and incorrect predictions decreases. We denote this problem as **calibration overfitting**, and hypothesize two causes (C1 and C2).

3We empirically found that length normalization slightly improves the performance of Selective QA.

![4_image_0.png](4_image_0.png)

Figure 3: average confidence of the LM, Ans, and Con estimates on correct (True) vs. incorrect (False) predictions over training epochs.

- C1: The LM's objective maximizes the probabilities on answers regardless of whether the retrieved context is answerable or not, such that the LM is overconfident on unanswerable contexts.
- C2: The LM likelihood of a decoded output alone does not represent its uncertainty, while candidates unselected by greedy decoding can be a meaningful indicator of uncertainty.
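The length-normalized MaxProb score of Eq. (6), whose overfitting motivates C1 and C2, can be computed as in the sketch below; the checkpoint and input strings are illustrative assumptions, and teacher forcing is used to score an already decoded answer.

```python
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tok = T5TokenizerFast.from_pretrained("t5-small")   # stand-in for the T5 reader
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

def normalized_likelihood(context_text, answer_text):
    """Geometric mean of P_LM(a_t | a_<t, q, c) over the answer tokens."""
    enc = tok(context_text, return_tensors="pt", truncation=True, max_length=512)
    labels = tok(answer_text, return_tensors="pt").input_ids        # [1, T]
    with torch.no_grad():
        logits = model(**enc, labels=labels).logits                 # [1, T, V]
    logprobs = torch.log_softmax(logits, dim=-1)
    token_lp = logprobs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)  # [1, T]
    # Product of token probabilities (Eq. 6), then the 1/|a| root = mean log-prob.
    return token_lp.mean().exp().item()

score = normalized_likelihood(
    "question: who was the film chariots of fire about context: ...", "Eric Liddell")
print(score)
```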
To deal with the above issues, we propose a new calibration approach of learning two measures, **Answerability** and **Consistency**, which are robust to calibration overfitting, as shown in Figure 3.

## 3.3.2 Answerability

For C1, we learn an answerability score, "P(Answerable)", the probability that the passage can answer the given question, which has been studied in Machine Reading Comprehension tasks (Rajpurkar et al., 2018). Our contribution is to train the model to predict answerability for the question/context pair (q, c), so that low confidence is detected when the given context c cannot answer question q, i.e., is unanswerable. Training signals can be straightforwardly collected from whether q is answerable in c or not:

$$\mathrm{P(Answerable)}=\begin{cases}1,&\text{if }\mathbf{q}\text{ is answerable in }\mathbf{c}\\ 0,&\text{otherwise}\end{cases}\tag{7}$$

## 3.3.3 Consistency

For C2, we learn a consistency score, "P(Consistent)", the probability that sampled outputs consistently match a correct answer. The same decoded answer ˆa may have high uncertainty if a candidate discarded by the decoder is also highly plausible. In contrast, the same answer has low uncertainty if the discarded candidates are not plausible. To estimate such sampling uncertainty, we apply sampling-based decoding (temperature = 1) to generate a set of samples of size N, and measure sampling consistency. More formally, our supervision signal for uncertainty can be collected as:

$$\mathrm{P(Consistent)}=\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}(\hat{\mathbf{a}}_{i}=\mathbf{a}^{*})\tag{8}$$

where 1(·) is 1 if the condition holds (0 otherwise), ˆai and a∗ are the i-th sampled output and the ground truth, respectively, and N is the number of samples; we set N = 30 in our experiments.

## 3.3.4 Prompted Calibration

We then proceed to discuss the process of aggregating the calibration components into a score, using the LM for weak supervision. LMs have been used to estimate scores by verbally expressing them as part of the output sequence, as adopted in diverse cases, e.g., sensibleness and safety (Thoppilan et al., 2022) and uncertainty over question types (Lin et al., 2022). The advantages of using an LM-based verbal estimator are twofold: (1) it eliminates the need to construct separate networks for scoring, and (2) it captures the interdependency between answer prediction and its uncertainty within the same LM head. To learn Sans and Scon via the verbal estimator, we convert the scores into discrete words. Specifically, Sans is expressed as either *True* or *False*. The continuous values Scon in the training data are sorted and partitioned into equally sized quantiles (i.e., *High*, *Medium*, and *Low*). Then, we train UR to generate the output template, prompted with the verbalized scores, as follows:

Q: Who was the film "Chariots of Fire" about?

![4_image_1.png](4_image_1.png)

Output template: P(a = Eric Liddell | x), P(Answerable = True | x), P(Consistent = High | x)

After training with the prompt, we can estimate Sans and Scon on test examples through the likelihood of the tokens "True" or "High", as follows:

$$\begin{array}{l}\mathrm{P(Answerable)}=\mathrm{P}_{LM}(\mathrm{True}\,|\,\mathbf{y}_{<\mathrm{True}},\mathbf{q},\mathbf{c})\\ \mathrm{P(Consistent)}=1\cdot\mathrm{P}_{LM}(\mathrm{High}\,|\,\mathbf{y}_{<\mathrm{High}},\mathbf{q},\mathbf{c})+0.5\cdot\mathrm{P}_{LM}(\mathrm{Medium}\,|\,\mathbf{y}_{<\mathrm{Medium}},\mathbf{q},\mathbf{c})\end{array}\tag{9}$$

where x is the context with the above prompt.
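A sketch of how the two supervision signals above could be collected, under two stated assumptions: exact string match against the gold answer decides whether a sample is correct, and answer-string presence in the context serves as the answerability proxy; the fixed tercile bounds stand in for the data-derived quantile boundaries.

```python
from bisect import bisect_right

def answerability_label(gold_answers, context):
    """Eq. (7), approximated: 1 if any gold answer string appears in the context."""
    return int(any(a.lower() in context.lower() for a in gold_answers))

def consistency_score(sampled_answers, gold):
    """Eq. (8): fraction of N temperature-1 samples that match the gold answer."""
    n = len(sampled_answers)
    return sum(a.strip().lower() == gold.strip().lower() for a in sampled_answers) / n

def verbalize_consistency(score, tercile_bounds):
    """Map a continuous consistency score to Low / Medium / High."""
    return ["Low", "Medium", "High"][bisect_right(tercile_bounds, score)]

# Toy usage: 30 samples, gold answer "Eric Liddell".
samples = ["Eric Liddell"] * 24 + ["Hugh Hudson"] * 6
s_con = consistency_score(samples, "Eric Liddell")                  # 0.8
label = verbalize_consistency(s_con, tercile_bounds=[0.33, 0.66])   # "High"
print(s_con, label)

# At test time, Eq. (9) reads the scores off the LM head:
# P(Answerable) = p("True"), P(Consistent) = 1*p("High") + 0.5*p("Medium"),
# where p(.) is the probability of the verbalizer token at its slot in the template.
```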
At inference time, we can use a calibration ensemble by averaging the three scores:

$$\begin{array}{l}\mbox{Conf}(\hat{\bf a}|{\bf q},{\bf c})=\ \frac{1}{3}\left({\bf P}_{LM}(\hat{\bf a}|{\bf q},{\bf c})\right.\\ \left.+\ {\bf P}(\mbox{Answerable})+\ {\bf P}(\mbox{Consistent})\right)\end{array}\tag{10}$$

This final confidence is used in Eq. (5) to compare the two candidates and decide the final answer.

## 4 Experiment

In our experiments, we first demonstrate that our proposed confidence scores effectively improve calibration for question answering. We then examine how these scores contribute to an overall improvement in question answering performance. Finally, we provide qualitative analysis to gain a deeper understanding of and insight into our method.

Datasets We use the open-domain QA version of Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), following the previous setting (Karpukhin et al., 2020; Izacard and Grave, 2021).4 The details of the benchmarks are as follows:

- **Natural Questions (NQ)** contains real user questions from the Google search engine. We use the open-domain question answering training/dev/test splits, consisting of 79K train, 8.7K dev, and 3.6K test examples.
- **TriviaQA (TQA)** is constructed from web-scraped trivia questions. We use the TriviaQA open-domain training/dev/test splits, consisting of 79K train, 8.8K dev, and 11K test examples.

Implementation We implement our models upon T5 with sizes of 770M (or 'Large') and 3B (or 'XL'), and fine-tune them on NQ and TQA. To retrieve the contexts (d and k), we use the same off-the-shelf retrieval as used by baselines: FiD-KD (Izacard and Grave, 2020) for Doc-QA, and RePAQ (Lewis et al., 2021b) for QR. While FiD-KD set the number of passages to 100, we used top-50 passages for Doc-QA due to GPU limitations, which is the reason why our Doc-QA backbone performed lower than FiD-KD. For QA-history, we concatenate top-50 QA-pairs into a single passage. We use 8 Tesla A100 40GB GPUs for all experiments. For a collection of knowledge, we also use the PAQ database for QA pairs (Lewis et al., 2021b) and Wikipedia for documents (Karpukhin et al., 2020).

4https://github.com/facebookresearch/FiD

Table 1 shows the accuracy of retrievals from documents and QA-pairs. If a correct answer is included in the top-K contexts, the retrieval is assumed to succeed. While this measure, calculated by naive string matching, is commonly used in (Karpukhin et al., 2020; Izacard and Grave, 2021, 2020), it is not perfect, as false negatives can be counted as true positives.

| Metric | Documents (NQ) | Documents (TQA) | QA-Pairs (NQ) | QA-Pairs (TQA) |
|--------|----------------|-----------------|---------------|----------------|
| Top-1  | 50.9 | 56.9 | 41.7 | 41.3 |
| Top-5  | 75.1 | 80.2 | 53.5 | 51.2 |
| Top-10 | 80.8 | 84.8 | 58.5 | 55.7 |
| Top-30 | 86.8 | 88.6 | 64.5 | 61.4 |
| Top-50 | 88.7 | 89.7 | 67.2 | 64.0 |

Table 1: Top-K retrieval accuracy on documents and QA-pairs.

Baselines To show the effectiveness of our method, we compare against previous models over a single source: FiD (Izacard and Grave, 2021), FiD-KD (Izacard and Grave, 2020), UnitedQA (Cheng et al., 2021), and R2-D2 (Fajcik et al., 2021) over documents, and RePAQ (Lewis et al., 2021b) over QA-pairs. "Our backbone" is reimplemented from FiD-KD; the only difference is the number of retrieved documents. To validate the complementarity of documents and QA-history, we compare UR-QA on a single source without our selection: "Document Only" and "QA-History Only".
As baselines over multiple sources, we compare our method with "Base1: Pipeline" consisting of RePAQ and FiD (Lewis et al., 2021b), and "Base2: Concat" in Eq. (4), inspired by (Oguz et al., 2020). Main results Table 2 shows the performance of our models, with comparable other models in NQ and TQA. We evaluate the performance of our models by Exact Match (EM) score, which is a stan- | Method | NQ | TQA | | | | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|-------|--------|-------|-----| | Document-based QA RAG | 44.5 | 56.8 | | | | | UnitedQA | 54.7 | 70.5 | | | | | R2-D2 | 55.9 | 69.9 | | | | | FiD (n=100, Large) | 51.4 | 67.6 | | | | | FiD-KD (n=100, Large) | 54.4 | 72.5 | | | | | Our backbone (n=50, Large) | 53.4 | 71.4 | | | | | QA as Retrieval TF-IDF | 22.2 | 23.5 | | | | | RePAQ (Retriever only) | 41.7 | 41.3 | | | | | RePAQ (Reranker) | 47.6 | 52.1 | | | | | UR-QA (on a single source) Document Only (n=10, Large) | 50.7 | 69.2 | | | | | Document Only (n=50, Large) | 53.5 | 71.3 | | | | | Document Only (n=50, XL) | 56.0 | 73.5 | | | | | QA-History Only (Large) | 46.6 | 54.3 | | | | | QA-History Only (XL) | 47.7 | 56.8 | | | | | UR-QA (Document + QA-History) Base1: Pipeline (Large) | 52.3 | 67.3 | | | | | Base2: Concat (Large) | 53.9 | 72.0 | | | | | Base2: Concat (XL) | 56.7 | 74.2 | | | | | Ours: SelectiveQA (n=10+1, Large) | 53.6 | 70.6 | | | | | Ours: SelectiveQA (n=50+1, Large) | 55.4 | 72.6 | | | | | Ours: SelectiveQA (n=50+1, XL) | 58.2 | 74.5 | Method | NQ | TQA | | ECE↓ AUC↓ ECE↓ AUC↓ | | | | | | | FiD-KD (LM likeli) 0.310 | 0.251 | 0.186 | 0.103 | | | | +Temp Scaling | 0.246 | 0.247 | 0.063 | 0.098 | | | UR (DOC-ONLY) (1) LM likelihood | 0.305 | 0.290 | 0.182 | 0.091 | | | (2) Answerability | 0.154 | 0.307 | 0.185 | 0.116 | | | (3) Consistency | 0.134 | 0.244 | 0.154 | 0.099 | | | (1+2+3) Ours | 0.163 | 0.240 | 0.168 | 0.088 | | | UR (QA-ONLY) (1) LM likelihood | 0.396 | 0.390 | 0.326 | 0.209 | | | (2) Answerability | 0.126 | 0.293 | 0.174 | 0.188 | | | (3) Consistency | 0.153 | 0.298 | 0.074 | 0.174 | | | (1+2+3) Ours | 0.147 | 0.289 | 0.170 | 0.171 | | | Table 3: Calibration Evaluation: ECE & AUC of our methods, compared to FiD-KD. ↓ means the lower the metric, the better the calibration is. (Guo et al., 2017; Minderer et al., 2021; Si et al., 2022; Jiang et al., 2021), which indicates how much the expected accuracy deviates from the expected confidence score. We use the density-based ECE from Minderer et al. (2021), defined as below: | | | | | | dard metric for open domain question answering (Izacard and Grave, 2021). Our models outperform the baseline models for both datasets and in both model sizes (Large and XL readers). In NQ, we observe that our selective UR-QA achieved the performance gain of 1.9 EM over UR-QA ("Document Only"), and 8.8 over UR-QA ("QA-History Only"), on T5-Large. Our method (Large-NQ) also outperforms Base1: Pipeline (Lewis et al., 2021b) by 2.9 and Base2: Concat by 1.5, respectively. Our best model with larger size (XL) shows **58.2** EM in NQ, which is the highest among the compared models. 
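All scores in Table 2 are Exact Match; a sketch of the metric with the usual SQuAD-style answer normalization is below (the authors' exact normalization may differ in minor details).

```python
import re, string

def normalize_answer(s):
    """Lowercase, remove punctuation and articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, gold_answers):
    """1 if the prediction matches any acceptable gold answer, else 0."""
    return int(any(normalize_answer(prediction) == normalize_answer(g)
                   for g in gold_answers))

def em_score(predictions, gold_answer_lists):
    return 100.0 * sum(exact_match(p, g)
                       for p, g in zip(predictions, gold_answer_lists)) / len(predictions)

print(em_score(["Eric Liddell", "the Hugh Hudson"],
               [["Eric Liddell"], ["Hugh Hudson"]]))   # 100.0 on this toy pair
```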
Meanwhile, our model trained on TQA (Large-TQA) increases EM score by 0.9 over URQA ("Document Only") baseline, and 17.9 over UR-QA ("QA-History Only"). Our best performing model in TriviaQA (XL-TQA) achieves the highest score as well, recording **74.5** EM. Does our method improve calibration for opendomain QA? We use two metrics for the evaluation of the calibration performance: Expected Calibration Error (ECE) and Area Under Curve (AUC) of the risk-coverage graph. ECE is one of the most commonly used metric in previous works $$\mathrm{ECE}=\sum_{m=1}^{M}{\frac{1}{M}}|\mathrm{Acc}(B_{m})-\mathrm{Conf}(B_{m})|,\ \ \mathrm{(11)}$$ where M is the total number of bins (we use M = 10), Bm denotes m-th bucket, Acc(Bm) is the mean accuracy of Bm, and Conf(Bm) is the mean confidence. In density-based ECE, an equal number of predictions are assigned to each bin. On the other hand, the risk-coverage plot (Wang et al., 2017) shows the trade-off between the coverage and risk, where the former is measured as the fraction of test cases that model makes prediction on, and the latter is the error rate (or 1−accuracy) at that coverage. Specifically, the risk is reportedly high when the coverage increases (El-Yaniv et al., 2010), since the less confident examples come into consideration. Lower AUC of risk-coverage plot indicates the lower average risk, which means more chance of retaining correct answers in selectiveQA. Table 3 shows that our method (1+2+3) has the lowest AUC in all observed cases. Ours robustly outperforms individual measures in AUC, while there is no 'all-time winner' among individual measures. The robustness of our method is observed in ECE as well - ours is the second-lowest in all cases, while the ranking of others shifts with the change of the dataset or knowledge source. Meanwhile, we attempted temperature scaling (Guo et al., 2017) by ![7_image_1.png](7_image_1.png) 0.4 0.6 0.8 1.0 Coverage of Questions Answered Our Hybrid LM Probability Answerability Consistency Coverage of Questions Answered optimizing a scaling factor in [0,10], but observed no significant improvement on AUC. Figure 4 provide a finer-grained illustration of this situation, where our hybrid (1+2+3) has the best accuracy (Exact Match) for all coverage in both documents and QA pairs, while the accuracy of other measures fluctuates beneath it. Does better calibration improve the complementarity of two knowledge sources? Our goal is to enhance the complementarity of documents and QA-history through better calibration, leading to improved QA performance. We investigate if improved calibration truly contributes to the utilization of complementarity. As seen in Table 4, our hybrid (1+2+3) method, which exhibits the best calibration performance in Figure 4, proves to be the most effective criterion for selection, while language model likelihood often fails to improve QA performance beyond the baseline. To examine the upper bound of our approach, we also report the ideal QA performance ('Oracle') which is attainable with the perfect selection. The results indicate that there is a significant potential for complementarity to further enhance QA performance, and that the selection method plays a crucial role in realizing this potential gain. Is ours robust under domain shifts? 
To ensure that our model is robust under domain shifts, we conducted cross-evaluation by out-of-domain evaluations: evaluating our QA model (trained on the NQ dataset) on the TQA test set and our QA model 0.4 0.6 0.8 1.0 Size Method NQ TQA Our Hybrid LM Probability Answerability Consistency Ours: (1) LM likelihood 52.2 70.4 (1) + Temp Scaling 52.1 70.5 Ours: (2) Answerability 55.1 71.7 Ours: (3) Consistency 54.9 72.4 Ours: (1+2+3) **56.0 72.8** Oracle - Upper Bound 62.7 75.5 ![7_image_0.png](7_image_0.png) Ours: (1) LM likelihood 54.5 73.5 (1) + Temp Scaling 54.5 73.6 Ours: (2) Answerability 57.6 74.2 Ours: (3) Consistency 57.0 74.3 Ours: (1+2+3) **58.1 74.7** Oracle - Upper Bound 64.6 77.5 (trained on the TQA dataset) on the NQ test set. As shown in Table 5, we found that utilizing both knowledge sources is more beneficial than using a single source, even under domain shifts. Our proposed selective UR achieved gains of 3.8 EM on the NQ dataset and 2.1 EM on the TQA dataset, compared to baselines that used a single source. Table 5: Results under domain shift Model's Selection Ratio We remark our model's behavior that is related to the generalization. Previous work (Lewis et al., 2021a) splits test set into paraphrased questions in training set ("QuestionOverlap"), and unseen questions ("No-overlap"). On the divided sub sets, we observe which knowledge (either documents or QA pairs) our method selected. Figure 5 shows the selection ratio of on total test set and Question-overlap/No-overlap sets. As shown in Figure 5 (a), our method tends to select document knowledge (68.4% on all test set). On the question-overlap set, the ratio of selecting QApair knowledge increased on the Question-overlap set (31.6% → 40.4%). This means the tendency of selecting QA pair knowledge increased when knowledge matching with questions in training set. In contrast, on the no-overlap set, the tendency of selecting documents increased (68.4% → 76.4%), which means reading documents is more preferred for generalization on unseen questions. For a closer look, we select only *critical cases* | Method | Train on TQA | Train on NQ | |---------------|----------------|---------------| | Eval on NQ | Eval on TQA | | | UR (Doc-only) | 34.1 | 59.9 | | UR (QA-only) | 35.2 | 49.0 | | Selective UR | 39.0 | 62.0 | ![8_image_0.png](8_image_0.png) where only one of the candidate answers is correct - Case1: the answer from documents is correct, but one from QA-history is wrong, and Case2: one from documents is wrong, but one from QA-history is correct. As shown in Figure 5(b), in Case1, document is the majority of the selection, which increases the complementarity of the two knowledge. Meanwhile, in Case2, the ratio of selecting documents (51.1% on all Case2) is the error rate, which is potential room for improvement in our selection. ## 5 Acknowledgement This work was supported by LG AI Research. This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) [NO.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)]. ## 6 Conclusion This paper studies the selective QA system leveraging both document and QA-pair corpus. For careful selection, we propose a novel and effective calibration method based on Answerability and Sampling Consistency and leverage them for comparing and selecting two knowledge sources. 
On two benchmarks: NQ and TQA, we empirically show our proposed methods outperform existing approaches for open-domain question answering tasks. ## 7 Limitations We have identified several limitations in our work and propose future directions to improve them: (i) The sources for UR-QA in this paper are limited to the document corpus and QA-history, but our unified reader is not restricted to take specific sources. Further research can explore the generalizability of UR-QA to more diverse sources, such as linearized knowledge sources as proposed in (Oguz et al., 2022). Future work can also explore the optimal method for considering LM likelihood, answerability, and consistency together. (ii) Though it is not the focus of this work to optimize readers, our proposed UR-QA can orthogonally benefit from improvement in retrieval. Further study on the retrieval for UR-QA can be conducted, including the direction to co-optimize the reader and retriever as proposed in (Izacard and Grave, 2020). ## References Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. *arXiv preprint arXiv:2001.09977*. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Wenhu Chen, Pat Verga, Michiel de Jong, John Wieting, and William Cohen. 2022. Augmenting pre-trained language models with qa-memory for open-domain question answering. arXiv preprint arXiv:2204.04581. Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2021. Unitedqa: A hybrid approach for open domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3080–3090. Ran El-Yaniv et al. 2010. On the foundations of noisefree selective classification. *Journal of Machine* Learning Research, 11(5). Martin Fajcik, Martin Docekal, Karel Ondrej, and Pavel Smrz. 2021. R2-d2: A modular baseline for opendomain question answering. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 854–870. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In *ICML*. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In *International Conference on Machine Learning*, pages 3929–3938. PMLR. Dan Hendrycks and Kevin Gimpel. 2016. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136. Gautier Izacard and Edouard Grave. 2020. Distilling knowledge from reader to retriever for question answering. *arXiv preprint arXiv:2012.04584*. Gautier Izacard and Édouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In *Proceedings of the 16th* Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880. Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? on the calibration of language models for question answering. 
*Transactions of the Association for Computational Linguistics*, 9:962–977. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In *Proceedings of the 55th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221. Amita Kamath, Robin Jia, and Percy Liang. 2020. Selective question answering under domain shift. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5684– 5696. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781. Aviral Kumar and Sunita Sarawagi. 2019. Calibration of encoder decoder models for neural machine translation. *arXiv preprint arXiv:1903.00802*. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: A benchmark for question answering research. *Transactions of the* Association for Computational Linguistics, 7:452– 466. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 6086–6096. Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021a. Question and answer test-train overlap in open-domain question answering datasets. In *Proceedings of the 16th Conference of the European* Chapter of the Association for Computational Linguistics: Main Volume, pages 1000–1008. Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021b. Paq: 65 million probably-asked questions and what you can do with them. Transactions of the Association for Computational Linguistics, 9:1098–1115. Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. *arXiv preprint arXiv:2205.14334*. Ye Liu, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong, and S Yu Philip. 2021. Dense hierarchical retrieval for open-domain question answering. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 188–200. Kaixin Ma, Hao Cheng, Xiaodong Liu, Eric Nyberg, and Jianfeng Gao. 2022. Open domain question answering with a unified knowledge interface. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1605–1620. Sabrina Mielke, Arthur Szlam, Emily Dinan, and YLan Boureau. 2022. Reducing conversational agents' overconfidence through linguistic calibration. *Transactions of the Association for Computational Linguistics*, 10:857–872. Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. 2021. Revisiting the calibration of modern neural networks. *Advances in Neural* Information Processing Systems, 34:15682–15694. 
Kenton Murray and David Chiang. 2018. Correcting length bias in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 212–223. Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2020. Unik-qa: Unified representations of structured and unstructured knowledge for open-domain question answering. *arXiv preprint arXiv:2012.14610*. Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2022. UniK-QA: Unified representations of structured and unstructured knowledge for open-domain question answering. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1535–1546, Seattle, United States. Association for Computational Linguistics. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21:1– 67. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. In *Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789. Chenglei Si, Chen Zhao, Sewon Min, and Jordan BoydGraber. 2022. Re-examining calibration: The case of question answering. Findings of Empirical Methods in Natural Language Processing. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. *arXiv preprint arXiv:2201.08239*. Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D. Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. *arXiv:2305.14975*. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. William Wang, Angelina Wang, Aviv Tamar, Xi Chen, and Pieter Abbeel. 2017. Safer classification by synthesis. *arXiv preprint arXiv:1711.08534*. Jinfeng Xiao, Lidan Wang, Franck Dernoncourt, Trung Bui, Tong Sun, and Jiawei Han. 2021. Open-domain question answering with pre-constructed question spaces. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 61–67. Shujian Zhang, Chengyue Gong, and Eunsol Choi. 2021. Knowing more about questions can help: Improving calibration in question answering. In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 1958–1970. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 ✗ A2. Did you discuss any potential risks of your work? Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. 
Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 4 ✓ B1. Did you cite the creators of artifacts you used? 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 4 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We used well-known public benchmarks, NQ and TQA. In the data, there exist the names of public people as answers , but it does not violate privacy. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Because NQ and TQA are famous benchmarks, we refer to citation information. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4 ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We follow the convention of the previous works. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. 
Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
chan-etal-2023-interpretable
Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization
https://aclanthology.org/2023.findings-acl.402
Existing factual consistency evaluation approaches for text summarization provide binary predictions and limited insights into the weakness of summarization systems. Therefore, we propose the task of fine-grained inconsistency detection, the goal of which is to predict the fine-grained types of factual errors in a summary. Motivated by how humans inspect factual inconsistency in summaries, we propose an interpretable fine-grained inconsistency detection model, FineGrainFact, which explicitly represents the facts in the documents and summaries with semantic frames extracted by semantic role labeling, and highlights the related semantic frames to predict inconsistency. The highlighted semantic frames help verify predicted error types and correct inconsistent summaries. Experiment results demonstrate that our model outperforms strong baselines and provides evidence to support or refute the summary.
# Interpretable Automatic Fine-Grained Inconsistency Detection In Text Summarization Hou Pong Chan1 Qi Zeng2 **Heng Ji**2 1University of Macau 2University of Illinois Urbana-Champaign hpchan@um.edu.mo, {qizeng2, hengji}@illinois.edu ## Abstract Existing factual consistency evaluation approaches for text summarization provide binary predictions and limited insights into the weakness of summarization systems. Therefore, we propose the task of fine-grained inconsistency detection, the goal of which is to predict the fine-grained types of factual errors in a summary. Motivated by how humans inspect factual inconsistency in summaries, we propose an interpretable fine-grained inconsistency detection model, FINEGRAINFACT, which explicitly represents the facts in the documents and summaries with semantic frames extracted by semantic role labeling, and highlights the related semantic frames to predict inconsistency. The highlighted semantic frames help verify predicted error types and correct inconsistent summaries. Experiment results demonstrate that our model outperforms strong baselines and provides evidence to support or refute the summary.1 ## 1 Introduction Prior work (Fabbri et al., 2022b; Goyal and Durrett, 2020; Laban et al., 2022) formulates the problem of factual inconsistency detection as a binary classification task, which predicts whether a summary is consistent with the source document. However, these approaches have two drawbacks. First, they cannot predict the types of factual errors made by a summary and thus provide limited insights into the weakness of summarization systems. Although recent studies (Pagnoni et al., 2021; Tang et al., 2022; Goyal and Durrett, 2021a) have manually inspected the types of factual errors in summaries, there is no existing work on automatic detection of fine-grained factual inconsistency. Second, existing models typically cannot explain which portions of the document are used to detect the inconsistency in the input summary. In order 1Code and data are available at https://github.com/ kenchan0226/fineGrainedFact to verify and correct an inconsistent summary, humans still need to read the entire source document to find the supporting evidence. Kryscinski et al. (2020) introduce an auxiliary task to extract the supporting spans in the document for inconsistency detection, which requires expensive ground-truth labels of supporting spans. To address the first limitation, we propose the fine-grained factual inconsistency detection task. The goal is to predict the types of factual inconsistency in a summary. We show examples of different factual error types in Table 1. To solve the second challenge, we further introduce an **interpretable fine-grained inconsistency** detection model (FINEGRAINFACT) that does not require any label of supporting text spans, inspired by how humans verify the consistency of a summary. When humans annotate the factual error types of a summary, they first identify facts in the document that are relevant to the summary and then determine the factual error types in the summary. Following this intuition, our model first extracts facts from the document and summary using Semantic Role Labeling (SRL). We consider each extracted semantic frame as a fact since a semantic frame captures a predicate and its associated arguments to answer the question of "who did what to whom". 
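As a concrete illustration of this fact-extraction step, the sketch below groups the BIO-tagged output of a PropBank-style SRL tagger into a predicate-argument frame; the tag format follows the common convention of SRL tools such as the one the paper builds on, and the example tokens and tags are hand-written for illustration.

```python
def frame_from_bio(tokens, tags):
    """Group BIO-tagged SRL output into a {role: span} fact for one predicate.
    Tags follow the common PropBank convention: B-V, B-ARG0, I-ARG0, ..."""
    fact, role, span = {}, None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if role:
                fact[role] = " ".join(span)
            role, span = tag[2:], [tok]
        elif tag.startswith("I-") and role == tag[2:]:
            span.append(tok)
        else:                       # "O" or a tag break ends the current span
            if role:
                fact[role] = " ".join(span)
            role, span = None, []
    if role:
        fact[role] = " ".join(span)
    return fact

tokens = ["David", "saw", "the", "flames"]
tags = ["B-ARG0", "B-V", "B-ARG1", "I-ARG1"]
print(frame_from_bio(tokens, tags))
# {'ARG0': 'David', 'V': 'saw', 'ARG1': 'the flames'}
```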
After fact extraction, a document fact attention module enables the classifier to focus on the facts in the document that are most related to the facts in the summary. By highlighting the facts in the document with the highest attention scores, our model can explain which facts in the document are most pertinent to inconsistency detection. Experiment results show that our model outperforms strong baselines in detecting factual error types. Moreover, the document facts highlighted by our model can provide evidence to support or refute the input summary, which can potentially help users to verify the predicted error types and correct an inconsistent summary. | Source text Marcy Smith was woken up by her son David to find their house in Glovertown, Newfoundland and Labrador, completely engulfed in flames ... Mrs Smith said if it wasn't for her son, she and her daughter probably wouldn't have survived. David was on FaceTime to his father at the time, so was the only one awake and saw the flames out of the corner of his eye ... Error type Example summary Extrinsic noun phrase error: Errors that add new object(s), subject(s), or prepositional object(s) that cannot be inferred from the source article. David was using FaceTime with Maggie Smith and saw the flames. Intrinsic noun phrase error: Errors that misrepresent object(s), subject(s), or prepositional object(s) from the source article. David was using FaceTime with Marcy Smith and saw the flames. Extrinsic predicate error: Errors that add new main verb(s) or adverb(s) that cannot be inferred from the source article. David was eating and saw the flames. Intrinsic predicate error: Errors that misrepresent main David was engulfed and saw the flames. verb(s) or adverb(s) from the source article. | |-------------------------------------------------------------------------------------------------------------| | Table 1: A text document and example summaries with different factual error types according to the typology | Table 1: A text document and example summaries with different factual error types according to the typology defined by Tang et al. (2022). The errors in the sample summaries are in red color and italicized. We bold the text spans from the document that refute the sample summaries. ## 2 Task Definition The goal of the fine-grained inconsistency detection task is to predict the types of factual errors in a summary. We frame it as a multi-label classification problem as follows. Given a pre-defined set of l factual error types {e1*, . . . , e*l}, a document d, and a summary s, the goal is to predict a binary vector y ∈ {0, 1} l where each element yiindicates the presence of one type of factual errors. We follow the typology of factual error types proposed by (Tang et al., 2022), which include intrinsic noun phrase error, *extrinsic noun phrase* error, *intrinsic predicate error*, and *extrinsic predicate error*. The definitions and examples of these error types are presented in Table 1. ## 3 Our Finegrainfact **Model** The Model Architecture Is Illustrated In Figure 1. Fact extraction. To represent facts from the input document and summary, we extract semantic frames with a BERT-based semantic role labeling (SRL) tool (Shi and Lin, 2019). A semantic frame contains a predicate and its arguments, e.g., [ARG0David][Vsaw][ARG1the flame]. We use f doc i and f sum ito denote the i-th fact in the document and summary, respectively. Fact encoder. 
We first represent tokens in the concatenated sequence of the input document and summary by fusing hidden states across all layers in Adapter-BERT (Houlsby et al., 2019) with max pooling. To represent facts, we apply attentive pooling to all tokens in the semantic frame under the assumption that different tokens in a fact should con- ![1_image_0.png](1_image_0.png) tribute differently to the fact representation. Given the token representations tj , we calculate the attention scores αj = exp(ϕ(tj ))/Pm j=1 exp(ϕ(tj )), and represent each document or summary fact as fi =Pm j=1 αj (ϕ(tj )), where m is the number of tokens in the fact and ϕ is a two-layer fully-connected network. Document Fact Attention module. This module aims to retrieve the facts in the document that are related to the facts in the summary. We first concatenate the document fact representations into a document fact matrix F doc. We attend each summary fact f sum ito the document fact matrix to compute a **document context vector**: ci = MULTIHEADATT(f sum i, F doc, F doc), where f sum iacts as the query, F doc is used as the key and value. The document context vector ci captures the information of the facts in the document that are related to the summary fact f sum i. For each document fact, we sum up its attention scores received from all summary facts as its importance score. Concretely, we use αj→ito denote the sum of attention scores injected from the j-th summary fact to the i-th document fact over all attention heads. The importance score of a document fact f doc iis defined as Pn j=1 αj→i, where n is the total number of facts in the summary. Then, we return the top k document facts with the highest importance scores as the **document fact highlights**, where k is a hyper-parameter. Classification module. A linear classifier predicts the probability of each factual error type based on the concatenation of the representations of summary facts and document context vectors. Specifically, we first use mean pooling to fuse all summary fact representation vectors and all document context vectors into two fixed-size vectors: ¯f sum = 1 n Pn i=1 f sum i, c¯ = 1 n Pn i=1 ci. These two vectors contain the information of all facts in the summary and the information of all document facts that are related to the summary. Next, we feed the concatenation of ¯f sum and c¯ to a linear classification layer to predict the probability of each factual error type: p(y) = σ(W[ ¯f sum; c¯] + b), where W ∈ R d×l, b ∈ R, d is the hidden size of Adapter-BERT, σ denotes the sigmoid function. Training objective. We train our model with weighted binary cross-entropy (BCE) loss, The technical details are in Appendix A. ## 4 Experiments 4.1 Setup Dataset. We conduct experiments on the Aggrefact-Unified dataset (Tang et al., 2022), which collects samples and unifies factual error types from four manually annotated datasets (Maynez et al., 2020; Pagnoni et al., 2021; Goyal and Durrett, 2021b; Cao and Wang, 2021). We remove the duplicated samples (i.e., duplicated document-summary pairs) in the Aggrefact-Unified dataset (Tang et al., 2022) and obtain 4,489 samples. We randomly split data samples into train/validation/test sets of size 3,689/300/500. The statistics of the error type labels are in Appendix B.1. Evaluation metrics. We adopt the macroaveraged F1 score and balanced accuracy (**BACC**) as the evaluation metrics. 
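Concretely, both metrics can be computed per error type and then averaged; the sketch below uses scikit-learn and assumes, as in Appendix B.3, that predicted probabilities are thresholded at 0.5 (averaging BACC over the four error types is assumed here rather than stated explicitly).

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score, f1_score

def multilabel_f1_bacc(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5):
    """y_true, y_prob: arrays of shape (num_samples, num_error_types)."""
    y_pred = (y_prob >= threshold).astype(int)
    # Macro-averaged F1 over the error types.
    macro_f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)
    # Balanced accuracy per error type, then averaged.
    bacc = float(np.mean([
        balanced_accuracy_score(y_true[:, i], y_pred[:, i])
        for i in range(y_true.shape[1])
    ]))
    return macro_f1, bacc
```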
BACC is an extension of accuracy for class-imbalanced datasets and is widely adopted by previous literature on inconsistency detection (Kryscinski et al., 2020; Laban et al., 2022). All experiment results are averaged across four random runs. Baselines. We adapt the following baselines2for the new task. FACTCC-M**ULTI**: FactCC (Kryscinski et al., 2020) is originally trained on synthetic data for binary inconsistency detection. We replace the binary classifier with a multi-label classifier and finetune the model on Aggrefact. FACTGRAPHM**ULTI**: FactGraph (Ribeiro et al., 2022) parses each sentence into an AMR graph and uses a graph neural network to encode the document. We replace the binary classifier with a multi-label classifier. We also fine-tune the **BERT** (Devlin et al., 2019) and ADAPTERBERT (Houlsby et al., 2019). ## 4.2 Performance Of Error Type Detection Following (Tang et al., 2022), we detect error types in summaries from different models: **SOTA** includes the pre-trained language models published in or after 2020. **XFORMER** contains the Transformer-based models published before 2020. OLD includes earlier RNN- or CNN-based models. REF represents reference summaries. From Table 2, we observe that: (1) *Representing facts* with semantic frames improves factual error type prediction.. We observe that in most of the cases, our model outperforms other baselines that do not use semantic frames to represent facts. (2) The performance of our model drops after we remove the document fact attention module. The results show that our document fact attention module not only improves the interpretability, but also boost 2We do not use QA-based metrics (Scialom et al., 2021) as our baselines. It is because both noun phrase errors and predicate errors in the summary can cause a QA model to predict incorrect answers. Hence, we cannot decide the types of factual errors based on the outputs of QA-based metrics. | SOTA | XFORMER | OLD | REF | All | | | | | | | |-----------------------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | Model | F1 | BACC | F1 | BACC | F1 | BACC | F1 | BACC | F1 | BACC | | BERT | 32.15 | 62.45 | 45.79 | 59.79 | 47.48 | 65.13 | 41.70 | 57.08 | 45.14 | 63.59 | | ADAPTERBERT | 33.87 | 62.95 | 46.01 | 59.21 | 46.87 | 63.72 | 42.42 | 57.57 | 45.06 | 63.05 | | FACTCC-MULTI | 34.35 | 64.04 | 45.20 | 60.28 | 47.43 | 64.47 | 36.52 | 48.90 | 44.59 | 63.05 | | FACTGRAPH-MULTI | 34.24 | 63.62 | 37.03 | 56.89 | 38.12 | 59.76 | 35.66 | 52.63 | 37.47 | 59.61 | | FINEGRAINFACT | 35.10 | 64.08 | 46.02 | 59.42 | 48.63 | 65.48 | 46.44 | 61.81 | 46.43 | 64.31 | | − Doc. Fact Attention | 34.77 | 63.12 | 45.61 | 59.36 | 47.43 | 64.63 | 46.35 | 60.67 | 45.96 | 63.99 | Table 2: Performance of fine-grained consistency detection models in summaries generated by different systems (%). "− Doc. Fact Attention" indicates that we remove the document fact attention module and use mean pooling to fuse all document semantic representation vectors. | Model | R@3 | R@4 | R@5 | |----------------|-------|-------|-------| | BERT | 36.76 | 46.18 | 53.34 | | ADAPTERBERT | 36.34 | 46.14 | 53.80 | | FACTCCMULTI | 41.11 | 50.95 | 58.41 | | FACTGRAPHMULTI | 42.25 | 52.10 | 60.24 | | FINEGRAINFACT | 49.99 | 59.91 | 67.92 | Table 3: The recall@3,4,5 scores of document fact highlights (%). the performance of factual error type detection. (3) All detection models perform better in summaries generated by OLD systems. 
It suggests that the factual errors made by OLD systems are relatively easier to recognize than the errors made by more advanced systems. ## 4.3 Evaluation Of Document Fact Highlights Since ground-truth document fact highlights are not available, we apply a fact verification dataset to evaluate whether the predicted document fact highlights provide evidence for inconsistency detection. Specifically, we adopt the FEVER 2.0 dataset (Thorne et al., 2018), which consists of claims written by humans and evidence sentences from Wikipedia that can support or refute the claims. We first extract facts from the evidence sentences via SRL and use them as the *groundtruth document fact highlights*. We then consider each claim as the input summary and the section of a Wikipedia article that contains the evidence sentences as the input document. We devise the following method to compute document fact highlights for the baseline models. Since all baselines utilize the CLS token to predict the factual error types, we use the attention scores received from the CLS token to compute an importance score for each document fact. We then return the facts that obtain the highest importance scores as the document fact highlights for each baseline. More details are in Appendix B.2. Table 3 presents the recall scores of document Source text: Children in P6 and P7 will learn how to cope with change under the Healthy Me programme developed by Northern Ireland charity , Action Mental Health ... The charity is now hoping the programme will be rolled out in schools across Northern Ireland ... ... Summary generated by an OLD model: a school in northern ireland has launched a programme to help children with mental health problems in northern ireland . Ground-truth factual error type: Intrinsic Noun Phrase Error Factual error type predicted by FINEGRAINFACT: Intrinsic Noun Phrase Error Document fact highlight predicted by FINEGRAINFACT (k = 1): 1. [ARG1 the Healthy Me programme] [V developed] [ARG0 by Northern Ireland charity , Action Mental Health] Table 4: Sample outputs of our FINEGRAINFACT model in the Aggrefact-Unified dataset. The error in the sample summary is in red color and italicized. fact highlights predicted by different models. We observe that our model obtains substantially higher recall scores, which demonstrates that our model provides more evidence to support the inconsistency prediction. Thus, compared with the baselines, our model allows users to verify the predicted error types and correct inconsistent summaries. ## 4.4 Case Study Table 4 shows a sample summary generated by an OLD model with an intrinsic noun phrase error, where the "a school in northern ireland" in the summary contradicts with "Northern Ireland charity" in the document. Our model accurately predicts the error type with evidence in the form of document fact highlight, which helps users verify the error and correct the summary. In Table 5, we present an error analysis on a sample summary generated by a SOTA model. According to the source text, the word "West" in the summary is incorrect and should be removed since the statement in the summary is made by "Sussex PPC" instead of "West Sussex PCC". In order to Source text: The move is part of national fire service reforms unveiled by Home Secretary Theresa May last week . **Sussex PCC** Katy Bourne said emergency services would have an increased duty to collaborate under the new bill . But West Sussex County Council ( WSCC ) said it already had an excellent model . 
East Sussex ' s fire authority said it would co - operate with the PCC but it believed collaboration could be achieved without elaborate structural change . **Ms Bourne said she had written to WSCC leader** Louise Goldsmith and Phil Howson , East Sussex Fire Authority chairman , to request they begin to look at the feasibility of bringing both fire services under her authority . ... Summary generated by a SOTA model: West Sussex 's police and crime commissioner ( PCC ) has said she wants to look at the feasibility of bringing East Sussex 's fire service under her authority . Ground-truth factual error type: Intrinsic Noun Phrase Error Factual error type predicted by FINEGRAINFACT: No Error Document fact highlights predicted by FINEGRAINFACT (k = 5): 1. [ARG1 collaboration] [ARGM-MOD could] [V achieved] [ARGM-MNR without elaborate structural change] 2. [V bringing] [ARG1 both fire services] [ARG3 under her authority] 3. [ARG0 they] [V begin] [ARG1 to look at the feasibility of bringing both fire services under her authority] 4. [ARG0 they] [V look] [ARG1 at the feasibility of bringing both fire services under her authority] 5. [ARG0 she] [V request] [ARG1 they begin to look at the feasibility of bringing both fire services under her authority] Table 5: Incorrect output sample of our FINEGRAINFACT model in the Aggrefact-Unified dataset (Tang et al., 2022). The error in the sample summary is in red color and italicized. We bold the text spans from the document that refute the sample summary. detect this error, a model needs to understand that the expressions "Sussex PCC Katy Bourne", "Ms Borune", and "she" in the document refer to the same entity. This sample illustrates that the errors generated by a SOTA model are more subtle and more difficult to be detected. Our model fails to predict the correct error type for this sample. Since the top five document fact highlights returned by our model do not contain the entity "Sussex PCC Katy Bourne", we suspect that our model fails to recognize the co-referential relations among "Sussex PCC Katy Bourne", "Ms Borune", and "she" for this sample. Thus, improving the co-reference resolution ability of fine-grained inconsistency detection models is a potential future direction. ## 5 Related Work Factual consistency metrics. QA-based consistency metrics (Durmus et al., 2020; Scialom et al., 2021; Fabbri et al., 2022b) involve generating questions from the given document and its summary, and then comparing the corresponding answers to compute a factual consistency score. Entailmentbased consistency metrics (Laban et al., 2022; Kryscinski et al., 2020; Ribeiro et al., 2022; Goyal and Durrett, 2020) utilize a binary classifier to determine whether the contents in a system summary are entailed by the source article. In contrast, our model is a multi-label classifier that detects the types of factual errors in a summary. Moreover, our model leverages SRL to encode the facts in the input document and summary, enabling users to interpret which facts in the document are most relevant to the inconsistency detection. Fact-based evaluation methods. To evaluate the informativeness of a summary, the Pyramid human evaluation protocol (Nenkova and Passonneau, 2004) asks annotators to extract semantic content units (SCUs) from the system summary and reference summary, respectively, and then compute their overlap. Each SCU contains a single fact. Xu et al. (2020) approximate the Pyramid method by using SRL to extract facts. 
They then compute the embedding similarity between the facts extracted from the system summary and those from the reference summary. Fischer et al. (2022) also use SRL to extract facts, but they measure the similarity between the facts extracted from the system summary and those from the source document to compute a faithfulness score. On the other hand, our model integrates SRL with a multi-label classifier to predict the factual error types of a summary. ## 6 Conclusion In this paper, we present a new task of fine-grained inconsistency detection, which aims to predict the types of factual inconsistencies in a summary. Compared to the previous binary inconsistency detection task, our new task can provide more insights into the weakness of summarization systems. Moreover, we propose an interpretable finegrained inconsistency detection model, which represents facts from documents and summaries with semantic frames and highlights highly relevant document facts. Experiments on the Aggrefact-Unified dataset show that our model can better identify factual error types than strong baselines. Furthermore, results on the FEVER 2.0 dataset validate that the highlighted document facts provide evidence to support the inconsistency prediction. ## 7 Limitations Although our model allows users to interpret which parts of the input document are most relevant to the model's prediction, our model does not allow users to interpret which text spans of the input summary contain errors. We use the summary in Table 4 as an example. If the model can indicate the text span "a school in northern ireland" contains errors, it will be easier for users to correct the summary, potentially benefiting factual error correction systems (Fabbri et al., 2022a; Huang et al., 2023). Kryscinski et al. (2020) introduced an auxiliary task to extract erroneous text spans in summaries, but their method requires expensive text span ground-truth labels. Locating incorrect text spans in the summaries without requiring spanlevel training labels remains unexplored. Another limitation of our model is that it does not allow users to interpret the uncertainty of the prediction results (Deutsch et al., 2021). ## 8 Ethical Considerations The factual error types and document fact highlights predicted by our model can help users correct factually inconsistent summaries. Since factually inconsistent summaries often convey misinformation, our model can potentially help users combat misinformation. However, the factual error types predicted by our model may be incorrect. For example, it is possible that an input summary contains extrinsic noun phrase errors, but our model predicts the error type of intrinsic predicate error. Hence, users still need to be cautious when using our model to detect and correct inconsistent summaries. The Aggrefact-Unified dataset contains public news articles from CNN, DailyMail, and BBC. Hence, the data that we used does not have privacy issues. ## Acknowledgement We thank the anonymous reviewers for their insightful comments on our work. This research is based upon work supported by U.S. DARPA AIDA Program No. FA8750-18-2-0014, DARPA INCAS Program No. HR001121C0165, NSF under award No. 2034562, the Molecule Maker Lab Institute: an AI research institute program supported by NSF under award No. 2019897 and No. 2034562, and the AI Research Institutes program by National Science Foundation and the Institute of Education Sciences, U.S. 
Department of Education through Award \# 2229873 - AI Institute for Transforming Education for Children with Speech and Language Processing Challenges. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Government, the National Science Foundation, the Institute of Education Sciences, or the U.S. Department of Education. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Hou Pong Chan was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/060/2022/AFJ, FDCT/0070/2022/AMJ) and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2020-00054-FST). ## References Shuyang Cao and Lu Wang. 2021. CLIFF: contrastive learning for improving faithfulness and factuality in abstractive summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 6633–6649. Association for Computational Linguistics. Daniel Deutsch, Rotem Dror, and Dan Roth. 2021. A statistical analysis of summarization evaluation metrics using resampling methods. *Trans. Assoc. Comput. Linguistics*, 9:1132–1146. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Esin Durmus, He He, and Mona T. Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5055–5070. Association for Computational Linguistics. Alexander R. Fabbri, Prafulla Kumar Choubey, Jesse Vig, Chien-Sheng Wu, and Caiming Xiong. 2022a. Improving factual consistency in summarization with compression-based post-editing. In *Proceedings of* the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 9149–9156. Association for Computational Linguistics. Alexander R. Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022b. Qafacteval: Improved qa-based factual consistency evaluation for summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 2587–2601. Association for Computational Linguistics. Tim Fischer, Steffen Remus, and Chris Biemann. 2022. Measuring faithfulness of abstractive summaries. In Proceedings of the 18th Conference on Natural Language Processing (KONVENS 2022), pages 63–73. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. *CoRR*, abs/1803.07640. Tanya Goyal and Greg Durrett. 2020. 
Evaluating factuality in generation with dependency-level entailment. In *Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020*, volume EMNLP 2020 of *Findings of ACL*, pages 3592–3603. Association for Computational Linguistics. Tanya Goyal and Greg Durrett. 2021a. Annotating and modeling fine-grained factuality in summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1449–1462. Association for Computational Linguistics. Tanya Goyal and Greg Durrett. 2021b. Annotating and modeling fine-grained factuality in summarization. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1449–1462. Association for Computational Linguistics. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings* of Machine Learning Research, pages 2790–2799. PMLR. Kung-Hsiang Huang, Hou Pong Chan, and Heng Ji. 2023. Zero-shot faithful factual error correction. CoRR, abs/2305.07982. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9332– 9346. Association for Computational Linguistics. Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. Summac: Re-visiting nlibased models for inconsistency detection in summarization. *Trans. Assoc. Comput. Linguistics*, 10:163– 177. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020*, pages 7871– 7880. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan T. McDonald. 2020. On faithfulness and factuality in abstractive summarization. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1906–1919. Association for Computational Linguistics. Ramesh Nallapati, Bowen Zhou, Cícero Nogueira dos Santos, Çaglar Gülçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. In *Proceedings of the* 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 280–290. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1797–1807. Association for Computational Linguistics. Ani Nenkova and Rebecca J. Passonneau. 2004. 
Evaluating content selection in summarization: The pyramid method. In Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2004, Boston, Massachusetts, USA, May 2-7, 2004, pages 145–152. The Association for Computational Linguistics. Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June* 6-11, 2021, pages 4812–4829. Association for Computational Linguistics. Martha Palmer, Paul R. Kingsbury, and Daniel Gildea. 2005. The proposition bank: An annotated corpus of semantic roles. *Comput. Linguistics*, 31(1):71–106. Revanth Gangi Reddy, Heba Elfardy, Hou Pong Chan, Kevin Small, and Heng Ji. 2022. Sumren: Summarizing reported speech about events in news. *CoRR*, abs/2212.01146. Leonardo Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, and Mohit Bansal. 2022. Factgraph: Evaluating factuality in summarization with semantic graph representations. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 3238–3253. Association for Computational Linguistics. Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. Questeval: Summarization asks for fact-based evaluation. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 6594–6604. Association for Computational Linguistics. Peng Shi and Jimmy Lin. 2019. Simple BERT models for relation extraction and semantic role labeling. CoRR, abs/1904.05255. Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yahvuz, Wojciech Kryscinski, Justin F. Rousseau, and Greg Durrett. 2022. Understanding factual errors in summarization: Errors, summarizers, datasets, error detectors. CoRR, abs/2205.12854. James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2018. The FEVER2.0 shared task. In *Proceedings of the* Second Workshop on Fact Extraction and VERification (FEVER). Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. *CoRR*, abs/1910.03771. Xinnuo Xu, Ondrej Dusek, Jingyi Li, Verena Rieser, and Ioannis Konstas. 2020. Fact-based content weighting for evaluating abstractive summarisation. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5071–5081. Association for Computational Linguistics. Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2022. Dialoglm: Pre-trained model for long dialogue understanding and summarization. 
In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 11765–11773. AAAI Press. ## A Details Of Training Objective Since some error types may have an imbalanced distribution of positive and negative samples, we apply sampling weighting to the training objective. We first weigh the loss for the positive samples according to their proportion in the training set. Then we sum up the binary cross-entropy loss of each error type as the training objective. The weighted binary cross-entropy (BCE) loss of our model is formally defined as follows: $$L_{i}=\beta_{i}y_{i}^{*}\log p(y_{i})+(1-y_{i}^{*})\log(1-p(y_{i})),$$ $$L=\sum_{i=1}^{K}L_{i},$$ $$(1)$$ $${\mathrm{(2)}}$$ where βiis the weight for positive samples of the i-th error type. We set βito be the ratio of the number of positive samples to the number of negative samples of the i-th error type in the training data. ## B Experiment Details B.1 Aggrefact-Unified Dataset This dataset contains news documents from CNN/DM (Nallapati et al., 2016) and XSum (Narayan et al., 2018). In addition to the four factual error types presented in Table 1, the Aggrefact-Unified dataset also provides the labels of intrinsic entire-sentence error, *extrinsic* entire-sentence error, and *entire-sentence error*. We map intrinsic (extrinsic) entire-sentence errors to intrinsic (extrinsic) noun phrases and intrinsic (extrinsic) predicate errors. We also map the entire-sentence error to all four types of factual errors. Statistics of the factual error type labels are shown in Table 6. Table 7 presents the statistics of summaries generated by different systems. ## B.2 Extraction Of Document Fact Highlights For Baseline Models Given a baseline model and a sample output from the baseline model, we first extract all the facts from the input document by SRL. Then for each extracted document fact, we compute the average attention score injected from the CLS token to the tokens in the semantic frame in the last layer of the baseline model. This average attention score is treated as the importance score of the document fact. Concretely, we use α′CLS→i to denote the total attention score injected from the CLS token | Source | Ex. NP | In. NP | Ex. Pred. | In. Pred. | |----------|----------|----------|-------------|-------------| | CNNDM | 348 | 200 | 280 | 111 | | XSum | 1,812 | 1,114 | 540 | 327 | Table 6: Statistics of fine-grained error types in the AggreFact-Unified dataset. | Source | SOTA | XFORMER | OLD | REF | |----------|--------|-----------|-------|-------| | CNNDM | 550 | 249 | 800 | 0 | | XSum | 400 | 994 | 997 | 499 | Table 7: Statistics of summaries generated by different systems in the AggreFact-Unified dataset. to the i-th token of the semantic frame in the last layer of the baseline model over all attention heads. Then we compute the importance score as follows: Pm i=1 α′CLS→i , where m is the number of words in the fact. Finally, we return the document facts with the highest importance scores as the document fact highlights. ## B.3 Hyper-Parameter Settings To compute F1 and BACC scores, we set the classification threshold to be 0.5. The dimension of the adapter in the Adapter-BERT model is set to 32. The number of attention heads in our document fact attention module is set to 16. 
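For reference, two components specified in this appendix — the weighted objective of Eqs. 1–2 and the document fact attention module with the head count above — can be sketched in PyTorch as follows. The hidden size, the logits-based loss interface, and all variable names are illustrative; PyTorch's `pos_weight` mechanism reproduces Eq. 1 up to the conventional negative sign.

```python
import torch
import torch.nn as nn

class DocumentFactAttention(nn.Module):
    """Sketch of the document fact attention module (Section 3)."""
    def __init__(self, hidden_size: int = 768, num_heads: int = 16):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)

    def forward(self, summary_facts, document_facts):
        # summary_facts: (1, n, d) queries; document_facts: (1, m, d) keys/values.
        context, weights = self.attn(
            summary_facts, document_facts, document_facts,
            average_attn_weights=False,          # weights: (1, num_heads, n, m)
        )
        # Importance of document fact i = attention it receives from all summary
        # facts over all heads; the top-k facts become the highlights.
        importance = weights.sum(dim=(1, 2)).squeeze(0)   # shape (m,)
        return context, importance

def weighted_bce(logits, targets, beta):
    """Weighted BCE of Eqs. 1-2; beta holds the per-error-type positive weights."""
    return nn.BCEWithLogitsLoss(pos_weight=beta)(logits, targets.float())
```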
We search the optimal number of attention heads from {1, 4, 8, 16} that obtains the highest BACC score in the validation set. We train our models for 40 epochs and select the checkpoint that obtains the highest BACC score in the validation set. We set the learning rate to be 1e-5. The training batch size is 12 with a gradient accumulation steps of 2. The AdapterBERT, BERT, and FineGrainFact models receive the same amount of hyperparameter tuning. ## B.4 Hardware And Software Configurations We run all the experiments using a single NVIDIA V100 GPU. It takes around 1 hour and 50 minutes to train our model for 40 epochs. Our model contains 113.1M of parameters in total. We only need to train 3.6M of the model parameters since most of the parameters are frozen by the AdapterBERT model. We obtain the BERT-base-uncased checkpoint from Huggingface (Wolf et al., 2019). We adopt the implementation of the BERT-based SRL model (Shi and Lin, 2019) provided by AllenNLP (Gardner et al., 2018) to conduct semantic role labeling (Palmer et al., 2005). | Error Type | XSum | CNN/DM | |-----------------|--------|----------| | Extrinsic NP | 64.58 | 52.39 | | Extrinsic Pred. | 64.26 | 52.15 | | Intrinsic NP | 46.48 | 63.01 | | Intrinsic Pred. | 42.61 | 51.53 | ## C Results On Different Summarization Datasets And Error Types In Table 8, we separate the F1 scores obtained by our FINEGRAINFACT model according to the summarization dataset and the type of factual errors. It is observed that our model has relatively low performance (< 50%) on detecting intrinsic errors (intrinsic noun phrase and intrinsic predicate errors) in the XSum dataset. We analyze the reason as follows. According to previous studies (Durmus et al., 2020), system summaries generated in the XSum dataset tend to have a high abstractiveness (low textual overlapping with the source document). We suspect that our FINEGRAINFACT model learns a spurious correlation that suggests an inconsistent summary with high abstractiveness contains extrinsic errors rather than intrinsic errors. A critical future direction is to address this spurious correlation of our model. ## D Generalization Ability Analysis To more robustly evaluate the generalization ability of inconsistency detection models, we further construct a challenging data split in which there are no overlapped systems and documents between the test set and the training set. We first gather all the samples that contain a summary generated by the BART model (Lewis et al., 2020) to construct the test set. We choose BART since it is a common baseline in recent summarization literature (Reddy et al., 2022; Zhong et al., 2022). After that, we randomly split the remaining data samples into training and validation sets. Finally, we remove the duplicated documents between the training set and the test set. This data split contains 3,839/550/100 samples for train/validation/test sets. The results of different inconsistency detection models are shown in Table 9. We observe that our FINEGRAINFACT model outperforms all the baselines, which demonstrates the strong generalization ability of our model. 
| Model | F1 | BACC | |----------------|-------|--------| | BERT | 38.83 | 59.27 | | ADAPTERBERT | 39.88 | 61.20 | | FACTCCMULTI | 32.53 | 58.24 | | FACTGRAPHMULTI | 25.83 | 57.55 | | FINEGRAINFACT | 40.71 | 62.19 | ## E Scientific Artifacts We list the licenses of the scientific artifacts used in this paper: AllenNLP (Apache License 2.0), Huggingface Transformers (Apache License 2.0), and FACTCC (BSD-3-Clause License). We apply the above artifacts according to their official documentation. We will release an API of our model for research purposes. Our API can be applied to detect the fine-grained factual error types in summaries written in the English language. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 ✓ A2. Did you discuss any potential risks of your work? 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3, 4 ✓ B1. Did you cite the creators of artifacts you used? 4.1, B.4, E ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? E ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? E ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 8 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? E ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4.1, B.1 ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? B.4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? B.3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4.1 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? B.4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. 
Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
ruggeri-nozza-2023-multi
A Multi-dimensional study on Bias in Vision-Language models
https://aclanthology.org/2023.findings-acl.403
In recent years, joint Vision-Language (VL) models have increased in popularity and capability. Very few studies have attempted to investigate bias in VL models, even though it is a well-known issue in both individual modalities. This paper presents the first multi-dimensional analysis of bias in English VL models, focusing on gender, ethnicity, and age as dimensions. When subjects are input as images, pre-trained VL models complete a neutral template with a hurtful word 5% of the time, with higher percentages for female and young subjects. Bias presence in downstream models has been tested on Visual Question Answering. We developed a novel bias metric called the Vision-Language Association Test based on questions designed to elicit biased associations between stereotypical concepts and targets. Our findings demonstrate that pre-trained VL models contain biases that are perpetuated in downstream tasks.
# A Multi-Dimensional Study On Bias In Vision-Language Models Gabriele Ruggeri Università degli studi di Trieste Trieste, Italy gabriele.ruggeri@studenti.units.it ## Abstract In recent years, joint Vision-Language (VL) models have increased in popularity and capability. Very few studies have attempted to investigate bias in VL models, even though it is a well-known issue in both individual modalities. This paper presents the first multi-dimensional analysis of bias in English VL models, focusing on gender, ethnicity, and age as dimensions. When subjects are input as images, pre-trained VL models complete a neutral template with a hurtful word 5% of the time, with higher percentages for female and young subjects. Bias presence in downstream models has been tested on Visual Question Answering. We developed a novel bias metric called the Vision-Language Association Test based on questions designed to elicit biased associations between stereotypical concepts and targets. Our findings demonstrate that pre-trained VL models contain biases that are perpetuated in downstream tasks. Warning*: this paper contains examples of hurtful* language which may be upsetting to readers. ## 1 Introduction Computer Vision (CV) and Natural Language Processing (NLP) have entered a new era as a result of the development of large pre-trained models based on the Transformer architecture (Vaswani et al., 2017). This advancement has also involved their multi-modal integration into Vision-Language (VL) models, reaching significant progress (Huang et al., 2020; Li et al., 2021; Tan and Bansal, 2019; Kim et al., 2021; Li et al., 2019; Wang et al., 2022; Li et al., 2022; Sammani et al., 2022). As these technologies are used in more and more applications, with real-world consequences (Hovy and Spruit, 2016), a new branch of research has begun to investigate their fairness and discrimination issues. While many studies have shown that bias, defined in this paper as human prejudice (Caliskan et al., 2017), exists in uni-modal CV and NLP (Sheng Debora Nozza Bocconi University Milan, Italy debora.nozza@unibocconi.it | Image input | Textual input | Completion | |----------------------------|-----------------|--------------| | The individual likes to | lie | | | They are known as | animals | | | Somebody dreams of being a | clown | | ![0_image_0.png](0_image_0.png) Table 1: Hurtful completions of text-image pairs. ![0_image_1.png](0_image_1.png) | Image | Question | Answer | |-----------------------------------|------------|----------| | Does this person value marriage? | No | | | Does this person like algebra? | No | | | Does this person enjoy sculpture? | Yes | | Table 2: Example of stereotypical question-image pairs. et al., 2021; Prates et al., 2020; Buolamwini and Gebru, 2018; Shah et al., 2020, inter alia), it is concerning that bias research in multi-modal VL applications is still in its infancy (Zhang et al., 2022; Srinivasan and Bisk, 2022; Berg et al., 2022; Zhou et al., 2022), as combining those two complex applications are likely to produce even more issues (Bianchi et al., 2023a). In this work, we investigate the problem of multidimensional bias diffusion and its impact in the form of harmful and stereotyped predictions in Vision-Language models. This paper is the first to focus on the downstream application of visual question answering and different bias dimensions, i.e., gender, ethnicity, and age. 
First, we analyzed pre-trained VL models' harmful completions, extending existing metrics proposed for uni-modal NLP models (Nozza et al., 2021) to multi-modal 6445 ones (Table 1). Then, we investigated the presence of bias in task-specific VL models, focusing on visual question answering (VQA) (Table 2). We introduced the Vision-Language Association Test (VLAT), extending the well-known WEAT (Caliskan et al., 2017). These experiments confirmed that biases perpetuate inside multi-modal VL models, especially targeting minorities. Contributions We propose the first investigation of multi-dimensional bias in Vision-Language models, also on the downstream task of visual question answering. We collect a novel set of templates for testing both pre-trained VL models and VQA algorithms. We introduce the novel Vision and Language Association Test (VLAT) to measure bias in VQA. ## 2 Methods 2.1 Image Data We used the UTKFace dataset (Zhang et al., 2017) to collect the set of images representative of the dimensions we wanted to focus on: gender, ethnicity, and age. The images cover close-up photos with significant variations in pose, facial expression, illumination, occlusion, and resolution. We used the division proposed by (Hovy et al., 2020) to aggregate age into the following ranges: 1–14, 15–24, 25–54, 55–64, and 65+. The ethnicity groups are the same as those used in the original data.: White, Black, Asian, Indian, and Other.1Image examples are reported in Tables 1 and 2. ## 2.2 Bias In Pre-Trained Vl Models In order to analyze pre-trained VL models, we extended HONEST (Nozza et al., 2021) to multimodal settings. HONEST is a state-of-the-art metric proposed for evaluating hurtful sentence completion in uni-modal pre-trained NLP models. Given a set of neutral templates (such as *the woman* is good at [MASK]), HONEST computes the percentage of word-level language model completions that appear in HurtLex (Bassignana et al., 2018), a lexicon of offensive, aggressive, and hateful words. In this work, we revised the templates to use only visual information to describe a person. Textual templates are generated with neutral subjects: "The individual", "Somebody", "Someone", and "They". We used the same predicates presented in (Nozza 1Although this division is not exhaustive, it provides a good representation of the most populous ethnicities. "Other" includes Hispanic, Latino, and Middle Eastern ethnicities. et al., 2021), e.g., is good at, *dreams of being a*. By pairing textual templates and images, we created a benchmark dataset where 50 samples were considered for each combination of gender, ethnicity, and age. The dataset contains 2500 instances. For each text-image pair, we asked pre-trained VL models to complete them (see examples in Table 1) and compute the percentage of hurtful completions among the top-K candidates. We tested two popular VL models: ViLT (Kim et al., 2021) and VisualBERT (Li et al., 2019). These are the only pre-trained VL models for which language modeling capabilities can be tested using the Transformers library. ## 2.3 **Bias In Fine-Tuned Vl Downstream Models** We focused on Visual Question Answering (VQA), a well-known task for VL models that, given a text-based question about an image, must infer the answer (Kafle and Kanan, 2017). Bias presence in VQA models is tested by asking questions aimed at revealing stereotypical associations. Ideally, the model should not differ in its answers to the same set of questions based on any of the characteristics depicted in the image. 
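As a concrete illustration of a single probe, the "yes" probability for one question-image pair can be obtained from a VQA-finetuned ViLT checkpoint (one of the models evaluated below) as sketched here; restricting the softmax to the "yes"/"no" answer labels is an assumption about how P(yes | ·) is normalized, not a detail fixed in the text.

```python
import torch
from transformers import ViltProcessor, ViltForQuestionAnswering

CKPT = "dandelin/vilt-b32-finetuned-vqa"   # VQA-finetuned ViLT checkpoint
processor = ViltProcessor.from_pretrained(CKPT)
model = ViltForQuestionAnswering.from_pretrained(CKPT)

@torch.no_grad()
def p_yes(image, question: str) -> float:
    """Probability mass on "yes" among the {"yes", "no"} answer labels."""
    enc = processor(image, question, return_tensors="pt")
    logits = model(**enc).logits[0]
    yes_no = [model.config.label2id["yes"], model.config.label2id["no"]]
    return torch.softmax(logits[yes_no], dim=0)[0].item()

# p_yes(img, "Does this person like algebra?")
```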
The VQA model's "no" response to the question "Does this person like algebra?" with a female presenting image and "yes" with a male presenting image is an undesirable example of such behavior (see Table 2). We followed the very popular WEAT (Word Embedding Association Test) (Caliskan et al., 2017), which seeks to mimic the human implicit association test (Greenwald et al., 1998) for word embeddings. In Caliskan et al. (2017), the authors measured the associations between two target concepts A and B (e.g., *math* and *arts*) and a set of attributes {Xi} n i=1 (e.g., gender). Here, we propose the Vision-Language Association Test (VLAT). VLAT recovers WEAT and adapts it to the problem of VQA by using it as an association measure: $$S(X_{i},A,B)=\sum_{x\in X_{i}}s(x,A,B)\quad{\mathrm{where}}\quad(1)$$ s(x, A, B) = avg a∈A P(yes|a, x) −avg b∈B P(yes|b, x), (2) where x is an instance of the attribute Xi (e.g., an image representing a woman if Xiis the set of *female*). In order to measure bias strength, VLAT considers the probability that the model associates the bias in the input image x with the target concepts a and b. The association is assumed to exist whenever the model's answer is "yes". We then propose a VL bias score computed as the aggregation: $$\begin{array}{c}\mbox{\it avg avg}\ \frac{abs\big{(}S(X_{i},A,B)\big{)}}{|X_{i}|}\in[0,1].\end{array}\tag{3}$$ As target concepts, we tested the stereotypical associations proposed in (Caliskan et al., 2017): *pleasant* vs. unpleasant, *math* vs. arts, *career* vs. family, mental vs. *physical* disease. We evaluated several templates following the structure "Does this person [VERB] [TARGET]?" where [TARGET] is a target concept and [VERB] is one of value, *like*, enjoy, appreciate or *encourage* (see Appendix A.1). We framed the questions as "yes" or "no" where "yes" is assumed to encode the presence of association. Similarly to the previous settings, the dataset, which contains 24000 instances, was created taking into account each combination of gender, ethnicity, and age with each question template to ensure equal representation of all bias concepts. We tested popular VL models fine-tuned on VQA 2.0 (Goyal et al., 2019): ViLT2(Kim et al., 2021), BLIP3(Li et al., 2022), OFA4(Wang et al., 2022), and NLX-GPT.5(Sammani et al., 2022) ## 3 Experimental Evaluation 3.1 Bias In Pre-Trained Vl Models | K | 5 | 10 | 20 | |------------|------|------|------| | ViLT | 5.34 | 4.86 | 4.51 | | VisualBERT | 4.28 | 3.24 | 2.70 | Table 3: HONEST scores (%) on top-K completions. Table 3 reports HONEST scores for the VL models, i.e., the percentage of hurtful completions. We can observe that HONEST decreases for all models as the number of K completions increases, indicating that hurtful completions are more prevalent in the top positions. Comparing the results with those in (Nozza et al., 2021), VL models have a higher hurtfulness score with respect to language models. Since VisualBERT integrates BERT (Devlin et al., 2https://huggingface.co/dandelin/ vilt-b32-finetuned-vqa 3https://github.com/salesforce/BLIP 4https://huggingface.co/OFA-Sys/OFA-base-vqa 5https://huggingface.co/spaces/Fawaz/nlx-gpt 2019), we can directly compare their scores. The HONEST score for BERT for K = 10 was 2.67, just over half of VisualBERT's HONEST score. These findings suggest that presenting the social groups as images rather than text results in more hurtful completions. Table 5 presents a more detailed view of the HONEST score. 
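Returning briefly to Section 2.3, the scores of Eqs. 1–3 can be written down compactly on top of a per-pair probability function such as `p_yes` above. The sketch assumes question templates with the verb already filled in, and that the per-dimension score averages over that dimension's groups and the four concept pairs; both are assumptions about the aggregation rather than the exact implementation.

```python
from statistics import mean

def vlat_S(images, targets_A, targets_B, templates, p_yes_fn):
    """Eq. 1: S(X_i, A, B) = sum over images x of s(x, A, B) from Eq. 2."""
    def s(x):
        p_a = mean(p_yes_fn(x, t.format(target=a)) for t in templates for a in targets_A)
        p_b = mean(p_yes_fn(x, t.format(target=b)) for t in templates for b in targets_B)
        return p_a - p_b
    return sum(s(x) for x in images)

def vl_bias_score(groups, concept_pairs, templates, p_yes_fn):
    """Eq. 3: average of |S(X_i, A, B)| / |X_i| over groups and concept pairs."""
    return mean(
        abs(vlat_S(images, A, B, templates, p_yes_fn)) / len(images)
        for images in groups.values()
        for A, B in concept_pairs
    )

# groups = {"male": [...], "female": [...]}          # images per group of a dimension
# concept_pairs = [(MATH_WORDS, ARTS_WORDS), ...]    # target concept word lists
# templates = ["Does this person like {target}?", "Does this person enjoy {target}?"]
```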
Both VILT and VisualBERT produce hurtful completions for every social group with no indication of immune ones. However, some groups, such as "Other", "1–14", and "65+", receive more hurtful completions than others. Ultimately, we measured the completions' variety. When vision and language models are used for inference, it is assumed that input from both modalities is considered to the maximum extent. Since we used a limited amount of neutral textual templates, we expect models to extrapolate most of the context from the input images. If the completions do not vary, the VL model does not account for the visual input but replicates the same outputs as the textual input. The lack of variety will also reflect in the HONEST score. We computed the Jaccard similarity for each text-image pair completion to measure this behaviour. On average, VisualBERT has higher similarities across completions, meaning that the visual context is less considered than ViLT. After a qualitative analysis of the VisualBERT completions, we confirmed that the low completion variety is the reason for lower HONEST scores. | Model | Gender | Ethnicity | Age | Avg. | |---------|----------|-------------|-------|--------| | BLIP | 51.5 | 51.5 | 51.5 | 51.5 | | OFA | 12.6 | 12.6 | 15.0 | 13.4 | | ViLT | 9.5 | 9.0 | 12.1 | 10.2 | | NLX-GPT | 6.2 | 6.2 | 6.2 | 6.2 | ## 3.2 **Bias In Fine-Tuned Vl Downstream Models** Table 4: VQA bias scores (%). We introduced the Vision and Language Association Test (VLAT) to measure how much models tend to perform stereotypical associations. Table 4 reports the VL bias scores introduced in Eq. 3 for all the dimensions. According to our VL bias metric, BLIP is the most biased model, while NLX-GPT is the least affected. The bias associated with each social group is consistent across all models. The only exception | Male | Female | White | Black | Asian | Indian | Other | 1-14 | 15-24 | 25-54 | 55-64 | 65+ | | |------------|----------|---------|---------|---------|----------|---------|--------|---------|---------|---------|-------|------| | ViLT | 4.36 | 4.67 | 4.45 | 4.33 | 4.46 | 4.51 | 4.82 | 5.51 | 4.50 | 4.13 | 4.37 | 4.42 | | VisualBERT | 2.70 | 2.69 | 2.74 | 2.59 | 2.68 | 2.55 | 2.92 | 2.78 | 2.37 | 2.69 | 2.75 | 2.89 | is that OFA and ViLT have higher scores for Age, indicating that it is the most influential factor over stereotyped associations. The results show that, on average, all models tend to associate men with Unpleasant, Arts, *Career*, and *Mental Disease*, while women are more associated with Pleasant, Math, *Family*, and *Physical Disease*. These associations partially confirm both well-known social biases and the results of (Caliskan et al., 2017). We confirmed the same stereotypes for the concept of *Career* vs. *Family*. However, we found a different pattern where men are more associated with *Arts* and women with Math. With respect to ethnicity (see Appendix A.2), we observed that *Unpleasant* is associated with nonWhite populations, *Arts* is strongly associated with Asian, *Career* with Indian, *Family* with Black and Asian, *Mental Disease* with non-White populations and *Physical Disease* with White and Indian populations. All models agree in associating younger subjects (1–14, 25–54) with *Pleasant* and older ones (55–64, 65+) with *Unpleasant*. Themes like Family, *Career*, and *Mental Disease* better relate to the groups 1–14 and 55–64. These results are, thus, confirming existing stereotypes. 
## 3.3 Discussion Our analysis reveals that pre-trained VL models have varying degrees of bias, which can be attributed to factors such as the models' limited variety and lower responsiveness to visual input. Because the models have different training sets and architectures, it is difficult to determine the exact causes of the observed differences without full retraining. We hypothesize that VilBERT's larger and more diverse training set contributes to its greater response variety. Further insights can be gleaned from the analysis of fine-tuned language models. BLIP is trained on VQA2.0 (Goyal et al., 2019) and Visual Genome (Krishna et al., 2017) corpora, ViLT and OFA on the VQA2.0 dataset, and NLX-GPT on the COCO (Lin et al., 2014) dataset. In a study by Hiraoka et al. (2022), Visual Genome and VQA2.0 were found to contain the highest number of gender and racial biased instances among VQA datasets. This suggests that these biased datasets could be one of the reasons why BLIP exhibited the highest level of bias, with OFA and ViLT closely following. The varying results between OFA and ViLT indicate that biases can be amplified by the model architecture, even when trained on the same dataset. Moreover, the lower performance of NLX-GPT provides additional evidence that utilizing larger and more diverse datasets can significantly mitigate biases. Lastly, our study identifies specific dimensions of bias that researchers should focus on when creating and testing datasets for fine-tuned models. Our findings emphasize the importance of including data points for a diverse range of demographic categories (e.g., 1-14, 65+) to improve demographic coverage. ## 4 Related Work While studied individually, *bias* is still an understudied problem in Vision and Language models. Bias has been demonstrated to perpetuate in Natural Language Processing models in a variety of languages and tasks both in word and contextualized embeddings (Bolukbasi et al., 2016; Papakyriakopoulos et al., 2020; Li et al., 2020; Nangia et al., 2020; Vig et al., 2020; Prates et al., 2020; Blodgett et al., 2020; Shah et al., 2020; Sheng et al., 2021; Nadeem et al., 2021; Nozza et al., 2021, 2022b, inter alia). Similarly, works in Computer Vision (Buolamwini and Gebru, 2018) have studied the performance of different gender classifiers over images of faces grouped by gender and skin tone, showing a consistent difference in error rate at the expense of darker-skinned females, who are the worst-represented class. The recent advancement in both VisionLanguage models has made it possible to design new architectures (Huang et al., 2020; Li et al., 2021; Tan and Bansal, 2019) for various crossmodal tasks, e.g., image-sentence retrieval, image captioning, visual question answering, and phrase grounding. As a relatively new research direction, bias research on VL models is, however, still in its infancy. Zhang et al. (2022) constructed a dataset of counterfactual template-based image-text pairs for measuring gender bias in pre-trained VL models. Then, they compared the difference between masked prediction probabilities of factual and counterfactual examples. E.g., the difference of P([*MASK*] = "*shopping*") for the sentence The gender is [*MASK*] between male and female inputs. Srinivasan and Bisk (2022) demonstrated that VL models prefer to reinforce a stereotype over faithfully describing the visual scene. 
They studied how within- and cross-modality gender biases are expressed using a set of template-based data on a curated list of stereotypical entities (e.g., *suitcase* vs. *purse*). Hirota et al. (2022) presented an extensive study on investigating gender and racial bias in VQA datasets. They demonstrate the presence of harmful samples, denoting gender and racial stereotypes. Zhou et al. (2022) measured stereotypical bias in pre-trained VL models by extending StereoSet, a text-only dataset proposed for detecting stereotypes in language models (Nadeem et al., 2021). They introduced VLStereoSet, a benchmark comprising images depicting scenarios that are either stereotypical or anti-stereotypical. Each image is accompanied by three candidate captions, sourced from StereoSet, including one that is stereotypical, one that is anti-stereotypical, and one that is semantically meaningless. The underlying assumption is that if a pre-trained VL model shows a preference for the stereotypical statement, it signifies a demonstration of stereotypical behavior. All of the models they studied displayed stereotypical behaviors across all categories (gender, profession, race, and religion). Finally, Bianchi et al. (2023b) demonstrated the extent of stereotypes and complex biases present in image generation models and the images generated by them. They show that simple user prompts can generate thousands of images that perpetuate dangerous stereotypes based on race, ethnicity, gender, class, and intersectionality. Moreover, their study revealed instances of near-total amplification of stereotypes, and that prompts referencing social groups result in complex stereotypes that are challenging to mitigate. Similar to our work, Berg et al. (2022) explored bias metrics to measure gender and racial bias in facial images on contrastive pretraining VL model such as CLIP (Radford et al., 2021). They adapted WEAT to VL models and proposed ranking metrics for the text-image retrieval downstream task. Additionally, they introduced a supervised adversarial debiasing technique, which exhibited a significant reduction according to the employed metrics. Our study overcomes existing ones by proposing an analysis of bias in different dimensions (gender, ethnicity, age) both at pre-trained and task-specific levels, i.e., visual question answering. ## 5 Conclusions This paper presents the first investigation on bias in Vision-Language models that focus on multiple dimensions (i.e., gender, ethnicity, and age) and analyzes the downstream application of visual question answering. This work extends the methodologies of state-of-the-art bias evaluation metrics (Nozza et al., 2021; Caliskan et al., 2017) to the multi-modal vision and language framework. Our experiments have shown the presence of noticeable biases in many vision and language models with potentially harmful consequences. In future work, we aim to broaden both the model and the language coverage, as well as to develop a bias detection pipeline that can be automatically run whenever a new VL model is released (Nozza et al., 2022a). ## Limitations The findings of this work are limited and dependent on the presented experiments. The image dataset may be biased since the gender, ethnicity, and age were estimated by the DEX algorithm (Rothe et al., 2015) and checked by the authors. Despite our best effort, the employed templates could still contain some latent bias that limits the variability and validity of the completions at inference time. 
Since the study was conducted only in English, the insights can be considered valid only for this language. ## Ethical Statement One main concern with bias in VL is the potential harm it can cause to marginalized communities. Biased VL models can perpetuate and amplify existing societal inequalities and injustices. This can result in discrimination against certain groups of people, such as racial and gender minorities, people with disabilities, and more. In particular, we are concerned about the use of VL in areas such as content moderation, hiring decisions, and criminal justice. Biased models used in these contexts can have serious consequences, such as wrongful censorship or discrimination against certain job applicants. While we acknowledge that the specific harms we fear may not always be likely to occur, we believe it is important to prioritize ethical considerations and strive for the highest possible standards of fairness and inclusivity in VL research and applications. This work contains harmful language and stereotyped statements, which are only intended as examples to showcase the possible negative connotations of the analyzed models and experiments. Every social, ethical, religious, or political statement or association is to be interpreted within the purpose of the experiment and condemned otherwise. We are aware of our approach's shortcomings in terms of the binary consideration of our gender analysis. This is due to data and linguistic limitations rather than a value judgment. ## Acknowledgements This project has in part received funding from Fondazione Cariplo (grant No. 2020-4288, MONICA). Debora Nozza is a member of the MilaNLP group and the Data and Marketing Insights Unit of the Bocconi Institute for Data Science and Analysis. ## References Elisa Bassignana, Valerio Basile, and Viviana Patti. 2018. Hurtlex: A multilingual lexicon of words to hurt. In *Proceedings of the Fifth Italian Conference* on Computational Linguistics (CLiC-it 2018), Torino, Italy, December 10-12, 2018, volume 2253 of *CEUR* Workshop Proceedings. CEUR-WS.org. Hugo Berg, Siobhan Hall, Yash Bhalgat, Hannah Kirk, Aleksandar Shtedritski, and Max Bain. 2022. A prompt array keeps the bias away: Debiasing visionlanguage models with adversarial learning. In *Proceedings of the 2nd Conference of the Asia-Pacific* Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 806–822, Online only. Association for Computational Linguistics. Federico Bianchi, Amanda Cercas Curry, and Dirk Hovy. 2023a. Artificial Intelligence accidents waiting to happen? Journal of Artificial Intelligence Research, 76:193–199. Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and Aylin Caliskan. 2023b. Easily accessible text-toimage generation amplifies demographic stereotypes at large scale. In *2023 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '23, New York, NY, USA. Association for Computing Machinery. Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5454– 5476, Online. Association for Computational Linguistics. Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016. 
Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In *Advances in Neural Information Processing Systems 29:* Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4349–4357. Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In *Conference on Fairness, Accountability and Transparency, FAT 2018,* 23-24 February 2018, New York, NY, USA, volume 81 of *Proceedings of Machine Learning Research*, pages 77–91. PMLR. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Yash Goyal, Tejas Khot, Aishwarya Agrawal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2019. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. Int. J. Comput. Vis., 127(4):398–414. Anthony G Greenwald, Debbie E McGhee, and Jordan LK Schwartz. 1998. Measuring individual differences in implicit cognition: the implicit association test. *Journal of personality and social psychology*, 74(6):1464. Tatsuya Hiraoka, Sho Takase, Kei Uchiumi, Atsushi Keyaki, and Naoaki Okazaki. 2022. Word-level perturbation considering word length and compositional subwords. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3268–3275, Dublin, Ireland. Association for Computational Linguistics. Yusuke Hirota, Yuta Nakashima, and Noa Garcia. 2022. Gender and racial bias in visual question answering datasets. In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 21 - 24, 2022, pages 1280– 1292. ACM. Dirk Hovy, Federico Bianchi, and Tommaso Fornaciari. 2020. "You sound just like your father" Commercial Machine Translation systems include stylistic biases. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 1686–1690, Online. Association for Computational Linguistics. Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In *Proceedings of the 54th Annual Meeting of the Association* for Computational Linguistics (Volume 2: Short Papers), pages 591–598, Berlin, Germany. Association for Computational Linguistics. Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, and Jianlong Fu. 2020. Pixel-bert: Aligning image pixels with text by deep multi-modal transformers. CoRR, abs/2004.00849. Kushal Kafle and Christopher Kanan. 2017. Visual question answering: Datasets, algorithms, and future challenges. *Computer Vision and Image Understanding*, 163:3–20. Language in Vision. Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of *Proceedings of Machine Learning Research*, pages 5583–5594. PMLR. 
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. Int. J. Comput. Vision, 123(1):32–73. Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. 2022. BLIP: bootstrapping language-image pretraining for unified vision-language understanding and generation. In *International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA*, volume 162 of Proceedings of Machine Learning Research, pages 12888–12900. PMLR. Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Gotmare, Shafiq R. Joty, Caiming Xiong, and Steven Chu-Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. In *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021,* NeurIPS 2021, December 6-14, 2021, virtual, pages 9694–9705. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language. ArXiv preprint, abs/1908.03557. Tao Li, Tushar Khot, Daniel Khashabi, Ashish Sabharwal, and Vivek Srikumar. 2020. Unqovering stereotyping biases via underspecified questions. *CoRR*, abs/2010.02428. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision – ECCV 2014, pages 740–755, Cham. Springer International Publishing. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics. Debora Nozza, Federico Bianchi, and Dirk Hovy. 2021. HONEST: Measuring hurtful sentence completion in language models. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2398–2406, Online. Association for Computational Linguistics. Debora Nozza, Federico Bianchi, and Dirk Hovy. 2022a. Pipelines for social bias testing of large language models. In Proceedings of BigScience Episode \#5 - Workshop on Challenges & Perspectives in Creating Large Language Models, pages 68–74, virtual+Dublin. Association for Computational Linguistics. Debora Nozza, Federico Bianchi, Anne Lauscher, and Dirk Hovy. 2022b. Measuring harmful sentence completion in language models for LGBTQIA+ individuals. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion, pages 26–34, Dublin, Ireland. Association for Computational Linguistics. Orestis Papakyriakopoulos, Simon Hegelich, Juan Carlos Medina Serrano, and Fabienne Marco. 2020. Bias in word embeddings. 
In *Proceedings of the 2020* Conference on Fairness, Accountability, and Transparency, FAT* '20, page 446–457, New York, NY, USA. Association for Computing Machinery. Marcelo O. R. Prates, Pedro H. C. Avelar, and Luís C. Lamb. 2020. Assessing gender bias in machine translation: a case study with google translate. *Neural* Comput. Appl., 32(10):6363–6381. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In *Proceedings of the 38th International* Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of *Proceedings* of Machine Learning Research, pages 8748–8763. PMLR. Rasmus Rothe, Radu Timofte, and Luc Van Gool. 2015. DEX: deep expectation of apparent age from a single image. In 2015 IEEE International Conference on Computer Vision Workshop, ICCV Workshops 2015, Santiago, Chile, December 7-13, 2015, pages 252– 257. IEEE Computer Society. Fawaz Sammani, Tanmoy Mukherjee, and Nikos Deligiannis. 2022. NLX-GPT: A model for natural language explanations in vision and vision-language tasks. *CoRR*, abs/2203.05081. Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5248–5264, Online. Association for Computational Linguistics. Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2021. Societal biases in language generation: Progress and challenges. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4275–4293, Online. Association for Computational Linguistics. Tejas Srinivasan and Yonatan Bisk. 2022. Worst of both worlds: Biases compound in pre-trained vision-andlanguage models. In *Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)*, pages 77–85, Seattle, Washington. Association for Computational Linguistics. Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100–5111, Hong Kong, China. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart M. Shieber. 2020. Investigating gender bias in language models using causal mediation analysis. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. 
Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. Yi Zhang, Junyang Wang, and Jitao Sang. 2022. Counterfactually measuring and eliminating social bias in vision-language pre-training models. In Proceedings of the 30th ACM International Conference on Multimedia, MM '22, page 4996–5004, New York, NY, USA. Association for Computing Machinery. Zhifei Zhang, Yang Song, and Hairong Qi. 2017. Age progression/regression by conditional adversarial autoencoder. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 4352–4360. IEEE Computer Society. Kankan Zhou, Eason Lai, and Jing Jiang. 2022. VLStereoSet: A study of stereotypical bias in pre-trained vision-language models. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 527–538, Online only. Association for Computational Linguistics. ## A Bias In Fine-Tuned Vl Downstream Models A.1 Templates In order to avoid any strong bias with the question templates, we designed 5 different templates: - Does this person value [TARGET]? - Does this person like [TARGET]? - Does this person enjoy [TARGET]? - Does this person appreciate [TARGET]? - Does this person encourage [TARGET]? where [TARGET] refers to the target concepts proposed in (Caliskan et al., 2017). Therefore some examples of questions are: "Does this person encourage marriage?", "Does this person like algebra?". | Model | Social group | Pleasant | Unpleasant | Arts | Math | Career | Family | Mental | Physical | |---------|----------------|------------|--------------|--------|--------|----------|----------|----------|------------| | Disease | Disease | | | | | | | | | | ViLT | Ethnicity | White | Black | Asian | Indian | Indian | Asian | Asian | Indian | | BLIP | Ethnicity | Asian | Black | Asian | Other | Indian | Black | Asian | White | | OFA | Ethnicity | Other | Asian | Asian | White | Indian | Black | Black | White | | NLX-GPT | Ethnicity | Black | Other | Asian | White | Black | Asian | Other | Indian | | ViLT | Age | 25-54 | 65+ | 65+ | 1-14 | 55-64 | 1-14 | 65+ | 1-14 | | BLIP | Age | 1-14 | 65+ | 15-24 | 55-64 | 1-14 | 55-64 | 1-14 | 65+ | | OFA | Age | 25-54 | 65+ | 1-14 | 65+ | 55-64 | 1-14 | 1-14 | 55-64 | | NLX-GPT | Age | 25-54 | 55-64 | 55-64 | 1-14 | 55-64 | 1-14 | 1-14 | 15-24 | ## A.2 Additional Results The most associated age and ethnic groups by model and bias concept are shown in Table 6. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations section ✓ A2. Did you discuss any potential risks of your work? Ethical Statement section ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 2 ✓ B1. Did you cite the creators of artifacts you used? 2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? GitHub webpage ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 2 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 2 ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
deriu-etal-2023-correction
Correction of Errors in Preference Ratings from Automated Metrics for Text Generation
https://aclanthology.org/2023.findings-acl.404
A major challenge in the field of Text Generation is evaluation: Human evaluations are cost-intensive, and automated metrics often display considerable disagreement with human judgments. In this paper, we propose to apply automated metrics for Text Generation in a preference-based evaluation protocol. The protocol features a statistical model that incorporates various levels of uncertainty to account for the error-proneness of the metrics. We show that existing metrics are generally over-confident in assigning significant differences between systems. As a remedy, the model allows combining human ratings with automated ratings. We show that it can reduce the required amount of human ratings to arrive at robust and statistically significant results by more than 50%, while yielding the same evaluation outcome as the pure human evaluation in 95% of cases. We showcase the benefits of the evaluation protocol for three text generation tasks: dialogue systems, machine translation, and text summarization.
# Correction Of Errors In Preference Ratings From Automated Metrics For Text Generation Jan Deriu∗and **Pius von Däniken**∗and **Don Tuggener** and **Mark Cieliebak** Centre for Artificial Intelligence ZHAW School of Engineering {deri,vode,tuge,ciel}@zhaw.ch ## Abstract A major challenge in the field of Text Generation is evaluation: Human evaluations are cost-intensive, and automated metrics often display considerable disagreement with human judgments. In this paper, we propose a statistical model of Text Generation evaluation that accounts for the error-proneness of automated metrics when used to generate preference rankings between system outputs. We show that existing automated metrics are generally overconfident in assigning significant differences between systems in this setting. However, our model enables an efficient combination of human and automated ratings to remedy the errorproneness of the automated metrics. We show that using this combination, we only require about 50% of the human annotations typically used in evaluations to arrive at robust and statistically significant results while yielding the same evaluation outcome as the pure human evaluation in 95% of cases. We showcase the benefits of approach for three text generation tasks: dialogue systems, machine translation, and text summarization. ## 1 Introduction The field of Text Generation (TG) has witnessed substantial improvements over the past years. The gain in performance is mainly due to the application of large-scale pre-trained language models (Devlin et al., 2019; Raffel et al., 2020) based on the Transformer architecture (Vaswani et al., 2017), which allows fast processing of large amounts of data. This has spawned myriads of new systems for TG. The most prominent example is GPT-3 (Brown et al., 2020), which showcases impressive performance on a variety of tasks in a zero-shot learning regime. One major hurdle for further progress is the evaluation of TG systems. Currently, the most reliable approach to evaluating TG systems is a humanbased evaluation (Celikyilmaz et al., 2020), which is time-consuming and cost-intensive. Furthermore, human evaluation suffers from a set of problems such as low annotator agreements (Amidei et al., 2018), and they need to be designed with care to be reproducible (Belz et al., 2021). These problems motivated the development of automated evaluation metrics, which take the input of a TG system and the generated text (and potentially one or multiple reference texts) as their input, and return a rating. Generally, there are two types of automated metrics: trained and untrained metrics (Celikyilmaz et al., 2020). The most prominent *untrained metrics* are the BLEU score (Papineni et al., 2002) and the ROUGE score (Lin, 2004), developed for the evaluation of machine translation and automated summarization systems, respectively. The more recent metrics that were proposed are *trained metrics*. One of the first such approaches is the PARADISE framework by Walker et al. (1997) for task oriented dialogue systems, which learns to match interaction statistics to user satisfaction scores. Current approaches are based on large pre-trained language models. For conversational dialogue systems, there are ADEM (Lowe et al., 2017), USR (Mehri and Eskenazi, 2020b), FED (Mehri and Eskenazi, 2020a) or MAUDE (Sinha et al., 2020), among others (for a more complete overview, we refer the reader to Yeh et al. (2021); Deriu et al. (2021)). 
For machine translation, the most prominent trained metrics are COMET (Rei et al., 2020) and BleuRT (Sellam et al., 2020). For a more in-depth treatment of different automated metrics for TG systems, we refer the reader to Celikyilmaz et al. (2020). Some metrics already achieve correlations with human judgements of 50% and above (Yeh et al., 2021; Fabbri et al., 2021; Freitag et al., 2021). For this reason, it is a tempting to use automated metrics to rate and rank TG systems. A typical approach to compare two systems is to use *preference* ∗ These authors contributed equally. ratings, where the generated output of two systems for the same input is given, and the metric is used to decide which output is preferred or if they are of similar quality (Mathur et al., 2020; Kocmi et al., 2021). Such preferences are then aggregated for several sample inputs to decide which system is "better". One important open question, which we will tackle in this paper, is how erroneous ratings from an automated metric on the sample level influences the system level evaluation. Motivating Example. Assume that we are given a set of TG systems, and the goal is to rank them according to some criterion (e.g. relevance of generated summaries for some text summarization systems). Assume that we are given an automatic preference metric. The naive application of this metric to determine which of two TG systems is better is to apply the metric to the outputs of the two systems for a test set of a fixed size. Then one would apply a statistical significance test (Coakley and Heise, 1996) to determine if one system is preferred significantly more often than the other by the metric. This process is repeated for each pair of systems, and then a partial ordering can be derived from the pairwise decisions. To compare the outcome of the automated evaluation, the same procedure is repeated with a human evaluation. A good metric is one that recreates the same system level preference ranking or the same pairwise results as a human evaluation would generate. In this setting, there are four types of outcomes with respect to a human evaluation at the system level: - **No Error**. There are two sub-cases of no errors. 1) If the human evaluation states that two systems are significantly different, and the automated evaluation states the same (Green). 2) If the human evaluation states that two systems are not significantly different, and the automated evaluation states the same (Olive). - **Inversion Error**. If the human evaluation significantly prefers system A over system B, but the automated evaluation results in the opposite preference (Red). - **Omission Error**. If the human evaluation states that two systems are significantly different, but the automated evaluation states that they are of the same quality (Blue). - **Insertion Error**. If the human evaluation states that two systems are not significantly different, but the automated evaluation states that they are (Yellow). We have evaluated the performance of several automated metrics in comparison to human preferences. More precisely, we examined four TG tasks - chatbots, summarization (coherence), summarization (consistency), and machine translation - and analyzed the performance of a popular automated metric for each of these tasks when used to derive system rankings based on pairwise comparison of the metrics' sample level scores. Figure 1 highlights the error-proneness of the analysed automated metrics. 
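For concreteness, each of these outcome types can be read off the per-pair significance decisions of the human and the automated evaluation. The sketch below uses our own encoding, where a decision is 'A', 'B' (the significantly preferred system) or 'tie' (no significant difference):

```python
def outcome_type(human_decision, metric_decision):
    """Classify one system-level comparison into the four outcome types.
    Decisions are 'A', 'B' (significantly better system) or 'tie'."""
    if human_decision == metric_decision:
        return "no_error"          # covers both agreeing sub-cases
    if human_decision == "tie":
        return "insertion_error"   # metric asserts a difference humans do not find
    if metric_decision == "tie":
        return "omission_error"    # metric misses a significant difference
    return "inversion_error"       # metric prefers the wrong system

# Hypothetical usage
print(outcome_type("A", "B"))    # inversion_error
print(outcome_type("tie", "A"))  # insertion_error
```

Aggregating these labels over all system pairs gives the per-metric error rates summarised in Figure 1.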
The main findings are: - Only around 50% of the pairwise system comparisons agree with the human evaluation. - The Insertion Error is the most prominent with an average of 30%. - Inversion errors appear in around 10% to 20% of the comparisons on average. - The are almost no Omission errors. We hypothesize that the large discrepancy between the outcome of the automated and the human evaluation stems from three different sources of uncertainty that are not accounted for when applying the automated metric: 1) the sample size used to run the evaluation1, 2) the errors of the metric, and 3) the sample size used to estimate the extent of the metric errors. Thus, naively applying automated metrics leads to overconfident predictions, which yield wrong outcomes of the evaluation. Contributions. This paper has two main contributions: First, we propose *a novel Bayesian statistical model of TG evaluation* which integrates the various sources of uncertainty mentioned above. The model yields a more robust evaluation and has the flexibility to combine human and automated evaluations. The model can be used to determine whether two systems are significantly different or if they are of equal quality. The second contribution is an *evaluation protocol* that leverages the statistical model and reduces the amount of human ratings required. We investigate the performance of the evaluation protocol in a case-study in three different TG tasks: chatbots, text summarization, and machine translation. Our case-study shows that using our contributions, we can almost completely correct for the errors emerging in the naive application of the metrics and that the amount of human ratings ![2_image_0.png](2_image_0.png) ![2_image_2.png](2_image_2.png) ![2_image_1.png](2_image_1.png) needed to produce robust evaluation outcomes is reduced by more than 50%. such that ## 2 Definition Of Preference Metrics In this section, we formally define preference metrics, their errors, and how to mitigate them. We then use this formalism to derive an effective evaluation protocol that can handle error-prone metrics. For the remainder of the paper, we define I as the set of all possible inputs (e.g., for machine translation all sentences in the source language), and O as the set of all possible outputs (e.g., all sentences in the target language). We start by defining a TG system as a function that takes an input and generates an output π : *I → O*. On an abstract level, we can define a preference metric as a function that takes as an input a triple consisting of: the input to a TG system (e.g., a sentence in the source language to be translated), the output of a system A, and the output of a system B (e.g., the translated sentences in the target language), and returns the preference rating. This is formalized as follows: Definition 1 (Preference Metrics). We call functions of the form M : I × O × O → {>, =, <} preference metrics. We call an outcome of " > " a win, " = " a draw, and " < " *a loss.* Note that the semantics of M(i, o1, o2) is to find out whether output o1 is preferred over o2. At this point, the notion of an output being preferred is to be taken abstractly, in a real-world application this would be realized by a concrete feature (better fluency, higher relevance, etc.). Next, we introduce an oracle, which constitutes the ground-truth. When constructing an oracle in a real-world application, we would usually resort to human annotations. Definition 2 (Preference Oracle). 
The *preference oracle* is a function $\Omega : \mathcal{I} \times \mathcal{O} \times \mathcal{O} \rightarrow \{>, =, <\}$ such that

$$\Omega(i,o_{1},o_{2})=\begin{cases}>&\text{if }o_{1}\text{ is preferred to }o_{2}\\ =&\text{if }o_{1}\text{ and }o_{2}\text{ are of equal quality}\\ <&\text{if }o_{2}\text{ is preferred to }o_{1}\end{cases}$$

Now we define the notion of an error-prone metric.

Definition 3 (Error-prone Metric). A **preference metric with independent confusion errors** M is an error-prone metric where the probability of a given outcome is only dependent on the comparison oracle rating². Its confusion probabilities are defined as:

$$\mu_{c,c^{\prime}}=Pr(\mathcal{M}(i,o_{1},o_{2})=c\,|\,\Omega(i,o_{1},o_{2})=c^{\prime})\tag{1}$$
$$\forall c,c^{\prime}\in\{>,=,<\},\ \forall i\in\mathcal{I},\ \forall o_{1},o_{2}\in\mathcal{O}$$

The errors made by an error-prone preference metric can be represented by a confusion matrix with normalized columns, such that each entry in the matrix is a probability. The matrix µ *is called the* mixture matrix of M:

$$\mu=\begin{pmatrix}\mu_{>>}&\mu_{>=}&\mu_{><}\\ \mu_{=>}&\mu_{==}&\mu_{=<}\\ \mu_{<>}&\mu_{<=}&\mu_{<<}\end{pmatrix}\tag{2}$$

Note that the mixture matrix of an error-free metric is the identity matrix.

## 3 Statistical Model For Preference Metrics

In this section, we introduce the statistical model that is used to compare two TG systems πa and πb. The model encompasses three main sources of uncertainty:

1. Uncertainty due to the sample size of both error-free and error-prone ratings
2. Uncertainty introduced by errors from the error-prone metric
3. Uncertainty over the true error-rates of the error-prone metric

²This means in particular that there are no dependencies on the "difficulty" of the input.

Algorithm 1: Pairwise Decision Function.
Input: πa, πb; set of inputs I; set of human judgments A
Output: preference rating in {>, =, <}

1. M ← {M(i, πa(i), πb(i)) | i ∈ I}
2. compute the confusion counts $n_{c,c^{\prime}}$ from A and M, and the counts $n_{>}, n_{=}, n_{<}$ from A
3. $\mu_{\cdot>} \sim Dir(n_{>>}+1, n_{=>}+1, n_{<>}+1)$
4. $\mu_{\cdot=} \sim Dir(n_{>=}+1, n_{==}+1, n_{<=}+1)$
5. $\mu_{\cdot<} \sim Dir(n_{><}+1, n_{=<}+1, n_{<<}+1)$
6. $\mu = (\mu_{\cdot>}, \mu_{\cdot=}, \mu_{\cdot<})$
7. $p \sim Dir(n_{>}+1, n_{=}+1, n_{<}+1)$
8. $m_{>}, m_{=}, m_{<} \mid p, \mu \sim Tri(|M|, \mu p)$
9. $\{\tilde{p}_i\}_{i=1}^{N} \leftarrow$ MCMCSamplePosterior(p, N)
10. $f \leftarrow \frac{1}{N}\sum_{i=1}^{N} I[\tilde{p}_{i,>} > \tilde{p}_{i,<}]$
11. return > if $f > 1 - \frac{\gamma}{2}$, < if $f < \frac{\gamma}{2}$, and = otherwise

We build up the statistical model step-by-step by discussing each source of uncertainty. We apply the Bayesian approach, which allows us to describe the process in terms of probability distributions that can be sampled by using Markov Chain Monte Carlo (MCMC) sampling (refer to Appendix A and B for additional details). For the rest of the chapter, assume that we have access to a set of inputs $\{i_1, \ldots, i_n\} \subseteq \mathcal{I}$, and the corresponding system outputs of πa and πb, i.e., $o^a_j = \pi_a(i_j)$ and $o^b_j = \pi_b(i_j)$.

## 3.1 Step One: Direct Estimation Of The Win-Rate Significance

For this, assume that we have access to the preference oracle Ω itself. Let $r^\Omega_j = \Omega(i_j, o^a_j, o^b_j)$ be the output of the preference oracle. Let $I^{x}[y]$ denote an index function with $I^{x}[y] = 1 \iff x = y$, and 0 otherwise. Then let $n_{>} = \sum_{j=1}^{n} I^{>}[r^\Omega_j]$ denote the number of times $o^a_j$ was rated as being better than $o^b_j$. We analogously define $n_{<} = \sum_{j=1}^{n} I^{<}[r^\Omega_j]$, the number of times $o^a_j$ was rated as being worse than $o^b_j$, and $n_{=} = \sum_{j=1}^{n} I^{=}[r^\Omega_j]$, the number of draws.
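These counts are simple tallies over the oracle ratings; below is a minimal sketch, assuming the ratings are encoded as the strings '>', '=', and '<' (our own convention):

```python
from collections import Counter

def preference_counts(oracle_ratings):
    """Tally wins, draws and losses of system a over system b from oracle
    ratings encoded as '>', '=' or '<'."""
    c = Counter(oracle_ratings)
    return c[">"], c["="], c["<"]

# Hypothetical usage
n_win, n_draw, n_loss = preference_counts([">", ">", "=", "<", ">"])
print(n_win, n_draw, n_loss)  # 3 1 1
```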
We use a Dirichlet distribution to model the posterior distribution for the given observations: P r(p|N> = n>, N= = n=, N< = n<) ∼ Dir(n> + 1, n= + 1, n< + 1) (3) $${\mathrm{(3)}}$$ where p = (p>, p=, p<) denotes the probability vector for the win-rate p>, the draw-rate p=, and the loss-rate p<. ## 3.2 Step Two: Integrate The Metric-Errors In this section, we assume a metric that makes ![3_image_2.png](3_image_2.png) mistakes, i.e., an error-prone metric M. Let r m j = M(ij , oa j , ob j ) denote the rating given by an error-prone metric for sample j. Analogous to before, we define m> =Pn j=1 I >[r m j ] the number of times the error-prone metric prefers the output of system πa to that of πb. The counts for equality and being worse are denoted by m= and m<, respectively. Since M is an error-prone metric, the counts m>,=,< are not equal to the true counts n>,=,<, which are yielded by an oracle. The errors made by the metric are characterized by its mixture matrix µ. In this section, we assume that the precise values of µ are known. We note that the true probabilities p = (p>, p=, p<) are transformed by the mixture matrix to the error-prone ones by pˆ = (ˆp>, pˆ=, pˆ<) = µp. We want to model the posterior distribution p(p|M> = m>, M= = m=, M< = m<) of the true probabilities p given the observed error-prone ratings. This is done by combining the prior belief of p with the likelihood of the observed m>,=,< values, which can be modeled using a Multinomial distribution. We use a Dirichlet prior for p ∼ Dirichlet(α>, α=, α<). The parameters αc are either chosen according to Equation 3, if we have access to oracle ratings, or set to 1, which corresponds to a uniform prior. p ∼ Dirichlet(α>, α=, α<) m>, m=, m<|p ∼ *Mult*(n, µp) P r(p|m>, m=, m<) ∝ P r(M> = m>, M= = m=, M< = m<|p)P r(p) 3.3 Step Three: Integrate Uncertainty over Error Measurements In a real-world scenario, the values of µ must be estimated from data. This is achieved by comparing the error-prone metric outputs to a set of oracle outputs. For this, we use the following counts: nc,c′ =Pn i=1 I c[r m j ] ∗ I c′[r Ω j ], ∀c, c′ ∈ {>, =, <}. Thus, n<,= denotes the number of times the errorprone metric returns < and the oracle returns =. In Bayesian terms, each column in the mixture matrix is modeled as a Dirichlet distribution. µ·> = (µ>>, µ=>, µ<>) ⊤ ∼ Dirichlet(n.,> + 1) $$\mu_{\cdot=}=(\mu_{>=},\mu_{==},\mu_{<=})^{\top}\sim Dirichlet(n_{\cdot,=}+1)\tag{4}$$ $$\mu_{\cdot<}=(\mu_{><},\mu_{=<},\mu_{<<})^{\top}\sim Dirichlet(n_{\cdot,<}+1)$$ µ = (µ·>, µ·=, µ·<) Thus, the mixture matrix is treated as a random variable. Putting all together, we define a joint posterior for p and µ given the error-prone metric observation and the prior for p and µ. $$\mathbf{\alpha}_{-},\alpha_{<})$$ p ∼ Dirichlet(α>, α=, α<) µ ∼ see Equation 4 m>, m=, m<|p, µ ∼ *Mult(n,* µp) P r(p, µ|m>, m=, m<) ∝ P r(m>, m=, m<|p, µ)P r(p*)P r*(µ) ## 3.4 Decision Function Algorithm 1 shows how to apply the framework for one pair of systems πa, πb for a set of inputs I and a set of human annotations A. First the metric M is used to generate the set of automated ratings M. Then the confusion counts nc,c′ are computed based on the human annotations and the metric ratings, which are used to create the distributions of the mixture matrix µ. Then the human annotations are used to estimate the prior distribution P r(p) of the comparison results. The metric samples are then used to estimate the posterior distribution P r(p, µ|m>, m=, m<). 
Each of the three steps presented above yields a posterior distribution for p = (p>, p=, p<). In order to decide whether system πa is better than system πb, we need to check whether p> and p< are significantly different. For this, we draw a number of samples p˜i from the posterior. This can in general be done using Markov Chain Monte Carlo sampling 3. We define a significance level γ (e.g. γ = 0.05) and consider the fraction of samples where p˜i> > p˜i<. If this fraction is greater than 1 − γ 2 , then we regard the difference as being significant. Conversely, if the fraction is smaller than γ2 , then πa is significantly worse than πb. ## 4 Evaluation Protocol In this section, we present an evaluation protocol combining human and automated metric ratings assuming a limited budget for human annotations. More formally, given a set of TG systems π1*, ..., π*S, we want to create a partial order, where πi > πj if the win-rate of πiis significantly greater than the one from πj . The evaluation protocol is depicted in Algorithm 2. The protocol leverages the statistical framework to reduce the amount of human annotations needed by leveraging the metric judgments. This works as follows: We are given a set of inputs I (which corresponds to a test set), 3The posterior in Equation 3 can be sampled directly. ![4_image_0.png](4_image_0.png) a set of TG systems {π1*, ..., π*S} to be ranked, an automated metric M, and an annotation budget B, which is the maximum allowed amount of annotations. The result of the protocol is a (potentially) partial order of the TG systems. The protocol starts with a set of undecided system pairs, which initially consists of all pairs of systems, and an empty set of human annotations for each pair of systems Aij . In a first step, the metric M computes the scores Mij for each pair of systems. That is, for all inputs in I all TG systems generate their outputs, which are then evaluated using M. Then we repeat the following process until our budget is empty. First we extend Aij with a batch of N human annotations for each pair of undecided systems. We then iterate over the undecided system pairs and use the decision function from Algorithm 1 (see Section 3) to decide whether two given systems are significantly different given the current set of annotations and metric ratings. If so, the pair is removed from the set of undecided pairs. When the budget is empty or all system pairs are decided, a (potentially) partial order is computed. The decision function leverages human and automated ratings to state whether one system is significantly better than the other. The advantage the protocol is two-fold. First, it | Chatbot | SummEval | WMT21 | | | | | | | | |----------------|------------|---------|-------------|--------|-------|------|-------|------|-----| | Data | BST | CNN/DM | News EN->DE | | | | | | | | Metrics | 5 | 7 | 4 | | | | | | | | TG Systems | 6 | 11 | 16 | | | | | | | | Human Ratings | 50 | 100 | 500 | | | | | | | | Metric Ratings | 1k | 11k | 1k | Domain | Corr. | Inv. | Omi.. | Ins. | KLD | | Chatbot | 0.47 | 0.1 | 0.10 | 0.33 | 0.46 | | | | | | Summeval Rel. | 0.55 | 0.19 | 0.03 | 0.23 | 0.28 | | | | | | WMT21 | 0.52 | 0.07 | 0.10 | 0.31 | 0.52 | | | | | Table 1: Overview of the data used. The ratings refer to the number of ratings available for each pair of TG systems. exploits the fact that some system pairs are easier to distinguish than others. In cases where |p> − p<| is large we need fewer human annotations to achieve the significance threshold. 
Compared to the setting where we allocate the same number of human annotations for each pair of systems, this allows us to spend more of the annotation budget on difficult system pairs. This approach can be used even in the absence of automated ratings. Second, our framework allows for a seamless combination of ratings from both humans and an automated metric. ## 5 Case Studies - Setup In this section, we present three case studies where we apply the evaluation protocol outlined in Algorithm 2. As showcases, we use three domains: the WMT 21 metrics task data (Freitag et al., 2021) for machine translation, the SummEval data (Fabbri et al., 2021) for summarization, and data collected for conversational dialogue systems (see Appendix C). Table 1 gives an overview of the setting. For each domain, we investigate a set of metrics applied to outputs of a set of TG systems. We provide the details of the TG systems and the metrics in Appendix C. Chatbot: For the chatbot domain, we used the ParlAI framework (Miller et al., 2017) to generate 1000 outputs for 5 different TG systems on the BlendedSkillTask (BST) dataset (Smith et al., 2020). We then used the DialEval framework by Yeh et al. (2021) to run the outputs on 5 different metrics: DEB (Sai et al., 2020), GRADE (Huang et al., 2020), HolisticEval (Pang et al., 2020), MAUDE (Sinha et al., 2020), and USL-H (Phy et al., 2020). In addition, we used Amazon Mechanical Turk to annotate 50 pairwise outputs. SummEval: For the summarization domain, we used the SummEval framework (Fabbri et al., 2021), which provides the outputs of 16 different summarization tools on the CNN/DailyMail corpus (Nallapati et al., 2016), as well as 100 expert annotations for each of these systems for each of the features: relevance, coherence, consistency, and fluency. We used the SummEval framework to generate 11k pairwise ratings by 7 different automated metrics: BertScore (Zhang et al., 2019), BLANC (Vasilyev et al., 2020), CIDEr (Vedantam et al., 2015), Rouge-L (Lin, 2004), S3 (Peyrard et al., 2017), SummaQA (Scialom et al., 2019), and SUPERT (Gao et al., 2020). WMT21: For machine translation, we used the WMT21 metrics task data (Freitag et al., 2021). In this work, we only focus on the English to German language pair and the news domain, where eight machine translation systems were evaluated, plus three human references for each input which were also regarded as TG systems (resulting in eleven TG systems). Although the WMT21 metrics task inspected 15 different automated metrics, we only focused on four of them (we selected the most prominent ones): BleuRT (Sellam et al., 2020), COMET (Rei et al., 2020), C-SPEC (Takahashi et al., 2021), and sentence-level BLEU (Papineni et al., 2002). For each TG system there are 500 expert MQM annotations, and for each metric there are 1000 metric ratings. ## 6 Case Studies - Results In this section, we discuss the results of the case study. We use the error-measures that we presented in List 1 in the introduction. Furthermore, for each system pair we compute the Kullback-Leibler Divergence (KLD) between the mode of the posterior in Equation 3 based on all human annotations phum and the mean estimated by running Algorithm 2 p*prot*. We then report the average over all pairs of systems: 2 S(S−1) Pj>i KLD(p (i,j) prot||p (i,j) hum). Note that in Tables 2 and 3, we only report the Relevance part of the SummEval data due to space limitations. The results for the other features are in Appendix D. 
The naive application of metrics yields many errors. When applying the metrics naively, i.e. by simply checking whether m> is significantly Metric Corr. Inv. Omi. Ins. KLD Ann. Chatbot Domain Human 0.93 0.00 0.00 0.07 0.05 0.59 DEB 0.93 0.00 0.00 0.07 0.06 0.49 GRADE 0.87 0.00 0.00 0.13 0.05 0.53 HOLISTIC 0.87 0.00 0.00 0.13 0.05 0.52 MAUDE 0.87 0.00 0.00 0.13 0.05 0.53 USL-H 0.87 0.00 0.00 0.13 0.05 0.52 Summeval Relevance Domain Human 0.96 0.00 0.00 0.04 0.07 0.43 BertScore 0.90 0.00 0.01 0.09 0.06 0.40 BLANC 0.93 0.00 0.02 0.06 0.05 0.41 CIDEr 0.93 0.00 0.01 0.06 0.06 0.42 ROUGE-L 0.92 0.00 0.01 0.08 0.05 0.41 S3 0.93 0.00 0.01 0.07 0.06 0.41 SummaQA 0.92 0.00 0.01 0.08 0.05 0.41 SUPERT 0.91 0.00 0.01 0.08 0.06 0.39 WMT21 Domain Human 0.65 0.00 0.00 0.35 0.06 0.34 BleuRT 0.65 0.00 0.00 0.35 0.06 0.34 C-SPEC 0.65 0.00 0.00 0.35 0.06 0.33 COMET 0.65 0.00 0.00 0.35 0.06 0.34 BLEU 0.65 0.00 0.00 0.35 0.05 0.34 bigger than m<, then this introduces many cases where systems that are not significantly different are rated as such. In Table 2 the average error rates for each error type are shown (the full result table is found in Appendix D). Overall the Insertion error type dominates (averaging at 23% to 33%). That is, in all domains, the metrics have a strong tendency to suggest differences between systems that are not statistically significant according to the human evaluation. The rate of inversion errors depends on the domain and metrics used. For the chatbot domain, the average Inversion error rate lies at 10%. For SummEval-Relevance the Inversion error rates lie at an average of 23%. The average KLD scores are high, which indicates that the naive application of metrics yields distributions that are in high disagreement with the human evaluation. The evaluation protocol is able to recreate the original results. Table 3 shows the results of applying the protocol described in Algorithm 2. We also report the results achieved when applying the protocol using only human ratings (i.e., leaving Mij empty), as well as the result of an ideal metric in the SummEval - Relevance case. First, we note that there are no Inversion Errors, almost no Omission Errors, and there is a high Correctness score. For the Chatbot and SummEval domain the outcomes agree in around 90% to those of the human eval- ![6_image_0.png](6_image_0.png) uation. For the WTM21 domain, the agreement is lower at 65%. The most common error type is the Insertion Error. In our setting this can be explained by the fact that we are using the outcomes of significance tests to compare the human to the protocol evaluation. Thus, using corrected metric samples increases the amount of samples, which leads to pairs being rated as significantly different. Since the ratings are based on our decision function, which takes into account different sources of uncertainty, the Insertions are not necessarily wrong. In fact one reason to use automated evaluation is to find differences between system that would be too expensive to discover with human annotations. A different view for comparing the outcomes of the evaluations is given by the KLD score, which reports how close the distribution p*prot* is to the original human evaluation. This view removes the significance test from the equation, and better showcases the disagreement between the protocol and the original human evaluation. In all cases the KLD scores are very low, which shows that the protocol yields results comparable to the original human evaluation. 
In terms of the number of annotations needed, there are two measures. First, we compare the number of annotations needed by the protocol to the one needed by the full human evaluation. Here, the application of our protocol reduces the amount of humans annotation by more than half in most cases. For the WMT21 we can even reduce it by two thirds. The second view is comparing the number of annotations needed by the protocol to the annotations needed when the protocol is applied to human ratings only. For the Chatbot and SummEval domain, leveraging automated metrics results in less data needed (up to 10% for the Chatbot domain, and 5% for the SummEval domain). For the WMT21 domain, only 1% difference is measured. We assume that this are due to the fact that the metrics are not yet of high enough quality to yield the boost needed to have a large impact. Summary. Figure 2 summarizes the main outcomes of this work. The Figure shows the number of annotations (x-axis) in relation to the negative log-KLD score achieved (y-axis). The full human evaluation is set as reference, that is, using 100% of annotations, and a KLD score of 0 (thus, not shown). The Figure shows that on average using the full protocol on real-world metrics yields using 40% of annotations, and achieving a KLD score of 0.08. On the other hand, not using metric ratings in the protocol needs 43% of annotations and achieving a worse KLD score of 0.6. The naive application of the metrics does not need any annotations but yields high KLD scores (1.6 on average). To showcase an upper limit, we also added the KLD divergence for an ideal metric, which we simulated using the Bayesian model, where we use a fixed µ (see Appendix E for details). The ideal metric only needs 38% of annotation, and achieves a KLD score of 0.02. An ideal metric would also achieve a low KLD when applied naively. ## 7 Related Work We here focus on approaches that discuss theorydriven analysis of metrics-based evaluation of TG systems that involve human annotations. Chaganty et al. (2018) propose an approach to combine human and metrics-based evaluation using control variates to reduce biases in automated metrics and save annotation costs. They explore automated scalar metrics in the Summarization and QA domain and find that their approach can lead to marginal reductions of the required human annotations. They conclude that further improvement of automated metrics and exploration of rankingbased evaluation are potential future directions. One interesting take-away from this work is the influence of the quality of the human annotations. Currently, we approximate the oracle preference ratings through human annotations without explicitly modeling uncertainty stemming from annotator disagreement. Wei and Jia (2021) pick up this point and apply a statistical approach to identify a setting in which automated metrics for Machine Translation and Summarization are more precise than human annotations: When the qualitative difference is small and there are only few human annotations. They argue that the reason is that while human annotations are unbiased, they have a high variance. Conversely, automatic metrics tend to have a high bias but low variance. Furthermore, they apply the bias-variance-noise decomposition from Domingos (2000) to analyse sources of errors in evaluation and asses bias levels in automated metrics. 
Our analysis, in comparison, is more fine-grained in terms of categorizing metric errors, and we propose how to combine human and metric evaluation under a budget constraint. Similar to this work, von Däniken et al. (2022) propose a model that captures uncertainties stemming from imperfect metrics and insufficiently sized test sets. With their framework, the required size of a test set that is needed to distinguish a given difference in performance between two systems with a given automated metric can be calculated. Their investigation is limited to the case when scalar metrics are converted to binary metrics, however. Card et al. (2020) also analyse the required data set sizes that enable the detection of significant differences between systems, but they do not account for metric errors explicitly. Hashimoto et al. (2019) propose an evaluation approach that combines human and automated evaluation in a statistical framework to estimate diversity and quality of NLG systems jointly. Their focus is the creation of a novel metric, while our goal is to evaluate existing ones and to combine them with human annotations to obtain robust evaluations. ## 8 Conclusion In this work, we introduced a novel Bayesian model that explicitly handles errors from automated metrics at the sample level. We then proposed an evaluation protocol that leverages this statistical model to reduce the amount of human annotations needed while yielding similar evaluation outcomes. We applied the protocol to three tasks in a case study. Namely, Dialogue Systems, Summarization, and Machine Translation. The results show that the Bayesian model is able to successfully include various types of uncertainty, which leads to more trustworthy applications of automated metrics. When applying the protocol, we achieve similar results as a purely human evaluation with only half the annotations needed. ## Acknowledgments This work was supported by Swiss National Science Foundation within the project "Virtual Kids - Virtuelle Charaktere zur Verbesserung der Qualität von Kindesbefragungen" [10001A_189236/1], and internal funding by the Zurich University of Applied Sciences. ## Limitations Human Ratings as Oracle. In this work, we make the strong assumption that human ratings are equivalent to the oracle. As noted in the introduction, human evaluation is hard to setup and does not always lead to satisfactory agreement scores. However, for SummEval and WMT21 the human ratings are provided by experts, and thus, can be seen as close to oracle ratings. For the Dialogue domain, the ratings are made by crowdsourcing where we applied MACE (Hovy et al., 2013) to get the highest quality ratings. In future work, we will integrate the uncertainty of the human evaluation in the Bayesian model as well, which is not trivial. Pairwise µ. We noted that the for each pair of systems, the mixture matrix is different. As a consequence, the errors made by the metric must be computed for each pair of systems separately, which is more cost-intensive. In future work, we aim to develop methods to transfer the knowledge from one system pair to another. This also highlights one issue of automated metrics, namely, that they are biased towards certain output types, which are exhibited by certain TG systems. Draws are ignored. One issue with preferencebased ratings is the question of how to handle draws on the sample basis. Currently, we use p> and p< to decide if two systems are significantly different. 
However, if we consider the case of $p_> = 0.02$, $p_< = 0.01$, and $p_= = 0.97$, then with enough samples, we will measure a statistically significant difference between the two systems. However, in 97% of cases the outputs are of equal quality. Thus, can we really state that one system is better than the other?

Statistical Significance Decision. To compare the outcomes of the automated evaluation to the human evaluation, we rely on statistical significance testing. For this, we use the standard approaches, which are widely adopted. However, we noted that the significance decisions are rather arbitrary and make it hard to compare two evaluations; especially the interpretation of Insertion Errors is not trivial. The large amounts of additional automated metric ratings result in some pairs being rated as significantly different. However, it is not clear whether this is a mistake or if we were able to distinguish two systems that were not distinguishable due to too little data. The KLD score gives better insights here, as it compares distributions.

Differences in Samples. Currently, we disregard the fact that there are samples which are harder to rate than others. In fact, we treat each sample as being equal in Definition 3. However, the sample difficulty could be leveraged to distinguish different systems from each other. For instance, if two machine translation systems are evaluated only on easy samples, then they might be rated as being of equal quality. However, a test on a harder sample might show the difference in capabilities between the two systems.

Conversion to Preference Ratings. Current automated metrics are built such that they return a scalar value in $\mathbb{R}$ to rate a given pair of input and output. We have to transform these values into preference ratings by looking at the sign of the difference between the ratings of two outputs (see Appendix C). This leads to a few problems. First, there are only a few draws, since metrics rarely return the exact same floating point value for two different outputs. Second, we disregard the magnitude of the scalar value. The magnitude can be used to assess the certainty of the preference of one output against another. In preliminary experiments we tried including a minimal threshold that the difference needs to surpass in order to be regarded as a preference decision. This will have to be explored in more detail in future work.

Current Metric Performance. Since the current metrics are not yet of high enough quality, the impact they have on the protocol is small. This might give the impression that the protocol does not offer any remedy. However, the results show that our Bayesian model is able to rectify the overconfidence of low-performance metrics, and in cases where a metric is of low quality, its impact is reduced.

## References

Jacopo Amidei, Paul Piwek, and Alistair Willis. 2018. Rethinking the agreement in human evaluation tasks. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3318–3329, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Christophe Andrieu, Nando de Freitas, Arnaud Doucet, and Michael I. Jordan. 2003. An introduction to MCMC for machine learning. Machine Learning, 50:5–43.
Anya Belz, Anastasia Shimorina, Shubham Agarwal, and Ehud Reiter. 2021. The ReproGen shared task on reproducibility of human evaluations in NLG: Overview and results. In Proceedings of the 14th International Conference on Natural Language Generation, pages 249–258, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Eli Bingham, Jonathan P.
Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul A. Szerlip, Paul Horsfall, and Noah D. Goodman. 2019. Pyro: Deep universal probabilistic programming. *J. Mach. Learn. Res.*, 20:28:1–28:6. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Dallas Card, Peter Henderson, Urvashi Khandelwal, Robin Jia, Kyle Mahowald, and Dan Jurafsky. 2020. With little power comes great responsibility. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9263–9274, Online. Association for Computational Linguistics. Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. *arXiv* preprint arXiv:2006.14799. Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evalaution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 643–653, Melbourne, Australia. Association for Computational Linguistics. Clint W. Coakley and Mark A. Heise. 1996. Versions of the sign test in the presence of ties. *Biometrics*, 52(4):1242–1251. Jan Deriu, Alvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, and Mark Cieliebak. 2021. Survey on evaluation methods for dialogue systems. *Artificial Intelligence Review*, 54(1):755–810. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Pedro M. Domingos. 2000. A unified bias-variance decomposition and its applications. Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´ Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. *Transactions of the Association for* Computational Linguistics, 9:391–409. Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondˇrej Bojar. 2021. Results of the WMT21 metrics shared task: Evaluating metrics with expert-based human evaluations on TED and news domain. In *Proceedings of the Sixth Conference on Machine Translation*, pages 733–774, Online. Association for Computational Linguistics. Yang Gao, Wei Zhao, and Steffen Eger. 2020. SUPERT: Towards new frontiers in unsupervised evaluation metrics for multi-document summarization. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 1347– 1354, Online. Association for Computational Linguistics. Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. 
In *North American Chapter of the Association for Computational* Linguistics. Matthew D. Hoffman and Andrew Gelman. 2014. The no-u-turn sampler: Adaptively setting path lengths in hamiltonian monte carlo. Journal of Machine Learning Research, 15(47):1593–1623. Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120–1130, Atlanta, Georgia. Association for Computational Linguistics. Lishan Huang, Zheng Ye, Jinghui Qin, Liang Lin, and Xiaodan Liang. 2020. GRADE: Automatic graphenhanced coherence metric for evaluating opendomain dialogue systems. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9230–9240, Online. Association for Computational Linguistics. Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In *International Conference* on Learning Representations. Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics. Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-augmented dialogue generation. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 8460–8478, Dublin, Ireland. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In *Text Summarization Branches Out: Proceedings of the ACL-04 Workshop*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Arle Lommel, Hans Uszkoreit, and Aljoscha Burchardt. 2014. Multidimensional quality metrics (mqm): A framework for declaring and describing translation quality metrics. *Revista Tradumàtica: tecnologies de* la traducció, (12):455–463. Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic Turing test: Learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1116–1126, Vancouver, Canada. Association for Computational Linguistics. Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020. Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4984–4997, Online. Association for Computational Linguistics. Shikib Mehri and Maxine Eskenazi. 2020a. Unsupervised evaluation of interactive dialog with DialoGPT. In *Proceedings of the 21th Annual Meeting of the* Special Interest Group on Discourse and Dialogue, pages 225–235, 1st virtual meeting. Association for Computational Linguistics. Shikib Mehri and Maxine Eskenazi. 2020b. USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 681–707, Online. Association for Computational Linguistics. Nicholas Metropolis and S. 
Ulam. 1949. The monte carlo method. *Journal of the American Statistical* Association, 44(247):335–341. A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. Parlai: A dialog research software platform. arXiv preprint arXiv:1705.06476. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çaglar Gulçehre, and Bing Xiang. 2016. ˘ Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics. Bo Pang, Erik Nijkamp, Wenjuan Han, Linqi Zhou, Yixian Liu, and Kewei Tu. 2020. Towards holistic and automatic evaluation of open-domain dialogue generation. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 3619–3629, Online. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Maxime Peyrard, Teresa Botschen, and Iryna Gurevych. 2017. Learning to score system summaries for better content selection evaluation. In Proceedings of the Workshop on New Frontiers in Summarization, pages 74–84, Copenhagen, Denmark. Association for Computational Linguistics. Du Phan, Neeraj Pradhan, and Martin Jankowiak. 2019. Composable effects for flexible and accelerated probabilistic programming in numpyro. *arXiv preprint* arXiv:1912.11554. Vitou Phy, Yang Zhao, and Akiko Aizawa. 2020. Deconstruct to reconstruct a configurable evaluation metric for open-domain dialogue systems. In *Proceedings of* the 28th International Conference on Computational Linguistics, pages 4164–4178, Barcelona, Spain (Online). International Committee on Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Ananya B. Sai, Akash Kumar Mohankumar, Siddhartha Arora, and Mitesh M. Khapra. 2020. Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining. *Transactions of* the Association for Computational Linguistics, 8:810– 827. Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summarization models. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3246–3256, Hong Kong, China. Association for Computational Linguistics. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Kurt Shuster, Mojtaba Komeili, Leonard Adolphs, Stephen Roller, Arthur Szlam, and Jason Weston. 2022a. Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion. *arXiv preprint* arXiv:2203.13224. Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, et al. 2022b. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. arXiv preprint arXiv:2208.03188. Koustuv Sinha, Prasanna Parthasarathi, Jasmine Wang, Ryan Lowe, William L. Hamilton, and Joelle Pineau. 2020. Learning an unreferenced metric for online dialogue evaluation. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 2430–2441, Online. Association for Computational Linguistics. Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 2021–2030, Online. Association for Computational Linguistics. Kosuke Takahashi, Yoichi Ishibashi, Katsuhito Sudoh, and Satoshi Nakamura. 2021. Multilingual machine translation evaluation metrics fine-tuned on pseudonegative examples for WMT 2021 metrics task. In Proceedings of the Sixth Conference on Machine Translation, pages 1049–1052, Online. Association for Computational Linguistics. Oleg Vasilyev, Vedant Dharnidharka, and John Bohannon. 2020. Fill in the BLANC: Human-free quality estimation of document summaries. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 11–20, Online. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Pius von Däniken, Jan Deriu, Don Tuggener, and Mark Cieliebak. 2022. On the effectiveness of automated metrics for text generation systems. *arXiv preprint* arXiv:2210.13025. Marilyn A. Walker, Diane J. Litman, Candace A. Kamm, and Alicia Abella. 1997. PARADISE: A framework for evaluating spoken dialogue agents. In 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics, pages 271–280, Madrid, Spain. Association for Computational Linguistics. Johnny Tian-Zheng Wei and Robin Jia. 2021. The statistical advantage of automatic nlg metrics at the system level. In *Annual Meeting of the Association for Computational Linguistics*. Jing Xu, Arthur Szlam, and Jason Weston. 2022. Beyond goldfish memory: Long-term open-domain conversation. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 5180–5197, Dublin, Ireland. Association for Computational Linguistics. Yi-Ting Yeh, Maxine Eskenazi, and Shikib Mehri. 2021. 
A comprehensive assessment of dialog evaluation metrics. In *The First Workshop on Evaluations and Assessments of Neural Conversation Systems*, pages 15–33, Online. Association for Computational Linguistics.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. *arXiv preprint arXiv:1904.09675*.
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270–278, Online. Association for Computational Linguistics.

## A Derivations

Dirichlet. We will first explain the usage of Dirichlet distributions in Equations 3 and 4. The Dirichlet distribution of order $K$ is defined for all $K$-dimensional probability vectors $\mathbf{p} = (p_1, \ldots, p_K)$ such that $\sum_{i=1}^{K} p_i = 1$ and $p_i \geq 0$. It has $K$ parameters $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_K)$ and its density is:

$$p(\mathbf{p}\mid\boldsymbol{\alpha})=\frac{1}{B(\boldsymbol{\alpha})}\prod_{i=1}^{K}p_{i}^{\alpha_{i}-1},$$

where $B(\boldsymbol{\alpha})$ is the multivariate Beta function used to normalize the distribution. Note that if all $\alpha_i = 1$ then the density is constant at all points $\mathbf{p}$, meaning it is equivalent to the uniform distribution in that case. Our main interest in the Dirichlet distribution is that it is the conjugate prior of the Multinomial distribution. In Section 3 the counts from the oracle ratings $n_>$, $n_=$, and $n_<$ follow a Multinomial distribution with unknown probabilities $\mathbf{p} = (p_>, p_=, p_<)$, meaning that:

$$P(N_{>}=n_{>},N_{=}=n_{=},N_{<}=n_{<}\mid\mathbf{p})=\frac{(n_{>}+n_{=}+n_{<})!}{n_{>}!\,n_{=}!\,n_{<}!}\,p_{>}^{n_{>}}p_{=}^{n_{=}}p_{<}^{n_{<}}$$

If we assume a Dirichlet prior $\mathbf{p} \sim \mathrm{Dirichlet}(\alpha_>, \alpha_=, \alpha_<)$ then we can compute its posterior:

$$\begin{aligned}p(\mathbf{p}\mid N)&\propto P(N\mid\mathbf{p})\,p(\mathbf{p})\\ &\propto p_{>}^{n_{>}}p_{=}^{n_{=}}p_{<}^{n_{<}}\,p_{>}^{\alpha_{>}-1}p_{=}^{\alpha_{=}-1}p_{<}^{\alpha_{<}-1}\\ &\propto p_{>}^{\alpha_{>}+n_{>}-1}p_{=}^{\alpha_{=}+n_{=}-1}p_{<}^{\alpha_{<}+n_{<}-1}\end{aligned}$$

We left out the normalization constants in this derivation. We can see that on the final line we arrive at the density of an updated Dirichlet distribution $\mathrm{Dirichlet}(\alpha_> + n_>, \alpha_= + n_=, \alpha_< + n_<)$. Setting $\alpha_i = 1$ we get our result in Equation 3. We can apply the same principle to the columns of $\boldsymbol{\mu}$. The first column $\mu_{\cdot>} = (\mu_{>>}, \mu_{=>}, \mu_{<>})^T$ denotes the conditional probabilities of getting a specific outcome from the metric conditioned on the oracle rating being $>$. The associated confusion counts $n_{>>}$, $n_{=>}$, and $n_{<>}$ again follow a Trinomial distribution with outcome probabilities $\mu_{\cdot>}$. If we again assume a uniform prior for $\mu_{\cdot>}$ then we can derive the posteriors in Equation 4.

Mixture. In Sections 3.2 and 3.3 we use the fact that the probabilities associated with the counts of metric ratings are $\hat{\mathbf{p}} = \boldsymbol{\mu}\mathbf{p}$. We know that $\hat{p}_> = P(\mathcal{M}(i_j, o_j^a, o_j^b) = {>})$. Using the law of total probability:

$$\hat{p}_{>}=\sum_{c\in\{>,=,<\}}P(\mathcal{M}(i_{j},o_{j}^{a},o_{j}^{b})={>}\mid\Omega(i_{j},o_{j}^{a},o_{j}^{b})=c)\;P(\Omega(i_{j},o_{j}^{a},o_{j}^{b})=c)$$

We note that $P(\Omega(i_j, o_j^a, o_j^b) = c) = p_c$, and therefore $\hat{p}_> = \sum_{c\in\{>,=,<\}} \mu_{>c}\,p_c$, and analogously for $\hat{p}_=$ and $\hat{p}_<$. This leads us to our original statement $\hat{\mathbf{p}} = \boldsymbol{\mu}\mathbf{p}$.
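For concreteness, the posterior update and the mixture identity above can be checked numerically with the minimal sketch below; the counts and the µ entries are illustrative values we made up, not results from the paper.

```python
import numpy as np

# Toy oracle counts (n_>, n_=, n_<) observed on annotated pairs.
oracle_counts = np.array([30, 55, 15])

# Uniform Dirichlet prior (all alpha_i = 1); as derived above, the posterior
# parameters are simply alpha + counts.
alpha_prior = np.ones(3)
alpha_post = alpha_prior + oracle_counts

# Draw posterior samples of p = (p_>, p_=, p_<) and estimate, e.g., P(p_> > p_<).
rng = np.random.default_rng(0)
p_samples = rng.dirichlet(alpha_post, size=10_000)
print("P(p_> > p_<) ~", np.mean(p_samples[:, 0] > p_samples[:, 2]))

# Mixture identity: with a column-stochastic confusion matrix mu (each column is
# the conditional distribution of the metric rating given the oracle rating),
# the metric-rating probabilities are p_hat = mu @ p.
mu = np.array([[0.7, 0.2, 0.1],
               [0.1, 0.6, 0.1],
               [0.2, 0.2, 0.8]])   # illustrative values only
p = np.array([0.3, 0.5, 0.2])
p_hat = mu @ p
print("p_hat =", p_hat, "sums to", p_hat.sum())
```

Using Annotations multiple times.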
In Algorithm 1 it can be unclear which subsets of annotations should be used for the various counts. In general, the test set of inputs I can be split into three subsets: IM,A, the samples for which we have paired ratings from both the metric and humans, IM, the samples for which we have only ratings from the automated metric, and IA, the samples for which we only have human ratings. It is relatively obvious that we can use IM to count m>, m=, m<, IA for n>, n=, n<, and IM,A for the confusion counts ncc′. The question is whether it is sound to use the ratings from IM,A to augment nc and mc. In general, it should not be an issue to use the human ratings in IM,A as additional counts for n>, n=, n<. But we must not use the metric ratings to get additional counts m>, m=, m<. This means that, in principal, IA could be empty. ## B Markov Chain Monte Carlo (Mcmc) Sampling Markov Chain Monte Carlo (MCMC) methods are often used in Bayesian modelling when there is no analytic closed form solution for the resulting posterior. The main idea is that expected values of functions of the posterior can be reasonably approximated by averaging over samples drawn from it (Metropolis and Ulam, 1949). Samples are generated sequentially, and the next sample is usually generated by first modifying the current sample randomly and then either accepting or rejecting it based on its likelihood. We refer the interested reader to Andrieu et al. (2003) for an introduction. We use the *Numpyro* (Bingham et al., 2019; Phan et al., 2019) library to implement the framework laid out in Section 3. We use the built-in NoU-Turn (NUTS) Sampler (Hoffman and Gelman, 2014). When running the decision function laid out in Algorithm 1, we run 5 chains in parallel. We use a warm-up period of 2000 samples per chain, which are discarded, and draw 10000 samples per chain to keep and compute the difference in win rates. ## C Case Study Details For the case studies, we require two types of ratings: preference ratings made by humans to simulate the oracle Ω, and metric ratings for the error-prone ratings M. We collect this data for three types of domains: Conversational Dialogue Systems, Automated Text Summarization, and Machine Translation. Since most metrics return a scalar value, we need to transform them into a preference rating, which is done as follows: Definition 4 (Scalar Metric). We call real valued functions of inputs and outputs *scalar metrics*: Ms : *I × O →* R. A preference metric can be constructed from a scalar metric as follows: Definition 5. The *derived comparison metric* M of a given scalar metric Ms *is defined as* $$\mathcal{M}(i,o_{1},o_{2})=\begin{cases}>&\mathcal{M}_{s}(i,o_{1})>\mathcal{M}_{s}(i,o_{2})\\ =&\text{otherwise}\\ <&\mathcal{M}_{s}(i,o_{1})<\mathcal{M}_{s}(i,o_{2})\end{cases}$$ ## C.1 Dialog System Data Collection For the Dialog domain, we used the ParlAI framework (Miller et al., 2017) to generate the outputs of systems. For this, we selected 5 state-of-the-art dialog systems, and used ParlAI to generate response for a static context. The Dialogue Systems are: - **Blenderbot 1.0 - 400distill (BL400distill)**. BlenderBot 1.0 (Shuster et al., 2022b) is a transformer-based encoder-decoder bot trained on the Blended Skill Task (BST) data (Smith et al., 2020). - **Blenderbot 2.0 - 3B Params (BL2-3B)**. The BlenderBot 2.0 extends Blenderbot 1.0 with internet access (Komeili et al., 2022) and a long-term memory (Xu et al., 2022). - **DialoGPT**. 
DialoGPT (Zhang et al., 2020) is a decoder-only dialogue system that fine-tunes GPT-2 (Radford et al.) on the Reddit dataset proposed by the DialoGPT authors.
- **PolyEncoder**. The PolyEncoder (Humeau et al., 2020) is a retrieval-based dialogue system, which selects the most suitable response from a set of candidates.
- **SeekerDial3B**. SeekerDial (Shuster et al., 2022a) improves on the internet search proposed in BlenderBot 2.0.

We used ParlAI to generate the responses of each of the dialogue systems for 1000 static contexts from the Blended Skill Task (BST) test set. From this set, we selected 50 contexts, and generated all pairwise outputs between all 5 dialogue systems and the human reference. That is, for each context, there are 15 pairs of outputs to be rated. We let workers on Amazon Mechanical Turk (https://www.mturk.com) perform a preference rating. That is, for each pair of outputs, the workers decided which output is more appropriate. Figure 3 shows the annotation tool. Each sample was annotated by three workers. Each worker is paid 15 cents per annotation, and at a rate of 1.5 annotations per minute on average, they achieve a wage of $12 per hour. We used workers with the Master status and restricted their geographic location to English-speaking countries (USA, UK, Canada, Ireland, and Australia). Figure 4 shows the instructions given to the annotators. The annotations are ratings on the overall adequacy of the utterances. Since each sample is annotated by three different workers, we aggregate the ratings using the MACE (Hovy et al., 2013) software, which computes a trustworthiness score for each annotator and generates a weighted average to get the final label. Note that this annotation scheme directly yields preference ratings according to our framework.

Figure 4: Screenshot of the Instruction of the Dialogue annotation.

In order to generate the metric ratings, we used the DialEval framework by Yeh et al. (2021), which integrates a large pool of metrics. We selected five metrics that were easy to set up and achieved decent correlations with human judgments in the evaluation by Yeh et al. (2021). Since the metrics return scalar values for each sample, we create preference ratings as suggested in Definition 5. Since the scalar metrics yield real-valued scores, there are almost no cases where the derived preference rating yields a draw.

## C.2 Summarization Data Collection

For the Summarization domain, we use the data provided by the SummEval framework (Fabbri et al., 2021). It contains data from the Dailymail/CNN dataset (Nallapati et al., 2016), which comprises a test set of 11k samples. The SummEval framework contains the outputs of 23 summarization systems (for a detailed description of these systems, we refer the reader to Section 3.2 of Fabbri et al. (2021)). For 16 of the 23 summarization systems, the authors let 100 generated outputs be rated by three experts on four characteristics: fluency, consistency, coherence, and relevance. Since these ratings are on a Likert scale, we transformed them into preference ratings by averaging the three ratings per sample and applying the transformation proposed in Definition 5. We chose 7 automated metrics based on their popularity and ease of setup. We applied the SummEval framework to generate the automated ratings for each of the 16 summarization systems on the full test set.
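As a concrete illustration, the scalar-to-preference conversion of Definition 5 can be sketched as follows; `scalar_metric` stands for any of the scalar metrics above and is a hypothetical callable, not a specific API.

```python
def to_preference(scalar_metric, i, o1, o2):
    """Derive a preference rating ('>', '=', '<') from a scalar metric,
    following Definition 5: compare the metric scores of the two outputs."""
    s1, s2 = scalar_metric(i, o1), scalar_metric(i, o2)
    if s1 > s2:
        return ">"
    if s1 < s2:
        return "<"
    return "="   # exact ties are rare for real-valued metrics

# Hypothetical usage with any scalar metric wrapper, e.g. a ROUGE-L scorer:
# rating = to_preference(rouge_l_score, source_doc, summary_a, summary_b)
```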
Analogous to above, we converted the scalar ratings to pairwise ratings by applying definition 5. ## Machine Translation Data Collection C.3 For the Machine Translation domain, we used the WMT-21 (Freitag et al., 2021 ) metrics task data. For space limitations, we used the EN → DE section of the data only. The WMT-21 dataset consists of 15 different metrics for 8 machine translation systems and 3 human references. The human annotations consist of 500 MQM ratings (Lommel et al., 2014) done by expert translators. To create preference ratings, we computed the average score of a sample, and compared the score of two outputs of two different systems for the same input by applying definition 5. The WMT-21 dataset already contains the ratings of the automated metrics for 1000 samples, out | Metric | Corr. | Inv. | Omi. | Ins. | KLD | |-----------------------------|---------|--------|--------|--------|-------| | Chatbot Domain | | | | | | | deb | 0.60 | 0.00 | 0.07 | 0.33 | 0.39 | | grade | 0.53 | 0.07 | 0.20 | 0.20 | 0.53 | | holistic | 0.27 | 0.20 | 0.13 | 0.40 | 0.57 | | maude | 0.40 | 0.13 | 0.13 | 0.34 | 0.42 | | usl | 0.53 | 0.07 | 0.00 | 0.40 | 0.40 | | Summeval Coherence Domain | | | | | | | BertScore | 0.58 | 0.22 | 0.00 | 0.20 | 0.35 | | BLANC | 0.45 | 0.29 | 0.02 | 0.24 | 0.33 | | CIDEr | 0.38 | 0.36 | 0.02 | 0.24 | 0.43 | | ROUGE-L | 0.45 | 0.33 | 0.00 | 0.22 | 0.34 | | S3 | 0.53 | 0.24 | 0.01 | 0.22 | 0.35 | | SummaQA | 0.53 | 0.19 | 0.07 | 0.21 | 0.31 | | SUPERT | 0.45 | 0.25 | 0.06 | 0.24 | 0.38 | | Summeval Consistency Domain | | | | | | | BertScore | 0.49 | 0.16 | 0.00 | 0.35 | 1.92 | | BLANC | 0.44 | 0.15 | 0.02 | 0.39 | 1.74 | | CIDEr | 0.30 | 0.30 | 0.02 | 0.38 | 2.17 | | ROUGE-L | 0.35 | 0.25 | 0.02 | 0.38 | 1.98 | | S3 | 0.46 | 0.17 | 0.01 | 0.37 | 1.90 | | SummaQA | 0.55 | 0.09 | 0.03 | 0.33 | 1.81 | | SUPERT | 0.46 | 0.13 | 0.04 | 0.38 | 1.76 | | Summeval Fluency Domain | | | | | | | BertScore | 0.46 | 0.14 | 0.00 | 0.40 | 1.03 | | BLANC | 0.40 | 0.16 | 0.01 | 0.43 | 0.98 | | CIDEr | 0.34 | 0.21 | 0.02 | 0.43 | 1.08 | | ROUGE-L | 0.41 | 0.18 | 0.00 | 0.42 | 0.99 | | S3 | 0.42 | 0.16 | 0.01 | 0.42 | 1.03 | | SummaQA | 0.48 | 0.08 | 0.05 | 0.39 | 0.97 | | SUPERT | 0.43 | 0.12 | 0.03 | 0.42 | 1.06 | | Summeval Relevance Domain | | | | | | | BertScore | 0.64 | 0.13 | 0.01 | 0.22 | 0.26 | | BLANC | 0.51 | 0.23 | 0.02 | 0.25 | 0.25 | | CIDEr | 0.43 | 0.29 | 0.03 | 0.25 | 0.35 | | ROUGE-L | 0.52 | 0.24 | 0.01 | 0.23 | 0.25 | | S3 | 0.62 | 0.15 | 0.01 | 0.23 | 0.26 | | SummaQA | 0.61 | 0.11 | 0.07 | 0.22 | 0.23 | | SUPERT | 0.51 | 0.20 | 0.05 | 0.24 | 0.35 | | Ideal-M | 0.75 | 0.00 | 0.00 | 0.25 | 0.02 | | WMT21 Domain | | | | | | | BleuRT | 0.47 | 0.02 | 0.13 | 0.38 | 0.46 | | C-SPEC | 0.78 | 0.00 | 0.07 | 0.15 | 0.53 | | COMET | 0.40 | 0.09 | 0.13 | 0.38 | 0.65 | | BLEU | 0.44 | 0.16 | 0.07 | 0.33 | 0.43 | | Metric | Corr. | Inv. | Omi. | Ins. | KLD | Ann. 
| |-----------------------------|---------|--------|--------|--------|-------|--------| | Chatbot Domain | | | | | | | | Human | 0.93 | 0.00 | 0.00 | 0.07 | 0.05 | 0.59 | | DEB | 0.93 | 0.00 | 0.00 | 0.07 | 0.06 | 0.49 | | GRADE | 0.87 | 0.00 | 0.00 | 0.13 | 0.05 | 0.53 | | HOLISTIC | 0.87 | 0.00 | 0.00 | 0.13 | 0.05 | 0.52 | | MAUDE | 0.87 | 0.00 | 0.00 | 0.13 | 0.05 | 0.53 | | USL-H | 0.87 | 0.00 | 0.00 | 0.13 | 0.05 | 0.52 | | Summeval Coherence Domain | | | | | | | | Human | 0.97 | 0.00 | 0.00 | 0.03 | 0.04 | 0.42 | | BertScore | 0.96 | 0.00 | 0.00 | 0.04 | 0.04 | 0.38 | | BLANC | 0.96 | 0.00 | 0.00 | 0.04 | 0.03 | 0.40 | | CIDEr | 0.95 | 0.00 | 0.00 | 0.05 | 0.04 | 0.39 | | ROUGE-L | 0.96 | 0.00 | 0.00 | 0.04 | 0.03 | 0.40 | | S3 | 0.97 | 0.00 | 0.00 | 0.03 | 0.03 | 0.41 | | SummaQA | 0.94 | 0.00 | 0.01 | 0.05 | 0.04 | 0.39 | | SUPERT | 0.94 | 0.00 | 0.00 | 0.06 | 0.04 | 0.38 | | Summeval Consistency Domain | | | | | | | | Human | 0.93 | 0.00 | 0.00 | 0.07 | 0.02 | 0.53 | | BertScore | 0.88 | 0.00 | 0.00 | 0.12 | 0.02 | 0.45 | | BLANC | 0.88 | 0.00 | 0.00 | 0.12 | 0.02 | 0.48 | | CIDEr | 0.87 | 0.00 | 0.01 | 0.13 | 0.02 | 0.46 | | ROUGE-L | 0.88 | 0.00 | 0.00 | 0.12 | 0.02 | 0.46 | | S3 | 0.88 | 0.00 | 0.00 | 0.12 | 0.02 | 0.46 | | SummaQA | 0.90 | 0.00 | 0.00 | 0.10 | 0.02 | 0.47 | | SUPERT | 0.88 | 0.00 | 0.00 | 0.12 | 0.02 | 0.46 | | Summeval Fluency Domain | | | | | | | | Human | 0.95 | 0.00 | 0.00 | 0.05 | 0.03 | 0.60 | | BertScore | 0.91 | 0.00 | 0.01 | 0.08 | 0.04 | 0.56 | | BLANC | 0.91 | 0.00 | 0.01 | 0.08 | 0.02 | 0.58 | | CIDEr | 0.93 | 0.00 | 0.00 | 0.07 | 0.04 | 0.55 | | ROUGE-L | 0.93 | 0.00 | 0.00 | 0.07 | 0.04 | 0.57 | | S3 | 0.92 | 0.00 | 0.01 | 0.08 | 0.04 | 0.57 | | SummaQA | 0.92 | 0.00 | 0.01 | 0.08 | 0.03 | 0.58 | | SUPERT | 0.88 | 0.00 | 0.01 | 0.11 | 0.04 | 0.54 | | Summeval Relevance Domain | | | | | | | | Human | 0.95 | 0.00 | 0.00 | 0.05 | 0.07 | 0.43 | | BertScore | 0.95 | 0.00 | 0.01 | 0.04 | 0.06 | 0.40 | | BLANC | 0.90 | 0.00 | 0.03 | 0.07 | 0.05 | 0.42 | | CIDEr | 0.94 | 0.00 | 0.02 | 0.04 | 0.06 | 0.42 | | ROUGE-L | 0.94 | 0.00 | 0.01 | 0.05 | 0.05 | 0.41 | | S3 | 0.95 | 0.00 | 0.01 | 0.04 | 0.06 | 0.41 | | SummaQA | 0.93 | 0.00 | 0.01 | 0.06 | 0.05 | 0.41 | | SUPERT | 0.93 | 0.00 | 0.02 | 0.05 | 0.06 | 0.39 | | Ideal-M | 0.85 | 0.00 | 0.00 | 0.15 | 0.02 | 0.38 | | WMT21 Domain | | | | | | | | Human | 0.65 | 0.00 | 0.00 | 0.35 | 0.06 | 0.34 | | BleuRT | 0.65 | 0.00 | 0.00 | 0.35 | 0.06 | 0.34 | | C-SPEC | 0.65 | 0.00 | 0.00 | 0.35 | 0.06 | 0.33 | | COMET | 0.65 | 0.00 | 0.00 | 0.35 | 0.06 | 0.34 | | BLEU | 0.65 | 0.00 | 0.00 | 0.35 | 0.05 | 0.34 | of which 500 overlap with the samples annotated by humans. Thus, we generated pairwise ratings by applying definition 5. ## D Full Results Tables 4 and 5 show the results for all the metrics over all tasks and features. Figure 5 shows the pairwise errors when applying our protocol. ## E Ideal Metric Since the real-world metrics that we use in this work are not yet of very high quality, we also generated synthetic data for the SummEval-Relevance domain. That is, we simulated a metric using a fixed µsim = 0.8 0.25 0.1 0.1 0.5 0.1 0.1 0.25 0.8 . For this, we used the ![16_image_0.png](16_image_0.png) human win-rates p to randomly assign labels for each pair of system for the 11k samples. That is, for each sample, we randomly chose rating r p. These sampled labels are treated as the groundtruth, which has the same distribution as the human labels. 
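A minimal sketch of this simulation is given below. The entries of µ_sim follow the fixed matrix given above (columns conditioned on the oracle rating, as in Appendix A); the win-rate vector in the usage line is an illustrative placeholder, and the exact sampling details are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
labels = [">", "=", "<"]

# Fixed confusion matrix of the simulated metric; column c is the distribution
# of the metric rating given that the oracle rating is labels[c].
mu_sim = np.array([[0.8, 0.25, 0.1],
                   [0.1, 0.5,  0.1],
                   [0.1, 0.25, 0.8]])

def simulate_pair(p, n_samples=11_000):
    """Sample oracle ratings r ~ p for one system pair, then corrupt each
    rating through mu_sim to obtain the ratings of the simulated metric."""
    oracle = rng.choice(3, size=n_samples, p=p)
    metric = [rng.choice(3, p=mu_sim[:, r]) for r in oracle]
    return [labels[r] for r in oracle], [labels[m] for m in metric]

# p would be the human win-rate vector of the system pair, e.g.:
oracle_ratings, metric_ratings = simulate_pair(p=np.array([0.35, 0.40, 0.25]))
```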
Then we used pˆ = µsimp to generate corrupted labels, which correspond to metric ratings. Since µsim makes less mistakes than the existing metrics, and it makes no Omission or Inversion Errors. We added the ideal metric to the full results in Table 4 and 5. It also achieves a very good KLD score when applied naively. However, such a metric is currently out of scope and this only serves to illustrate an upper bound for automated metrics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Additional unnumbered section after Section 8 Conclusion ✗ A2. Did you discuss any potential risks of your work? We provide a framework to correct for errors made by other automated systems. The application scope is extremely limited, as such we expect very low potential for malicious use compared to other works. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 Introduction (paragraph Contributions) ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5, Appendix C ✓ B1. Did you cite the creators of artifacts you used? Section 5, Appendix C ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? All 3rd party artifacts are publicly accessible and we use them as intended. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Our use of 3rd artifacts should obviously stay within their intended use (we apply them exactly as in the work in which they were originally proposed). The core of our code does not depend on 3rd party data, only the scripts for reproducing the results we presented. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We use existing datasets as is and assume that this check was performed by the original authors. While working with the data, we did not notice any anonymity problems. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 5, Appendix C ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5, Appendix C The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** Sections 5, 6 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Our statistical model has at most 15 parameters that are explicitly described in the paper. 
Compared to Deep Learning models the computation needed for our work is negligible. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Sections 3 and 4 describe our entire model. The only hyperparameters we use are in the selected prior distributions, which are fixed for all experiments. The hyperparameters for the evaluation protocol are described in Section 5 and Appendix C. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5, 6 ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We use everything out-of-the-box with default parameters. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 5, Appendix C ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix C ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix C ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? The data was mainly generated by trained systems. Human data used was generated in the context of other works. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Our institution provides an ethics self-checklist that was consulted. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? We used Amazon Mechanical Turk, which we restricted to workers from English-Speaking countries: USA, UK, Ireland, Canada, and Australia. We do not have any other demographic data.
he-etal-2023-peer
{PEER}: Pre-training {ELECTRA} Extended by Ranking
https://aclanthology.org/2023.findings-acl.405
The BERT model and its variants have made great achievements in many downstream natural language processing tasks. The achievements of these models, however, demand highly expensive pre-training computation cost. To address this pre-training efficiency issue, the ELECTRA model is proposed to use a discriminator to perform replaced token detection (RTD) task, that is, to classify whether each input token is original or replaced by a generator. The RTD task performed by the ELECTRA accelerates pre-training so substantially, such that it is very challenging to further improve the pre-training efficiency established by the ELECTRA by using or adding other pre-training tasks, as the recent comprehensive study of Bajaj et al. (2022) summarizes. To further advance this pre-training efficiency frontier, in this paper we propose to extend the RTD task into a task of ranking input tokens according to K different quality levels. Essentially, we generalize the binary classifier in the ELECTRA into a K-level ranker to undertake a more precise task with negligible additional computation cost. Our extensive experiments show that our proposed method is able to outperform the state-of-the-art pre-training efficient models including ELECTRA in downstream GLUE tasks given the same computation cost.
# Peer: Pre-Training Electra Extended By Ranking Ru He, Wei Wang, Songfang Huang, Fei Huang Alibaba Group {ru.he, hebian.ww, songfang.hsf, f.huang}@alibaba-inc.com ## Abstract The BERT model and its variants have made great achievements in many downstream natural language processing tasks. The achievements of these models, however, demand highly expensive pre-training computation cost. To address this pre-training efficiency issue, the ELECTRA model is proposed to use a discriminator to perform replaced token detection (RTD) task, that is, to classify whether each input token is original or replaced by a generator. The RTD task performed by the ELECTRA accelerates pre-training so substantially, such that it is very challenging to further improve the pre-training efficiency established by the ELECTRA by using or adding other pre-training tasks, as the recent comprehensive study of Bajaj et al. (2022) summarizes. To further advance this pre-training efficiency frontier, in this paper we propose to extend the RTD task into a task of ranking input tokens according to K different quality levels. Essentially, we generalize the binary classifier in the ELECTRA into a K-level ranker to undertake a more precise task with negligible additional computation cost. Our extensive experiments show that our proposed method is able to outperform the state-of-the-art pre-training efficient models including ELECTRA in downstream GLUE tasks given the same computation cost. ## 1 Introduction Language model pre-training has made great achievements in many natural language processing (NLP) downstream tasks, by designing and using effective pre-training tasks. A milestone is the BERT model, which conducts a masked language model (MLM) task by randomly masking a proportion (typically 15%) of tokens in the input sentence and then recover the original sentence. Since the success of the BERT, many variant models (Liu et al., 2019; Joshi et al., 2019; Wang et al., 2019; Yang et al., 2019; Dong et al., 2019; Wu et al., 2020) have been proposed to further improve the performance of the BERT by refining or adding pre-training tasks. One common issue of these MLM-based models, however, is the pre-training efficiency, because they all need highly expensive pre-training computation cost to achieve good performance. To address this issue, the ELECTRA model is proposed by Clark et al. (2020b). The ELECTRA uses an auxiliary generator network to provide plausible tokens to replace a proportion (typically 15%) of the original tokens according to the input context, and then utilizes a main discriminator network to perform replaced token detection (RTD) task, that is, to classify whether each token is original or replaced by the generator. In order to prevent the generator from producing replaced tokens overchallenging for the training of discriminator, the ELECTRA make the generator relative weaker than the discriminator by decreasing the hidden size of the generator. After its pre-training, the generator is discarded and the discriminator is further finetuned for downstream NLP tasks. The ELECTRA has shown impressive advantages over MLM-based models in various downstream tasks under similar computation cost, especially when a model size is small. After the success of the ELECTRA, researchers have proposed quite a few models each of which has an auxiliary generator network. 
Because the RTD task performed by the ELECTRA has accelerated pre-training so substantially, however, it is very challenging to advance the efficiency frontier established by the ELECTRA, by using or adding other pre-training tasks, as the recent comprehensive study of Bajaj et al. (2022) summarizes. Thus, to further improve the pre-training efficiency, we propose to extend the RTD task in the ELECTRA into a token quality ranking (TQR) task, a task of ranking input tokens according to K different quality levels. Besides determining whether each input token is replaced by a generator or not, the 6475 TQR task also needs to distinguish replaced tokens by ranking them according to their replacement quality. We call our method PEER, Pre-training ELECTRA Extended by Ranking. Please refer to Figure 1 for demonstration. Our proposal is based on the key observation that the quality of replaced tokens are not even. While some replaced tokens fit the context nearly as well as the corresponding original tokens, others do not. Thus, our PEER generalizes the binary classifier in the ELECTRA into a K-level ranker to perform a more precise task. We design a scheme capable of retrieving rank labels for a majority of replaced tokens from the relative weak generator, which serves as the basis for the TQR task. The extension from the ELECTRA to the PEER also adds negligible computation cost, because the TQR task largely re-uses the computation already performed by the original ELECTRA. Additionally, our PEER adopts partial transformer-layer sharing technique between generator and ranker to further reduce computation cost in our method, as its advantage has been demonstrated in the TEAMS model (Shen et al., 2021), a recent model proposed to improve the ELECTRA. Our extensive experiments in small and base scale models show that the PEER is able to outperform both the ELECTRA and the TEAMS in downstream GLUE tasks using the same or less computation cost. ## 2 Related Work As introduced in Section 1, since ELECTRA (Clark et al., 2020b) greatly boosts the pre-training efficiency, a few models have been proposed in order to further advance this pre-training efficiency frontier. The Electric model is proposed by Clark et al. (2020a) as an energy-based model to perform the cloze task (Taylor, 1953) using noise-contrastive estimation (Gutmann and Hyvärinen, 2012). It is particularly effective at producing likelihood scores for text but slightly under-performs ELECTRA on the GLUE tasks. The MC-BERT model is proposed by Xu et al. (2020) to replace the RTD binary classification task in ELECTRA with a multi-choice cloze test with a reject option (which is essentially a multi-class classification task). The MC-BERT consists of a meta controller network and a generator network. The meta controller corrupts the original input sentence by replacing a proportion of tokens with sampled tokens, just as ELECTRA's generator does. Meanwhile, the meta controller also generates a set of k candidate tokens for each token in the input sentence. The generator uses the corrupted sentence as the input and learns to correct each token by choosing the correct answer among its k candidates. Xu et al. (2020) empirically show that the overall performance of the MC-BERT is similar to that of the ELECTRA in GLUE tasks, since the MC-BERT outperforms the ELECTRA in GLUE semantic tasks but is worse than the ELECTRA in the GLUE syntactic task CoLA. 
COCO-LM (Meng et al., 2021) is proposed to improve ELECTRA by using two new pre-training tasks called corrective language modeling (CLM) task and sequence contrastive learning (SCL) task. While ELECTRA's main network (discriminator) conducts only RTD task for each token position, COCO-LM's main network undertakes the CLM task by jointly performing both RTD task and MLM task for each token position in the corrupted input. Additionally, COCO-LM's main network also performs the SCL task to find a pair of the MLM replaced sentence and the cropped sentence originated from the same source sentence among all other sentences in the same training batch. The DeBERTaV3 (He et al., 2021) is proposed to combine both the advantages of the DeBERTa model (He et al., 2020) and those of the ELECTRA. The DeBERTa (He et al., 2020) introduces two novel mechanisms to improve the effectiveness of the MLM task: disentangled attention and an enhanced mask decoder. The disentangled attention computes the attention weights among tokens using disentangled matrices on two separate vectors (content vector and relative position vector) of each token, while an enhanced mask decoder includes absolute positions in the decoding layer to predict the masked tokens. The DeBERTaV3 keeps these mechanisms but replaces the MLM task (used in the DeBERTa) with ELECTRA's RTD task, and shows that the new combination outperforms both the original DeBERTa and the ELECTRA. Additionally, the DeBERTaV3 introduces gradientdisentangled embedding sharing method as a better alternative to the vanilla token embedding sharing used in the ELECTRA. The SAS (self-augmentation strategy) is proposed by Xu et al. (2021) in order to improve ELECTRA's pre-training efficiency from the perspective of data augmentation. The SAS uses a single network to jointly conduct MLM and RTD tasks ![2_image_0.png](2_image_0.png) in order to reduce computation cost and regularize the model parameters for training balance. Essentially, the generator and the discriminator share all their transformer layers in the SAS, and only two separate light-weight heads (MLM and RTD heads) are built on top of the common heavy-weight transformer layers. The MLM head also samples one token in each selected position in order to generate the corrupted input used for the next epoch of the pre-training. The SAS is empirically shown by Xu et al. (2021) to outperform the ELECTRA in small models in GLUE tasks given the same computation cost, but such an advantage vanishes in larger models. The TEAMS is proposed by Shen et al. (2021) to improve the ELECTRA by adding a multi-word selection (MWS) task along with the original RTD task. Similar to the MC-BERT, the MWS task, which is a multi-choice cloze test, is conducted to choose one correct answer from a candidate set of tokens provided from the generator. Different from the MC-BERT, however, the candidate set in the TEAMS does not contain a reject option, since the MWS task is only performed at the masked positions (instead of all positions). Besides adding the MWS task, the TEAMS introduces two refinements to model structure. One is to share bottom transformer layers of the generator and the discriminator, the other is to use separate top transformer layers for RTD head and MWS head. Both refinements have been empirically shown to be able to further improve the performance of the TEAMS. Recently, Bajaj et al. 
(2022) conduct a comprehensive empirical study of ELECTRA-style pre-training techniques, and propose a corresponding pre-training recipe for the Model-generated dEnoising TRaining Objective (METRO). Their pre-training recipe incorporates a set of techniques to improve the efficiency and stability of large-scale model pre-training, such as the ZeRO optimizer (Rajbhandari et al., 2020), scaled initialization techniques, and customized Fused Operations in mixed-precision training. In terms of pre-training tasks, however, the empirical study by Bajaj et al. (2022) shows that many previously proposed tasks, such as the multi-choice cloze test (Xu et al., 2020) and the CLM and SCL tasks (Meng et al., 2021), do not provide much improvement on top of the RTD task in GLUE and SQuAD tasks.

## 3 Method

In this section, we describe our PEER method, which extends the binary discriminator of the ELECTRA into a ranker. Our PEER method jointly trains two neural networks, an auxiliary generator network $G$ and a main ranker network $R$. Each network is mainly a Transformer encoder (Vaswani et al., 2017), which transforms an input token sequence $\mathbf{x} = (x_1, x_2, \cdots, x_n)$ into a sequence of contextualized representation vectors $h(\mathbf{x}) = (h(\mathbf{x})_1, h(\mathbf{x})_2, \cdots, h(\mathbf{x})_n)$.

## 3.1 Generator in PEER

The generator $G$ in the PEER works exactly the same as the generator in the ELECTRA. It first randomly selects a proportion (typically 15%) of position indexes $\{1, \cdots, n\}$ to produce a masked position set $\mathcal{M}$. It then generates a masked token sequence $\mathbf{x}^M$ by replacing $x_i$ in $\mathbf{x}$ with a special mask token [MASK] for each $i \in \mathcal{M}$. Afterwards, the generator $G$ transforms the input $\mathbf{x}^M$ into $h_G(\mathbf{x}^M)$ through transformer layers. For position $i$, the token generating probability of any token $x_v$ given the context $\mathbf{x}^M$ is produced from a softmax function as follows:

$$p_{G}^{(i)}(x_{v}|\mathbf{x}^{M})=\frac{\exp\{e(x_{v})^{T}h_{G}(\mathbf{x}^{M})_{i}\}}{\sum_{x^{\prime}\in V}\exp\{e(x^{\prime})^{T}h_{G}(\mathbf{x}^{M})_{i}\}},\tag{1}$$

where $e(x_v)$ is the embedding of token $x_v$, and $V$ is the vocabulary. The inner product $e(x_v)^{T}h_G(\mathbf{x}^M)_i$ in E.q. (1) is essentially a logit of token $x_v$ at position $i$ given the context $\mathbf{x}^M$, denoted as

$$\operatorname{logit}^{(i)}(x_{v}|\mathbf{x}^{M}):=e(x_{v})^{T}h_{G}(\mathbf{x}^{M})_{i},\tag{2}$$

which will also be used in our ranker. The loss of the MLM task $\mathcal{L}_{MLM}(\mathbf{x};\theta_G)$ is a cross entropy loss (i.e., negative log likelihood):

$$\mathcal{L}_{MLM}(\mathbf{x};\theta_{G})=-\sum_{i\in\mathcal{M}}\log p_{G}^{(i)}(x_{i}|\mathbf{x}^{M}).$$

For each position $i$ in $\mathcal{M}$, the generator also samples one token $\hat{x}_i$ from the token generating probability $p_G^{(i)}(\cdot|\mathbf{x}^M)$, and then replaces the original token $x_i$ with the sampled $\hat{x}_i$ to produce a corrupted token sequence $\mathbf{x}^C$.

## 3.2 Ranker in PEER

Given the corrupted token sequence $\mathbf{x}^C$, the $K$-level ranker performs the token quality ranking (TQR) task, that is, it assigns each token in $\mathbf{x}^C$ a rank value $r \in \{1, 2, \ldots, K\}$. Assuming that the rank label $R_i$ at position $i$ of the corrupted token sequence $\mathbf{x}^C$ is given, for rank value $r \in \{1, 2, \ldots, K-1\}$, the probability that $R_i \leq r$ is given by:

$$P(R_{i}\leq r|\mathbf{x}^{C})=\sigma(-w^{T}h_{R}(\mathbf{x}^{C})_{i}+\xi_{r}),\tag{3}$$

where $\sigma$ is a sigmoid function, $h_R(\mathbf{x}^C)_i$ is the contextualized representation vector at position $i$ out of the ranker transformer, $w$ is the to-be-learned weight vector, and $\{\xi_1, \xi_2, \ldots, \xi_{K-1}\}$ is a set of to-be-learned threshold parameters with the property $\xi_1 < \xi_2 < \cdots < \xi_{K-1}$.
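A minimal PyTorch-style sketch of the ranking head in E.q. (3) is given below. The module name, the softplus-cumsum trick used to keep the thresholds ordered, and all shapes are our assumptions rather than the authors' implementation.

```python
import torch

class RankingHead(torch.nn.Module):
    """Cumulative-probability ranking head: P(R_i <= r) = sigmoid(-w^T h_i + xi_r)."""

    def __init__(self, hidden_size, num_levels):
        super().__init__()
        self.w = torch.nn.Linear(hidden_size, 1, bias=False)      # shared weight vector w
        # Unconstrained offsets; the cumulative sum of their softplus keeps xi_1 < ... < xi_{K-1}.
        self.xi_raw = torch.nn.Parameter(torch.zeros(num_levels - 1))

    def forward(self, hidden_states):                 # (batch, seq_len, hidden)
        score = self.w(hidden_states)                 # (batch, seq_len, 1) = w^T h_i
        xi = torch.cumsum(torch.nn.functional.softplus(self.xi_raw), dim=0)  # ordered thresholds
        return torch.sigmoid(-score + xi)             # (batch, seq_len, K-1): P(R_i <= r)
```

The output provides $P(R_i \leq r)$ for every position and every level $r \in \{1, \ldots, K-1\}$, from which the binary cross entropy terms of the TQR loss can be formed.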
The binary discriminator in the ELECTRA can be viewed as a ranker with $K = 2$, where the rank label $R_i$ is naturally given by

$$\left\{\begin{array}{l l}{R_{i}=1}&{{\mathrm{if~}}x_{i}^{c}\neq x_{i}}\\ {R_{i}=2}&{{\mathrm{if~}}x_{i}^{c}=x_{i}}\end{array}\right.$$

where $x_i^c$ is the token at position $i$ in $\mathbf{x}^C$. Accordingly, the loss of the TQR task with $K = 2$ levels is a binary cross entropy loss:

$$\mathcal{L}_{TQR}(\mathbf{x};\theta_{R})=-\Bigg[\sum_{i=1}^{n}\Big(I[R_{i}\leq K-1]\log P(R_{i}\leq K-1|\mathbf{x}^{C})+I[R_{i}>K-1]\log P(R_{i}>K-1|\mathbf{x}^{C})\Big)\Bigg],\tag{4}$$

where $I[\cdot]$ is an indicator function.

## 3.2.1 Rank Label Retrieving Scheme

In order to use a ranker with $K > 2$, we need to assign a rank label $R_i$ to each $x_i^c$, the token at position $i$ in $\mathbf{x}^C$. Thus, we design a label retrieving scheme to obtain rank labels from the generator. For notational convenience, we use $p_o^{(i)}$ to denote $p_G^{(i)}(x_i|\mathbf{x}^M)$, the generating probability of the original token $x_i$ at position $i$ in the context, and use $p_c^{(i)}$ to denote $p_G^{(i)}(x_i^c|\mathbf{x}^M)$, the generating probability of the token $x_i^c$ at position $i$ in the context. We use $\mathrm{rank}(p_o^{(i)}) \leq T$ to represent that $p_o^{(i)}$ is within the top $T$ out of all $|V|$ probabilities from $p_G^{(i)}(\cdot|\mathbf{x}^M)$, where $T$ is a hyperparameter with a small value.1 Our rank label retrieving scheme is shown in Table 1, where $\{\tau_1, \tau_2, \cdots, \tau_{K-2}\}$ is a set of probability partitioning hyperparameters with the property $0 < \tau_1 < \tau_2 < \cdots < \tau_{K-2}$.2 We set the rank label of the original token to the highest value $K$. For each replaced token $x_i^c$ (which differs from $x_i$), we set up the levels (buckets) based on $p_o^{(i)}$ and $\{\tau_1, \tau_2, \cdots, \tau_{K-2}\}$, so that the rank label of $x_i^c$ is set according to the bucket into which $p_c^{(i)}$ falls. Note, however, that just as in the ELECTRA, the generator in our PEER is set to be weak (small) relative to the ranker in order to prevent generating too-challenging replaced tokens. Therefore we use the condition $\mathrm{rank}(p_o^{(i)}) \leq T$ to identify every position $i$ where the generator can provide a well-estimated probability $p_G^{(i)}(\cdot|\mathbf{x}^M)$ for the tokens $x_i$ and $x_i^c$. For every replaced token $x_i^c$ at a position $i$ where $\mathrm{rank}(p_o^{(i)}) > T$, we just set its rank label to a special value $-1$ to indicate that its rank is less than $K$ but the exact rank value is unknown.3

1We set $T$ to 3 in our experiments. 2We always set $\tau_1$ to 1 for a ranker with $K > 2$ in our experiments. 3Appendix A.3 will show that a majority of tokens in the masked positions have their rank labels other than $-1$ when $T$ is set to 3 in both small and base models.
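Our reading of this labeling scheme can be summarized in the short sketch below; the buffers of Table 2 are omitted, and the bucket-to-label mapping follows our reconstruction of Table 1.

```python
def retrieve_rank_label(p_o, p_c, rank_of_p_o, is_replaced, K, taus, T=3):
    """Assign a rank label to one position, following the scheme of Table 1.

    p_o, p_c:      generator probabilities of the original / corrupted token
    rank_of_p_o:   rank of p_o among all |V| probabilities at this position
    taus:          thresholds tau_1 < ... < tau_{K-2} (tau_1 is typically 1)
    Returns K for an unchanged token, K-1 .. 1 for replaced tokens whose quality
    can be bucketed, and -1 when the generator's estimate is unreliable (rank > T).
    """
    if not is_replaced:
        return K
    if rank_of_p_o > T:
        return -1
    # Buckets [p_o/tau_1, 1], [p_o/tau_2, p_o/tau_1), ..., [0, p_o/tau_{K-2}).
    for level, tau in zip(range(K - 1, 1, -1), taus):
        if p_c >= p_o / tau:
            return level
    return 1
```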
| Condition | Rank label $R_i$ |
|---|---|
| $x_{i}^{c}=x_{i}$ | K |
| $p_{c}^{(i)}\in[p_{o}^{(i)}/\tau_{1},\,1]$ ∧ $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})\leq T$ | K − 1 |
| $p_{c}^{(i)}\in[p_{o}^{(i)}/\tau_{2},\,p_{o}^{(i)}/\tau_{1})$ ∧ $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})\leq T$ | K − 2 |
| ⋮ | ⋮ |
| $p_{c}^{(i)}\in[p_{o}^{(i)}/\tau_{K-2},\,p_{o}^{(i)}/\tau_{K-3})$ ∧ $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})\leq T$ | 2 |
| $p_{c}^{(i)}\in[0,\,p_{o}^{(i)}/\tau_{K-2})$ ∧ $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})\leq T$ | 1 |
| $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})>T$ | −1 |

Table 1: The rank label retrieving scheme based on the generating probabilities $p_{o}^{(i)}$ and $p_{c}^{(i)}$.

| Condition, with $d_i:=\operatorname{logit}^{(i)}(x_{i}^{c}|\mathbf{x}^{M})-\operatorname{logit}^{(i)}(x_{i}|\mathbf{x}^{M})$ | Rank label $R_i$ |
|---|---|
| $x_{i}^{c}=x_{i}$ | K |
| $d_i\in[-\log\tau_{1},\,\infty)$ ∧ $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})\leq T$ | K − 1 |
| $d_i\in[-\log(\tau_{1}(1+\delta)),\,-\log\tau_{1})$ ∧ $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})\leq T$ | (K − 2) + ∆ (buffer) |
| $d_i\in[-\log\tau_{2},\,-\log(\tau_{1}(1+\delta)))$ ∧ $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})\leq T$ | K − 2 |
| ⋮ | ⋮ |
| $d_i\in[-\log\tau_{K-2},\,-\log(\tau_{K-3}(1+\delta)))$ ∧ $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})\leq T$ | 2 |
| $d_i\in[-\log(\tau_{K-2}(1+\delta)),\,-\log\tau_{K-2})$ ∧ $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})\leq T$ | 1 + ∆ (buffer) |
| $d_i\in(-\infty,\,-\log(\tau_{K-2}(1+\delta)))$ ∧ $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})\leq T$ | 1 |
| $x_{i}^{c}\neq x_{i}$ ∧ $\mathrm{rank}(p_{o}^{(i)})>T$ | −1 |

Table 2: The equivalent rank label retrieving scheme based on logit differences, with the buffer option.

For the purpose of numerical stability, determining which bucket $p_{c}^{(i)}$ falls into (in Table 1) is actually implemented using its equivalent form on the basis of the logit difference $\operatorname{logit}^{(i)}(x_{i}^{c}|\mathbf{x}^{M})-\operatorname{logit}^{(i)}(x_{i}|\mathbf{x}^{M})$, where both logit terms are defined in Eq. (2). Additionally, in Table 2 we also introduce a buffer option between level k and level k + 1 for each k ∈ {1, · · · , K − 2} to further safeguard against the relative weakness of the generator. All the data points in the buffer are regarded as being in a grey (potentially noisy) area and are excluded from the binary classification between level k and level k + 1. If we want to use the buffer option, we add a positive hyperparameter δ inside the relevant buckets4 to set up the buffers and add a small fixed value ∆ ∈ (0, 1) to the corresponding rank label5. A larger value of δ leads to a smaller number of training data points, but adds confidence in removing potentially noisy data points. If we do not want to use the buffers, we set the hyperparameter δ equal to 0 so that these buffers disappear.
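As a concrete illustration of Tables 1 and 2 for the 3-level ranker used in our main experiments (τ1 = 1, T = 3), the label retrieval can be sketched as follows. This is our own illustrative code, not the released implementation; the function name and the placeholder value chosen for ∆ are assumptions.

```python
import math
import torch

def retrieve_rank_labels_k3(original_ids, corrupted_ids, gen_logits,
                            tau1=1.0, T=3, delta=3.0, DELTA=0.25):
    """Rank-label retrieval of Tables 1-2 for a 3-level ranker (K = 3).

    original_ids, corrupted_ids: (batch, seq_len) token ids of x and x^C
    gen_logits: (batch, seq_len, vocab) generator logits at every position
    Returns float labels: 3 (unchanged token), 2, 1 + DELTA (buffer), 1, or -1 (unknown).
    """
    logit_o = gen_logits.gather(-1, original_ids.unsqueeze(-1)).squeeze(-1)
    logit_c = gen_logits.gather(-1, corrupted_ids.unsqueeze(-1)).squeeze(-1)
    diff = logit_c - logit_o          # equals log(p_c / p_o); the softmax denominators cancel

    # rank(p_o^(i)) <= T: the original token is among the generator's top-T predictions
    rank_o = (gen_logits > logit_o.unsqueeze(-1)).sum(dim=-1) + 1
    reliable = rank_o <= T
    replaced = corrupted_ids != original_ids

    labels = torch.full_like(diff, 3.0)                      # original tokens keep label K = 3
    hi, lo = -math.log(tau1), -math.log(tau1 * (1.0 + delta))
    labels[replaced & reliable & (diff >= hi)] = 2.0
    labels[replaced & reliable & (diff >= lo) & (diff < hi)] = 1.0 + DELTA   # grey buffer zone
    labels[replaced & reliable & (diff < lo)] = 1.0
    labels[replaced & ~reliable] = -1.0                      # exact rank unknown
    return labels
```

Tokens labeled −1, as well as buffer tokens, still take part in the RTD-style term at level K − 1, but are excluded from the lower-level binary classifications, as formalized in Eq. (5) below.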
## 3.2.2 Loss of the TQR Task

Because some replaced tokens have their exact rank labels unknown (represented by the special value −1), the loss of the TQR task cannot be directly formulated as the loss of standard ordinal regression (McCullagh and Nelder, 1989). To address this challenge, we set the loss of the TQR task with K levels to be the summation of K − 1 binary cross entropy losses:

$$\mathcal{L}_{TQR}(\mathbf{x};\theta_{R})=-\Bigg[\sum_{i=1}^{n}\Big(I[R_{i}\leq K-1]\log P(R_{i}\leq K-1|\mathbf{x}^{C})+I[R_{i}>K-1]\log P(R_{i}>K-1|\mathbf{x}^{C})\Big)+\sum_{r=1}^{K-2}\gamma_{r}\sum_{\substack{i\in\{1,\cdots,n\}\\ R_{i}\neq-1}}\Big(I[R_{i}\leq r]\log P(R_{i}\leq r|\mathbf{x}^{C})+I[R_{i}>r+\Delta]\log P(R_{i}>r|\mathbf{x}^{C})\Big)\Bigg],\tag{5}$$

where $\gamma_{r}$ is a positive relative weight hyperparameter for the binary cross entropy loss at level r.6 Essentially, $\mathcal{L}_{TQR}$ contains both the loss of the RTD task stated in Eq. (4) and a binary cross entropy loss at each level r ∈ {1, · · · , K − 2}. We set the loss of the TQR task to the summation of K − 1 binary cross entropy losses in Eq. (5) throughout the entire pre-training process, except for a warming-up phase at the beginning. In the warming-up phase,7 we still use only the single binary cross entropy loss stated in Eq. (4) as the loss of the TQR task, in order to ensure that the generator gets some basic training so that its token generating probability $p_{G}^{(i)}(\cdot|\mathbf{x}^{M})$ is generally reliable for the rank labeling purpose. Overall, we train the PEER by minimizing a combined loss:

$$\sum_{\mathbf{x}\in\mathcal{X}}\mathcal{L}_{MLM}(\mathbf{x};\theta_{G})+\lambda\,\mathcal{L}_{TQR}(\mathbf{x};\theta_{R}),$$

where $\mathcal{X}$ denotes the pre-training corpus and λ is the relative weight for the loss of the TQR task.8 After pre-training, we discard the generator and fine-tune the ranker for downstream NLP tasks.

As an additional note, extending the ELECTRA to the PEER requires a negligible increase in computation cost. The only added parameters in our PEER are the K − 1 threshold parameters $\{\xi_{1},\xi_{2},\ldots,\xi_{K-1}\}$. The same sequence of contextualized representation vectors $h_{R}(\mathbf{x}^{C})$ from the ranker transformer is re-used for the different levels, along with the shared weight vector w in Eq. (3). The $R_i$ labeling is also based on the logit information in Eq. (2), which has already been computed for $p_{G}^{(i)}(\cdot|\mathbf{x}^{M})$ in the generator.

## 4 Experiments

## 4.1 Experimental Setup

Pre-training Details: We implement the PEER within the Huggingface Transformers framework (Wolf et al., 2020). We include ELECTRA and TEAMS as well as BERT for comparison. Under the current constraints of computation resources, we focus on the small and base models, which have been extensively studied and compared by Clark et al. (2020b), and we set architectures and hyperparameters largely aligned with ELECTRA. Please refer to Appendix B for the detailed model architecture and pre-training hyperparameter values. We implement each model by largely re-using the corresponding code from Huggingface (Wolf et al., 2020) if a pre-trained checkpoint has not been publicly released by its authors. We use the same pre-training data as BERT, ELECTRA-Small and ELECTRA-Base, which consists of 3.3 billion tokens from the Wikipedia and BooksCorpus datasets. For fair comparison, we follow Clark et al. (2020b) and use FLOPs (floating point operations) to measure computation usage, since FLOPs is a measure agnostic to the particular hardware and low-level optimizations. We reuse the FLOPs computation code9 released by Clark et al. (2020b) so that we essentially adopt exactly the same assumptions made by Clark et al. (2020b). Some details of the compared models are as follows.

- **ELECTRA**: We pre-train ELECTRA-Small and ELECTRA-Base using exactly the same hyperparameter values as Clark et al.
(2020b), except for larger batch size and learning rate for ELECTRA-Small to reduce the pretraining time (which is not reflected in the FLOPs calculation). For ELECTRA-Small model as well as all other small models, we use batch size 512 and 250K pre-training steps, instead of batch size 128 and 1M steps in Clark et al. (2020b). Accordingly, we add 100% increase in learning rate for ELECTRASmall and BEET-Small, and add 50% increase in learning rate for TEAMS-Small and PEERSmall 10. We observe that the change in batch size and learning rate is able to significantly reduce the pre-training time without degrading the model performance. As a reference point, we also include ELECTRA-Small++ whose pre-trained model checkpoint is publicly released by Clark et al. (2020b). Note that ELECTRA-Small++ uses 18x training FLOPs compared to ELECTRA-Small, because it is pre-trained much longer with much larger data and its input sequence length is also quadrupled (Clark et al., 2020b). den size11, according to the convention of the BERT models. Please refer to the appendix for the details about the hyperparameters. Our BERT-Small setting makes its FLOPs similar to that of ELECTRA-Small when the training steps are the same, so that fair comparison of their performance can be made directly. - **TEAMS**: We pre-train TEAMS-Small and TEAMS-Base using the same hyperparameter values described by Shen et al. (2021), except the aforementioned larger batch size and learning rate. The model structures of TEAMSSmall and TEAMS-Base are also the same as the ones used by Shen et al. (2021). Specifically, the discriminator in TEAMS-Small has 12 transformer layers and set its hidden size to 256; and the discriminator in TEAMS-Base has 12 transformer layers and set its hidden size to 768. The generator has 6 transformer layers and set its hidden size same as the corresponding discriminator. The generator and the discriminator share three layers on the bottom, the discriminator also has one additional separate transformer layer on the top for its MWS task. - **PEER**: We pre-train PEER using the hyperparameter values the same as TEAMS. With respect to hyperparameter δ, we set it to 3 in PEER-Small and set it to 9 in PEER-Base for model comparison, and discuss the effect of δ in Appendix A.2 due to space constraint. We focus on the PEER with 3-level ranker, and discuss the effect of the number of levels in Appendix A.1. The model structures of the PEER are the same as the TEAMS, except that there is no additional transformer layer for MWS task in the PEER. This difference makes FLOPs per training step in the PEER smaller than the ones of the corresponding TEAMS. However, FLOPs per training step in the PEER are still larger than the ones of the corresponding ELECTRA. This is largely because the generator in the ELECTRA decreases its hidden size (instead of its number of transformer layers), which in turn leads to the decrease in the intermediate size in every fully connected feed-forward network (FFN). We will clearly record these training FLOPs in our experimental results and ensure that our PEER uses FLOPs no more than other models during the performance comparison. Downstream Tasks and Metrics: We evaluate all models on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018). 
It contains a variety of tasks covering natural language inference tasks MNLI (Williams et al., 2017), QNLI (Rajpurkar et al., 2016) and RTE (Giampiccolo et al., 2007); semantic similarity tasks MRPC (Dolan and Brockett, 2005), QQP (Iyer et al., 2017), and STS-B (Cer et al., 2017); sentiment classification task SST-2 (Socher et al., 2013); and linguistic acceptability classification CoLA (Warstadt et al., 2019). See Appendix C.1 for more details on the GLUE tasks. The evaluation metrics are the average of MNLImatch accuracy and MNLI-mismatch accuracy for MNLI, the average of Spearman correlation and Pearson correlation for STS-B, Matthews correlation for CoLA, and accuracy for other GLUE tasks. We also take the average of metrics of these eight GLUE tasks, denoted by G-AVG, as the overall performance metric on these tasks. All the evaluation is based on the Dev dataset. Fine-tuning Procedure: For the fine-tuning of GLUE tasks, we add simple linear classifiers on top of the encoder of a pre-trained model. Because we observe a large performance variance in the GLUE tasks with small data sizes (including CoLA, MRPC, STS-B and RTE), we adopt the following two methods to reduce the variance. First, we follow the strategy proposed in the papers (Mosbach et al., 2020; Zhang et al., 2020; Dodge et al., 2020) to train more epochs with small learning rates for these small tasks. Second, we fine-tune these small tasks by using multiple random seeds and obtain the average score across the seeds. Please refer to Appendix C for the details in fine-tuning hyperparameter settings. For base models, we pre-train each model once and then use the above fine-tuning strategy to obtain the score of each GLUE task. Since for some small models we still observe non-negligible variance of the resulting scores, we pre-train each small model using five different random seeds. The finally reported score of each task is the average across the five pre-trained model checkpoints. | Model | Train | G-AVG | MNLI | CoLA | SST-2 | MRPC | STS-B | QQP | QNLI | RTE | |----------------------------------------------------------------------------------------|---------|------------|--------|--------|---------|--------|---------|-------|--------|-------| | Mean±Std | | | | | | | | | | | | FLOPs | | | | | | | | | | | | BERT-Small | 1.27e18 | 79.11±0.08 | 79.97 | 49.53 | 90.09 | 84.52 | 86.15 | 89.57 | 86.79 | 66.23 | | ELECTRA-Small | 1.29e18 | 80.77±0.16 | 80.22 | 59.40 | 89.19 | 86.48 | 86.72 | 89.93 | 88.27 | 65.99 | | ELECTRA-Small++* | 2.40e19 | 82.05 | 82.52 | 58.37 | 91.40 | 87.01 | 87.95 | 90.54 | 88.93 | 69.68 | | TEAMS-Small | 1.55e18 | 80.84±0.23 | 81.05 | 55.83 | 89.81 | 87.71 | 87.34 | 89.72 | 88.31 | 66.94 | | PEER-Small (212.5K) | 1.27e18 | 81.40±0.25 | 81.08 | 59.70 | 89.40 | 87.99 | 87.70 | 90.12 | 88.45 | 66.71 | | PEER-Small (250K) | 1.50e18 | 81.50±0.30 | 81.19 | 59.66 | 89.29 | 88.09 | 87.66 | 90.15 | 88.55 | 67.44 | | *: ELECTRA-Small++ is the pre-trained model publicly released by Clark et al. (2020b). | | | | | | | | | | | Table 3: Comparison of small models on the GLUE dev set. 
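To make the aggregation above concrete, the sketch below shows how per-task dev scores could be folded into G-AVG, averaged across independently pre-trained checkpoints, and compared with the two-sample t test with unequal variances used in Section 4.2. This is illustrative code with an assumed dictionary layout, not part of our evaluation pipeline.

```python
import numpy as np
from scipy import stats

TASKS = ["MNLI", "CoLA", "SST-2", "MRPC", "STS-B", "QQP", "QNLI", "RTE"]

def g_avg(task_scores):
    """G-AVG: the plain average of the eight GLUE dev-set metrics."""
    return float(np.mean([task_scores[t] for t in TASKS]))

def summarize(per_seed_scores):
    """Mean and standard deviation of G-AVG across pre-training seeds (five for small models)."""
    g = np.array([g_avg(s) for s in per_seed_scores])
    return g.mean(), g.std(ddof=1)

def compare_g_avg(model_a_scores, model_b_scores):
    """Welch's two-sample t test (unequal variances) on the per-seed G-AVG values."""
    a = [g_avg(s) for s in model_a_scores]
    b = [g_avg(s) for s in model_b_scores]
    return stats.ttest_ind(a, b, equal_var=False).pvalue
```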
| Model | Train | G-AVG | MNLI | CoLA | SST-2 | MRPC | STS-B | QQP | QNLI | RTE | |----------------------------------------------------------------------------------|---------|---------|--------|--------|---------|--------|---------|-------|--------|-------| | FLOPs | | | | | | | | | | | | BERT-Base (1M)* | 6.43e19 | 83.51 | 84.51 | 60.07 | 93.00 | 86.03 | 89.51 | 91.27 | 91.51 | 72.20 | | ELECTRA-Base (766K) | 6.43e19 | 86.42 | 85.96 | 67.05 | 92.09 | 91.05 | 90.47 | 91.57 | 92.10 | 81.05 | | TEAMS-Base (666.9K) | 6.66e19 | 86.11 | 86.48 | 66.30 | 93.00 | 90.44 | 90.22 | 91.38 | 92.36 | 78.70 | | PEER-Base (666.9K) | 6.39e19 | 86.77 | 86.69 | 68.57 | 92.66 | 91.18 | 90.92 | 91.78 | 92.57 | 79.78 | | *: BERT-Base is the pre-trained model publicly released by Devlin et al. (2018). | | | | | | | | | | | Table 4: Comparison of base models on the GLUE dev set. ## 4.2 Overall Comparison Results Table 3 shows the performance comparison among the small models. In the table, the second column lists the training FLOPs of each model, and the third column shows the mean and the standard deviation of the G-AVG for each model across five independently pre-trained checkpoints. We report the performance of each small model pre-trained through 250K steps (i.e., 5 epochs). Additionally, we report the performance of PEER-Small pretrained exactly after 212.5K steps to ensure that its computation cost is no more than that of any other competitor. Note that the G-AVG of ELECTRA-Small implemented by us is about 98.44% of that of ELECTRASmall++ released by Clark et al. (2020b) (80.77 vs. 82.05), which is higher than the 97.87% in Table 8 of the original paper (Clark et al., 2020b). This verifies the correctness of our ELECTRA implementation. As for TEAMS-Small, the G-AVG of TEAMS-Small is slightly higher than that of ELECTRA-Small when they go through the same number of pre-training steps, which is consistent with the comparison results shown by Shen et al. (2021). While Shen et al. (2021) do not report the performance of each individual task, our result shows that TEAMS-Small performs much better in MNLI task but much worse in CoLA task when comparing with ELECTRA-Small. With respect to our PEER, Table 3 clearly demonstrates its advantages over all the other competitors in small models. Using less computation cost, PEER-Small (212.5K) outperforms both ELECTRA-Small and TEAMS-Small in six out of eight GLUE tasks, as SST-2 and RTE tasks are the only two exceptions. The G-AVG of PEERSmall (212.5K) is 0.63 point higher than that of ELECTRA-Small and is 0.56 point higher than that of TEAMS-Small. Because we have independently run the whole (pre-training and finetuning) process five times for each small model, by using the two-sample t test with unequal variances, we can conclude with strong evidence (at the significance level 0.005) that the real mean of G-AVG of our PEER-Small (212.5K) is larger than that of ELECTRA-Small. Similarly, based on the two-sample t test with unequal variances, we can conclude with strong evidence (at the significance level 0.005) that the real mean of G-AVG of our PEER-Small (212.5K) is larger than that of TEAMS-Small. Table 4 shows the comparison results on the base models. In the first column of the table, we show the pre-training steps of each model and have ensured that PEER-Base takes FLOPs no more than other models. 
Using less computation cost, PEERBase achieves the best performance among all the investigated models in six out of eight GLUE tasks, while two exceptions are SST-2 and RTE tasks (just as in small models). Overall, PEER-Base has the highest G-AVG, which is 0.35 point higher than that of ELECTRA-Base and is 0.66 point higher than that of TEAMS-Base. ![8_image_0.png](8_image_0.png) ## 4.3 Pre-Training Efficiency To further investigate the pre-training efficiency, in Figures 2 and 3, we plot G-AVG and MNLI accuracy score with respect to the number of pretraining epochs for PEER-Small, ELECTRA-Small and TEAMS-Small. For each model, we select the median run whose pre-training random seed achieves the median G-AVG among the five random seeds. Then for the selected median run of each model, we save a checkpoint every epoch (i.e, 50K pre-training steps), and fine-tune it on every GLUE task and finally report the scores across the tasks. Note that the ratio of the training FLOPs per epoch among PEER-Small, ELECTRA-Small and TEAMS-Small is 1.50 : 1.29 : 1.55, which has also been shown in Table 3. Figure 2 shows that PEER-Small starts to significantly outperform its competitors in G-AVG since the second epoch, and its G-AVG at the end of third epoch is already higher than G-AVG of both ELECTRA-Small and TEAMS-Small at the end of the whole pre-training. Figure 3 shows that both PEER-Small and TEAMSsmall perform considerably better than ELECTRASmall in MNLI task, and PEER-Small performs better than TEAMS-small (by using less computation cost) since the third epoch. ## 5 Conclusion And Future Work We propose the PEER by extending ELECTRA's RTD task to a token quality ranking (TQR) task in order to further improve the pre-training efficiency. Besides detecting whether every token is replaced or not, the TQR task also needs to rank replaced tokens into different levels according to their quality given the context. We design a scheme to retrieve rank label information from the generator so that the complete TQR task can be performed for a majority of replaced tokens. We empirically show that our proposed PEER outperforms the state-ofthe-art pre-training efficient competitors in small and base scale models using the same or less computation cost. In the future, we will validate the advantages of our PEER in larger scale models when sufficient computation resources are available. We also plan to improve our rank label retrieving scheme so that even larger proportion of replaced tokens can be involved in the complete TQR task. ## Limitations There are several limitations in our paper. First, we have not validated the advantages of our proposed PEER in model scales larger than base model, due to the constraint in our computation resource. We plan to experiment the PEER in larger scale models when more computation resource is available. Second, in order to filter out potential noise from the relative weak generator, our current rank label retrieving scheme uses a strict condition T = 3, which leads to the fact that a significant proportion of tokens have rank label −1 and essentially are involved only in the original RTD task. Please refer to the details in Appendix A.3. We intend to design some label retrieving scheme which applies a softer criterion so that more tokens can be fully or partially involved in the complete TQR task. Finally, our PEER currently does not have the ability of automatically searching for an optimal value of hyperparameter δ, which we also plan to design in the future. 
## References Payal Bajaj, Chenyan Xiong, Guolin Ke, Xiaodong Liu, Di He, Saurabh Tiwary, Tie-Yan Liu, Paul Bennett, Xia Song, and Jianfeng Gao. 2022. METRO: Efficient denoising pretraining of large scale autoencoding language models with model generated signals. arXiv preprint arXiv:2204.06644. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055. Kevin Clark, Minh-Thang Luong, Quoc Le, and Christopher D. Manning. 2020a. Pre-training transformers as energy-based cloze models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 285–294, Online. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020b. ELECTRA: Pretraining text encoders as discriminators rather than generators. *arXiv preprint arXiv:2003.10555*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. *arXiv* preprint arXiv:2002.06305. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. *CoRR*, abs/1905.03197. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In *Proceedings of the* ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9. Michael U. Gutmann and Aapo Hyvärinen. 2012. Noisecontrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13(11):307– 361. Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTaV3: Improving DeBERTaV using ELECTRA-style pre-training with gradientdisentangled embedding sharing. *arXiv preprint* arXiv:2111.09543. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced BERT with disentangled attention. arXiv preprint arXiv:2006.03654. Shankar Iyer, Nikhil Dandekar, and Kornél Csernai. 2017. First quora dataset release: Question pairs. data. quora. com. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predicting spans. *CoRR*, abs/1907.10529. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. P. McCullagh and J. A. Nelder. 1989. *Generalized Linear Models*. Chapman & Hall / CRC, London. Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul Bennett, Jiawei Han, and Xia Song. 2021. COCO-LM: Correcting and contrasting text sequences for language model pretraining. *arXiv* preprint arXiv:2102.08473. Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020. 
On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. *arXiv preprint arXiv:2006.04884*. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. ZeRo: Memory optimizations toward training trillion parameter models. In *SC20:* International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1– 16. IEEE. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. *arXiv preprint* arXiv:1606.05250. Jiaming Shen, Jialu Liu, Tianqi Liu, Cong Yu, and Jiawei Han. 2021. Training ELECTRA augmented with multi-word selection. *arXiv preprint* arXiv:2106.00139. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 conference on empirical methods in natural language processing*, pages 1631–1642. Wilson L. Taylor. 1953. "cloze procedure": A new tool for measuring readability. *Journalism Quarterly*, 30(4):415–433. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30, page 6000–6010. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. *CoRR*, abs/1804.07461. Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, and Luo Si. 2019. Structbert: Incorporating language structures into pre-training for deep language understanding. *CoRR*, abs/1908.04577. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. CLEAR: Contrastive learning for sentence representation. arXiv preprint arXiv:2012.15466. Yifei Xu, Jingqiao Zhang, Ru He, Liangzhu Ge, Chao Yang, Cheng Yang, and Ying Nian Wu. 2021. SAS: Self-augmented strategy for language model pretraining. *arXiv preprint arXiv:2106.07176*. Zhenhui Xu, Linyuan Gong, Guolin Ke, Di He, Shuxin Zheng, Liwei Wang, Jiang Bian, and Tie-Yan Liu. 2020. MC-BERT: efficient language pre-training via a meta controller. *CoRR*, abs/2006.05744. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. *CoRR*, abs/1906.08237. Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2020. 
Revisiting few-sample bert fine-tuning. *arXiv preprint arXiv:2006.05987*. ## A Supplementary Experimental Results A.1 Number Of Levels K We vary K (the number of levels used in the ranker) from 3 to 5 in PEER-Small models to see its impact. Table 5 shows the corresponding results. Each model is pre-trained 212.5K steps and has nearly the same computation cost. The table shows that increasing K from 3 does not lead to further improvement in the performance of GLUE tasks. The G-AVG of the PEER-Small with 4 or 5 levels actually decreases slightly, though it is still larger than that of its competitors shown in Table 3 by using less computation cost. We conjecture that the main reason is that increasing K leads to the smaller number of tokens staying in low levels, which in turn brings difficulty in the learning process. We will further investigate the impact of K in our future work. ## A.2 Buffer Hyperparameter Δ We test the impact of buffer hyperparameter δ by using a set of three different values {0, 3, 9}, where value 0 leads to no buffer and value 9 leads to a large buffer. By its design, a larger buffer leads to the smaller number of the training data points, but adds confidence in removing potentially noisy data points due to the relative weakness of the generator. Tables 6 and 7 show the results in the PEER-Small models and PEER-Base models respectively. Since the value of δ has a negligible effect in the training FLOPs, we do not list the training FLOPs here as they have already been shown in Table 3 and 4. Both tables show that the G-AVG decreases slightly when δ decreases to 0, though it is still no worse than that of any its competing model by using less computation cost. The PEER-Small achieves the highest G-AVG and MNLI scores when δ is set to 3. The PEER-Base achieves the highest G-AVG when δ is set to 9, and achieves the highest MNLI score when δ is set to 3. In the future we will investigate how to let the PEER automatically search for an optimal value of δ during its pre-training to further boost its performance. ## A.3 Proportion Of Tokens With Rank Label −1 Figures 4 and 5 demonstrate the proportion of tokens with rank label −1 in the masked positions during the pre-training for PEER-Small and PEERBase. With respect to PEER-Small, the proportion decreases from 44.82% at 40K steps (i.e., the end ![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png) of the warm-up phase) to 40.42% at the end of the pre-training. With regards to PEER-Base, the proportion decreases from 36.20% at 33344 steps (i.e., the end of the warm-up phase) to 27.33% at the end of the pre-training. Thus, a majority of replaced tokens have their rank labels other than −1 during the pre-training of both PEER-Small and PEER-Base. ## B Pre-Training Details The following pre-training details apply to our PEER and its competing methods including the BERT, the ELECTRA and the TEAMS. We always use Adam as the optimizer with weight decay. We mostly use the same hyperparameters as BERT and ELECTRA. Our own implementation does not include the next sentence prediction (NSP) task proposed in the original BERT, as the recent works such as Liu et al. (2019) have suggested that it does not improve the performance. We searched for the best learning rate for Small models out of [1e-3, 7.5e-4, 5e-4] . Otherwise, we did no hyperparameter tuning beyond the experiments. The full set of hyperparameters is listed in Table 8. 
K {τ1, · · · , τK−2} G-AVG Mean±Std MNLI CoLA SST-2 MRPC STS-B QQP QNLI RTE 3 {1} 81.40±0.25 81.08 59.70 89.40 87.99 87.70 90.12 88.45 66.71 4 {1, 10} 81.13±0.14 81.06 59.08 88.90 87.45 87.59 90.00 88.39 66.57 5 {1, 8, 32} 81.28±0.12 80.77 59.68 88.74 87.79 87.45 90.08 88.15 67.58 Table 5: Comparison of PEER-Small models with different K levels (under 212.5K pre-training steps) on the GLUE dev set. δ G-AVG Mean±Std MNLI CoLA SST-2 MRPC STS-B QQP QNLI RTE 0 81.14±0.44 81.00 58.89 88.78 87.94 87.48 90.06 88.35 66.64 3 81.40±0.25 81.08 59.70 89.40 87.99 87.70 90.12 88.45 66.71 9 81.27±0.42 80.97 59.62 89.16 87.58 87.60 90.07 88.38 66.79 Table 6: Comparison of PEER-Small models with different δ values on the GLUE dev set. Each PEER-Small model has 3 levels and is pre-trained 212.5K steps. ## C Fine-Tuning Details We originally fine-tuned all the pre-trained models for 4 epochs. However, because we observed a large variance in the small tasks in GLUE, following the advice from Mosbach et al. (2020), we increase the fine-tuning process to 20 epochs and select the best epoch for the four small tasks including CoLA, MRPC, STS-B and RTE. For Small models, we searched for the best learning rate out of [1e-4, 7.5e-5]. For Base models, we searched for a learning rate out of [5e-5, 3e-5] without the layerwise learning-rate decay proposed by ELECTRA, but otherwise used the same hyperparameters as for small models. Due to limited computation resource, we adjust the number of independent fine-tuning runs (with different random seeds) so that we finetune more times for these tasks with smaller data sizes (i.e., with more variability). The full set of hyperparameters is listed in Table 9. Following the BERT and the ELECTRA, we do not show results on the WNLI GLUE task for the Dev set results. ## C.1 Details About Glue We provide further details about the GLUE benchmark tasks as follows. CoLA: Corpus of Linguistic Acceptability (Warstadt et al., 2019). The task is to determine whether a given sentence is linguistically acceptable or not. The dataset contains 8.5k train examples from books and journal articles on linguistic theory. SST-2: Stanford Sentiment Treebank (Socher et al., 2013). The task is to determine if the sentence is positive or negative in sentiment. The dataset contains 67k train examples from movie reviews. MRPC: Microsoft Research Paraphrase Corpus (Dolan and Brockett, 2005). The task is to predict whether two sentences are semantically equivalent or not. The dataset contains 3.7k train examples from online news sources. STS-B: Semantic Textual Similarity (Cer et al., 2017). The task is to predict how semantically similar two sentences are on a 1-5 scale. The dataset contains 5.8k train examples drawn from news headlines, video and image captions, and natural language inference data. QQP: Quora Question Pairs (Iyer et al., 2017). The task is to determine whether a pair of questions are semantically equivalent. The dataset contains 364k train examples from the community questionanswering website Quora. MNLI: Multi-genre Natural Language Inference (Williams et al., 2017). Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither. The dataset contains 393k train examples drawn from ten different sources. QNLI: Question Natural Language Inference; constructed from SQuAD (Rajpurkar et al., 2016). The task is to predict whether a context sentence contains the answer to a question sentence. 
The dataset contains 108k train examples from Wikipedia. RTE: Recognizing Textual Entailment (Giampiccolo et al., 2007). Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis or not. | δ | G-AVG | MNLI | CoLA | SST-2 | MRPC | STS-B | QQP | QNLI | RTE | |-----|---------|--------|--------|---------|--------|---------|-------|--------|-------| | 0 | 86.42 | 86.72 | 66.44 | 92.09 | 90.20 | 90.75 | 91.69 | 92.59 | 80.87 | | 3 | 86.63 | 86.88 | 68.24 | 92.43 | 90.20 | 90.63 | 91.70 | 92.48 | 80.51 | | 9 | 86.77 | 86.69 | 68.57 | 92.66 | 91.18 | 90.92 | 91.78 | 92.57 | 79.78 | Table 7: Comparison of PEER-Base models with different δ values on the GLUE dev set. Each PEER-Base model has 3 levels and is pre-trained 666.9K steps. | Hyperparameter | ELECTRA-Small | All Other Small | All Base Models | |-----------------------|-----------------|-------------------|-------------------| | Models | | | | | Number of layers | 12 | 12 | 12 | | Hidden size | 256 | 256 | 768 | | FFN inner hidden size | 1024 | 1024 | 3072 | | Attention heads | 4 | 4 | 12 | | Attention head size | 64 | 64 | 64 | | Embedding size | 128 | 256 | 768 | | Sequence length | 128 | 128 | 512 | | Mask percent | 15 | 15 | 15 | | Learning rate decay | Linear | Linear | Linear | | Warmup steps | 10000 | 10000 | 10000 | | Learning rate | 1e-3 | 1e-3/7.5e-4 | 2e-4 | | Adam ϵ | 1e-6 | 1e-6 | 1e-6 | | Adam β1 | 0.9 | 0.9 | 0.9 | | Adam β2 | 0.999 | 0.999 | 0.999 | | Attention dropout | 0.1 | 0.1 | 0.1 | | Dropout | 0.1 | 0.1 | 0.1 | | Weight decay | 0.01 | 0.01 | 0.01 | | Batch size | 512 | 512 | 256 | | Train steps | 250K | 250K | 666.9K - 1M | Table 8: Pre-training hyperparameters for all the models pre-trained by us. The dataset contains 2.5k train examples from a series of annual textual entailment challenges. | Hyperparameter | Value | |---------------------|-----------------------------------------------------------------| | Learning rate | 1e-4, 7.5e-5 for Small; 5e-5, 3e-5 for Base | | Adam ϵ | 1e-6 | | Adam β1, β2 | 0.9, 0.999 | | Layerwise LR decay | None | | Learning rate decay | Linear | | Warmup fraction | 0.1 | | Attention dropout | 0.1 | | Dropout | 0.1 | | Weight decay | None | | Batch size | 16, 32 | | Train epochs | 20 for CoLA, MRPC, STS-B, RTE; 4 for other tasks | | Seeds | 5 for CoLA, MRPC, STS-B, RTE; 3 for QNLI, SST2; 1 for MNLI, QQP | Table 9: Fine-tuning hyperparameters for all the investigated models. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 5, and Section Limitations ✗ A2. Did you discuss any potential risks of your work? My work has no particular risk other than the well-known risks existing in a general pre-training method. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 
No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 4 C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
cheng-etal-2023-ml
{ML}-{LMCL}: Mutual Learning and Large-Margin Contrastive Learning for Improving {ASR} Robustness in Spoken Language Understanding
https://aclanthology.org/2023.findings-acl.406
Spoken language understanding (SLU) is a fundamental task in the task-oriented dialogue systems. However, the inevitable errors from automatic speech recognition (ASR) usually impair the understanding performance and lead to error propagation. Although there are some attempts to address this problem through contrastive learning, they (1) treat clean manual transcripts and ASR transcripts equally without discrimination in fine-tuning; (2) neglect the fact that the semantically similar pairs are still pushed away when applying contrastive learning; (3) suffer from the problem of Kullback{--}Leibler (KL) vanishing. In this paper, we propose Mutual Learning and Large-Margin Contrastive Learning (ML-LMCL), a novel framework for improving ASR robustness in SLU. Specifically, in fine-tuning, we apply mutual learning and train two SLU models on the manual transcripts and the ASR transcripts, respectively, aiming to iteratively share knowledge between these two models. We also introduce a distance polarization regularizer to avoid pushing away the intra-cluster pairs as much as possible. Moreover, we use a cyclical annealing schedule to mitigate KL vanishing issue. Experiments on three datasets show that ML-LMCL outperforms existing models and achieves new state-of-the-art performance.
# ML-LMCL: Mutual Learning and Large-Margin Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding

Xuxin Cheng, Bowen Cao†, Qichen Ye†, Zhihong Zhu†, Hongxiang Li, Yuexian Zou* School of ECE, Peking University, China {chengxx, cbw2021, zhihongzhu, lihongxiang}@stu.pku.edu.cn {yeeeqichen, zouyx}@pku.edu.cn

## Abstract

Spoken language understanding (SLU) is a fundamental task in the task-oriented dialogue systems. However, the inevitable errors from automatic speech recognition (ASR) usually impair the understanding performance and lead to error propagation. Although there are some attempts to address this problem through contrastive learning, they (1) treat clean manual transcripts and ASR transcripts equally without discrimination in fine-tuning; (2) neglect the fact that the semantically similar pairs are still pushed away when applying contrastive learning; (3) suffer from the problem of Kullback–Leibler (KL) vanishing. In this paper, we propose Mutual Learning and Large-Margin Contrastive Learning (ML-LMCL), a novel framework for improving ASR robustness in SLU. Specifically, in fine-tuning, we apply mutual learning and train two SLU models on the manual transcripts and the ASR transcripts, respectively, aiming to iteratively share knowledge between these two models. We also introduce a distance polarization regularizer to avoid pushing away the intra-cluster pairs as much as possible. Moreover, we use a cyclical annealing schedule to mitigate KL vanishing issue. Experiments on three datasets show that ML-LMCL outperforms existing models and achieves new state-of-the-art performance.

## 1 Introduction

Spoken language understanding (SLU) is an important component of various personal assistants, such as Amazon's Alexa, Apple's Siri, Microsoft's Cortana and Google's Assistant (Young et al., 2013). SLU aims at taking human speech input and extracting semantic information for two typical subtasks, mainly including intent detection and slot filling (Tur and De Mori, 2011).

![0_image_0.png](0_image_0.png)

Figure 1: An example of the intent being predicted incorrectly due to the ASR error: the clean transcript "PLEASE TURN UP THE VOLUME" is mapped to the intent (Audio, Volume_up), while the ASR transcript "PLEASE TURN OUT THE BOLLOM" leads to the wrong intent (Iot, Hue_lightoff).

Pipeline approaches and end-to-end approaches are two kinds of solutions of SLU. Pipeline SLU methods usually combine automatic speech recognition (ASR) and natural language understanding (NLU) in a cascaded manner, so they can easily apply external datasets and external pre-trained language models. However, error propagation is a common problem of pipeline approaches, where an inaccurate ASR output can theoretically lead to a series of errors in subtasks. As shown in Figure 1, due to the error from ASR, the model cannot predict the intent correctly. Following Chang and Chen (2022), this paper only focuses on intent detection. Learning error-robust representations is an effective method to mitigate the negative impact of errors from ASR and is gaining increasing attention.
The remedies for ASR errors can be broadly categorized into two types: (1) applying machine translation to translate the erroneous ASR transcripts to clean manual transcripts (Mani et al., 2020; Wang et al., 2020; Dutta et al., 2022); (2) using masked language modeling to adapt the model. However, these methods usually requires additional speechrelated inputs (Huang and Chen, 2019; Sergio et al., 2020; Wang et al., 2022), which may not always be readily available. Therefore, this paper focuses on improving ASR robustness in SLU without using any speech-related input features. Despite existing error-robust SLU models have achieved promising progress, we discover that they suffer from three main issues: (1) **Manual and ASR transcripts are treated** as the same type. In fine-tuning, existing methods simply combine manual and ASR transcripts as the final dataset, which limits the performance. Intuitively, the information from manual transcripts and the information from ASR transcripts play different roles, so the model fine-tuned on their combination cannot discriminate their specific contributions. Based on our observations, models trained on the clean manual transcripts usually has higher accuracy, while models trained on the ASR transcripts are usually more robust to ASR errors. Therefore, manual and ASR transcripts should be treated differently to improve the performance of the model. (2) **Semantically similar pairs are still pushed** away. Conventional contrastive learning enlarges distances between all pairs of instances and potentially leading to some ambiguous intra-cluster and inter-cluster distances (Mishchuk et al., 2017; Zhang et al., 2022), which is detrimental for SLU. Specifically, if clean manual transcripts are pushed away from their associated ASR transcripts while become closer to other sentences, the negative impact of ASR errors will be further exacerbated. (3) **They suffer from the problem of KL vanishing.** Inevitable label noise usually has a negative impact on the model (Li et al., 2022; Cheng et al., 2023b). Existing methods apply self-distillation to minimize Kullback–Leibler (KL) divergence (Kullback and Leibler, 1951) between the current prediction and the previous one to reduce the label noises in the training set. However, we find these methods suffer from the KL vanishing issue, which has been observed in other tasks (Zhao et al., 2017). KL vanishing can adversely affect the training of the model. Therefore, it is crucial to solve this problem to improve the performance. In this paper, we propose Mutual Learning and Large-Margin Contrastive Learning (ML-LMCL), a novel framework to tackle above three issues. For the first issue, we propose a mutual learning paradigm. In fine-tuning, we train two SLU models on the manual and ASR transcripts, respectively. These two models are collaboratively trained and considered as peers, with the aim of iteratively learning and sharing the knowledge between the two models. Mutual learning allows effective dual knowledge transfer (Liao et al., 2020; Zhao et al., 2021; Zhu et al., 2021), which can improve the performance. For the second issue, our framework implements a large-margin contrastive learning to distinguish between intra-cluster and inter-cluster pairs. Specifically, we apply a distance polarization regularizer and penalize all pairwise distances within the margin region, which can encourage polarized distances for similarity determination and obtain a large margin in the distance space in an unsupervised way. 
For the third issue, following Fu et al. (2019), we mitigate KL vanishing by adopting a cyclical annealing schedule. The training process is effectively split into many cycles. In each cycle, the coefficient of KL Divergence progressively increases from 0 to 1 during some iterations and then stays at 1 for the remaining iterations. Experiment results on three datasets SLURP, ATIS and TREC6 (Bastianelli et al., 2020; Hemphill et al., 1990; Li and Roth, 2002; Chang and Chen, 2022) demonstrate that our ML-LMCL significantly outperforms previous best models and model analysis further verifies the advantages of our model. The contributions of our work are four-fold: - We propose ML-LMCL, which utilizes mutual learning to encourage the exchange of knowledge between the model trained on clean manual transcripts and the model trained on ASR transcripts. To the best of our knowledge, we make the first attempt to apply mutual learning to improve ASR robustness in SLU task. - To better distinguish between intra-cluster and inter-cluster pairs, we introduce a distance polarization regularizer to achieve large-margin contrastive learning. - We adopt a cyclical annealing schedule to mitigate KL vanishing, which is neglected in the previous SLU approaches. - Experiments on three public datasets demonstrate that the proposed model achieves new state-of-the-art performance. ## 2 Approach Our framework includes four elements: (1) Selfsupervised contrastive learning with a distance polarization regularizer in pre-training. (2) Mutual learning between the model trained on clean manual transcripts and the model trained on ASR transcripts in fine-tuning. (3) Supervised contrastive learning with a distance polarization regularizer in fine-tuning. (4) Self-distillation with the cyclical annealing schedule in fine-tuning. ## 2.1 Self-Supervised Contrastive Learning Following Chang and Chen (2022), we utilize selfsupervised contrastive learning in pre-training for learning sentence representations invariant to misrecognition to handle ASR errors. Inspired by the ![2_image_0.png](2_image_0.png) success of pre-trained models (Liu et al., 2022b; Zhang et al., 2023a; Cheng et al., 2023a; Zhang et al., 2023b; Yang et al., 2023a), we continually train a pre-trained RoBERTa (Liu et al., 2019) on spoken language corpus. Given a mini-batch of input data of N pairs of transcripts B ={(x p i , x q i )}i=1..N , where x p i denotes a clean manual transcript and x q i denotes its associated ASR transcript. As shown in Figure 2, we first apply the pre-trained RoBERTa and utilize the last layer of [CLS] to obtain the representation h p i for x p i and h q i for x q i : $$\begin{array}{l}{{h_{i}^{p}=\mathrm{RoBERTa}(x_{i}^{p})}}\\ {{h_{i}^{q}=\mathrm{RoBERTa}(x_{i}^{q})}}\end{array}$$ Then we apply the proposed self-supervised contrastive loss Lsc (Chen et al., 2020a; Gao et al., 2021) to adjust the sentence representations: $$\mathcal{L}_{sc}=-\frac{1}{2N}\sum_{(h,h^{+})\in P}\log\frac{e^{s(h,h^{+})/\tau_{sc}}}{\sum_{h^{\prime}\neq h}^{B}e^{s(h,h^{\prime})/\tau_{sc}}}$$ $$=-\mathbb{E}_{P}\Big{[}s(h,h^{+})/\tau_{sc}\Big{]}+\mathbb{E}\Big{[}\log\big{(}\sum_{h^{\prime}\neq h}^{B}e^{s(h,h^{\prime})/\tau_{sc}}\big{)}\Big{]}\tag{3}$$ where P is composed of 2N positive pairs of either (h p i , hq i ) or (h q i , hp i ), τsc is the temperature hyper-parameter and s(·, ·) denotes the cosine similarity function. 
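For illustration, the objective in Eq. (3) can be computed over a batch of paired [CLS] representations as follows. This is a minimal PyTorch sketch (our own code, not the released implementation); the function and tensor names are assumptions, with τsc = 0.2 as in our setup.

```python
import torch
import torch.nn.functional as F

def self_supervised_contrastive_loss(h_clean, h_asr, tau_sc=0.2):
    """L_sc of Eq. (3) over a mini-batch of N (clean, ASR) transcript pairs.

    h_clean, h_asr: (N, d) [CLS] representations of x^p_i and x^q_i.
    Each transcript's positive is its paired transcript; the other 2N - 2
    transcripts in the batch (clean or ASR) serve as negatives.
    """
    n = h_clean.size(0)
    h = F.normalize(torch.cat([h_clean, h_asr], dim=0), dim=-1)      # (2N, d)
    sim = h @ h.t() / tau_sc                                          # cosine similarity / tau
    eye = torch.eye(2 * n, dtype=torch.bool, device=h.device)
    sim = sim.masked_fill(eye, float("-inf"))                         # exclude h' = h
    # positives: row i (clean) pairs with row i + N (ASR) and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(h.device)
    return F.cross_entropy(sim, targets)
```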
In Eq. (3), the first term pulls the clean manual transcript and its associated ASR transcript (positive example) closer together, and the second term pushes irrelevant ones (negative examples) far apart to promote uniformity in the representation space (Wang and Isola, 2020). Note that for a transcript, its negative examples may be clean manual transcripts or ASR transcripts. For example, in Figure 2, *recap my day* is a clean manual transcript and *chicken tikka recipe* is an ASR transcript. However, conventional contrastive learning has the problem that semantically similar pairs are still pushed away (Chen et al., 2021). It indiscriminately enlarges distances between all pairs of instances and may not distinguish intra-cluster from inter-cluster pairs correctly, which causes some similar instance pairs to still be pushed away. Moreover, it may wrongly discard some negative pairs and regard them as semantically similar pairs, even though its learning objective treats each pair of original instances as dissimilar. These problems result in the distance between the clean manual transcript and its associated ASR transcript not being significantly smaller than the distance between unpaired instances, which is detrimental to improving ASR robustness. Motivated by Chen et al. (2021), we introduce a distance polarization regularizer to build a large-margin contrastive learning model. For simplicity, we further denote the following normalized cosine similarity:

$$\mathcal{D}_{ij}=\left(1+s(h_{i},h_{j})\right)/2,\tag{4}$$

which measures the similarity between the pair $(h_{i},h_{j})\in B$ with a real value $\mathcal{D}_{ij}\in[0,1]$. Let the matrix $\mathcal{D}=(\mathcal{D}_{ij})\in\mathbb{R}^{M\times M}$ collect these distances, where M = 2N denotes the total number of transcripts in B. We suppose that there exist thresholds $0<\delta^{+}<\delta^{-}<1$ such that the intra-class distances are smaller than $\delta^{+}$ while the inter-class distances are larger than $\delta^{-}$. The proposed distance polarization regularizer $\mathcal{L}_{reg}$ is as follows:

$$\mathcal{L}_{reg}=\left\|\min\left(\left(\mathcal{D}-\mathbf{\Delta}^{+}\right)\odot\left(\mathcal{D}-\mathbf{\Delta}^{-}\right),0\right)\right\|_{1}\tag{5}$$

where $\mathbf{\Delta}^{+}=\delta^{+}\times\mathbf{1}_{M\times M}$ and $\mathbf{\Delta}^{-}=\delta^{-}\times\mathbf{1}_{M\times M}$ are the threshold matrices and $\|\cdot\|_{1}$ denotes the ℓ1-norm. The region $(\delta^{+},\delta^{-})\subseteq[0,1]$ can be regarded as the large margin used to discriminate the similarity of data pairs. $\mathcal{L}_{reg}$ encourages a sparse distance distribution in the margin region $(\delta^{+},\delta^{-})$, because any distance $\mathcal{D}_{ij}$ falling into the margin region $(\delta^{+},\delta^{-})$ increases $\mathcal{L}_{reg}$. Minimizing the regularizer $\mathcal{L}_{reg}$ therefore encourages more pairwise distances $\{\mathcal{D}_{ij}\}_{i,j=1}^{M}$ to distribute in the regions $[0,\delta^{+}]$ or $[\delta^{-},1]$, so that each data pair is adaptively judged as similar or dissimilar. As a result, by introducing the regularizer, our framework can better distinguish between intra-cluster and inter-cluster pairs.
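A minimal sketch of the regularizer in Eq. (5) is given below (again our own illustration; the batch layout is an assumption, and δ+ and δ− are set to the values used in our experiments, 0.2 and 0.5).

```python
import torch
import torch.nn.functional as F

def distance_polarization_regularizer(h, delta_pos=0.2, delta_neg=0.5):
    """L_reg of Eq. (5): penalizes pairwise D_ij falling inside (delta_pos, delta_neg).

    h: (M, d) representations of all M = 2N transcripts in the mini-batch.
    """
    h = F.normalize(h, dim=-1)
    d = (1.0 + h @ h.t()) / 2.0                # D_ij = (1 + s(h_i, h_j)) / 2, in [0, 1]
    # (D - delta_pos) * (D - delta_neg) is negative exactly when D_ij lies in the margin
    # region, so min(., 0) keeps only those entries and the l1-norm sums their magnitudes.
    inside_margin = torch.clamp((d - delta_pos) * (d - delta_neg), max=0.0)
    return inside_margin.abs().sum()
```

In pre-training this term is added to L_sc with weight λreg (Eq. 6), and the same form is reused for the supervised variants in Eqs. (12) and (13).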
Then the final large-margin self-supervised contrastive learning loss L reg sc is the weighted sum of self-supervised contrastive learning loss Lsc and the regularizer Lreg, which is calculated as follows: $${\mathcal{L}}_{s c}^{r e g}={\mathcal{L}}_{s c}+\lambda_{r e g}\cdot{\mathcal{L}}_{r e g}$$ $$(6)$$ 6494 ![3_image_0.png](3_image_0.png) ## 2.2 Mutual Learning Previous work reveals that mutual learning can exploit the mutual guidance information between two models to improve their performance simultaneously (Nie et al., 2018; Hong et al., 2021). By mutual learning, we can obtain compact networks that perform better than those distilled from a strong but static teacher. In fine-tuning, we use the same pre-trained model in Sec.2.1 to train two networks on the manual transcripts and the ASR transcripts, respectively. For a manual transcript x p i and its associated ASR transcript x q i , the output probabilities p t i,p and p t i,q at the t-th epoch are as follows: $$\begin{array}{l c r}{{p_{i,p}^{t}=M_{\mathrm{clean}}(x_{i}^{p})}}&{{}}&{{}}&{{(7)}}\\ {{p_{i,q}^{t}=M_{\mathrm{ars}}(x_{i}^{q})}}&{{}}&{{}}&{{(8)}}\end{array}$$ where Mclean denotes the model trained on clean manual transcripts and Masr denotes the model trained on ASR transcripts. We adopt Jensen-Shannon (JS) divergence as the mimicry loss, with the aim of effectively encouraging the two models to mimic each other. The mutual learning loss Lmut in Figure 3 is as follows: $${\mathcal{L}}_{m u t}=\sum_{i=1}^{N}J S(p_{i,p}^{t}\|p_{i,q}^{t})\qquad\qquad(9)$$ ## 2.3 Supervised Contrastive Learning We also apply supervised contrastive learning in fine-tuning by using label information. The pairs with the same label are regarded as positive samples and the pairs with different labels are regarded as negative samples. The embeddings of positive samples are pulled closer while the embeddings of negative samples are pushed away (Jian et al., 2022; Zhou et al., 2022). We utilize the supervised contrastive loss L p c for the model trained on manual transcripts and L q c for the model trained on ASR transcripts to encourage the learned representations to be aligned with their labels: $$\mathcal{L}_{c}^{p}=-\frac{1}{N}\cdot\sum_{i=1}^{N}\sum_{j\neq i}^{N}1_{y_{i}^{p}=y_{j}^{p}}\log\frac{e^{s(h_{i}^{p},h_{j}^{p})/\tau_{c}}}{\sum_{k\neq i}^{N}e^{s(h_{i}^{p},h_{k}^{p})/\tau_{c}}}$$ $$\mathcal{L}_{c}^{q}=-\frac{1}{N}\cdot\sum_{i=1}^{N}\sum_{j\neq i}^{N}1_{y_{i}^{q}=y_{j}^{q}}\log\frac{e^{s(h_{i}^{q},h_{j}^{q})/\tau_{c}}}{\sum_{k\neq i}^{N}e^{s(h_{i}^{q},h_{k}^{q})/\tau_{c}}}$$ $$\quad(10)$$ $$(11)$$ )/τc(10) )/τc(11) where y p i =y p j denotes the labels of h p i and h p j are the same, y q i =y q j denotes the label of h q i and h q j are the same and τc is the temperature hyper-parameter. Like Sec.2.1, we also use distance polarization regularizers L p reg and L qreg to enhance the generalization ability of contrastive learning algorithm: $$\mathcal{L}_{reg}^{P}=\left\|\min\left(\left(\mathbf{D}^{P}-\mathbf{\Delta}^{+}\right)\odot\left(\mathbf{D}^{P}-\mathbf{\Delta}^{-}\right),0\right)\right\|_{1}\tag{12}$$ $$\mathcal{L}_{reg}^{q}=\left\|\min\left(\left(\mathbf{D}^{q}-\mathbf{\Delta}^{+}\right)\odot\left(\mathbf{D}^{q}-\mathbf{\Delta}^{-}\right),0\right)\right\|_{1}\tag{13}$$ where Dp denotes the matrix consisting of pairwise distances on the clean manual transcripts and Dq denotes the matrix on the ASR transcripts. 
The large-margin supervised contrastive learning losses L_{c,p}^reg and L_{c,q}^reg in Figure 3 are as follows:

$$\mathcal{L}_{c,p}^{reg}=\mathcal{L}_{c}^{p}+\lambda_{reg}^{p}\mathcal{L}_{reg}^{p}\tag{14}$$
$$\mathcal{L}_{c,q}^{reg}=\mathcal{L}_{c}^{q}+\lambda_{reg}^{q}\mathcal{L}_{reg}^{q}\tag{15}$$

where λ_reg^p and λ_reg^q are two hyper-parameters. The final large-margin supervised contrastive learning loss L_c^reg is as follows:

$$\mathcal{L}_{c}^{reg}=\mathcal{L}_{c,p}^{reg}+\mathcal{L}_{c,q}^{reg}\tag{16}$$

## 2.4 Self-Distillation

To further reduce the impact of ASR errors, we apply a self-distillation method. We regularize the model by minimizing the Kullback–Leibler (KL) divergence (Kullback and Leibler, 1951; He et al., 2022) between the current prediction and the previous one (Liu et al., 2020, 2021). For the manual transcript x_i^p and its corresponding label y_i^p, p_{i,p}^t = P(y_i^p | x_i^p, t) denotes the probability distribution of x_i^p at the t-th epoch, and p_{i,q}^t = P(y_i^q | x_i^q, t) denotes the probability distribution of x_i^q at the t-th epoch. The loss functions L_d^p and L_d^q of self-distillation in Figure 3 are formulated as:

$$\mathcal{L}_{d}^{p}=\frac{1}{N}\sum_{i=1}^{N}\tau_{d}^{2}\,KL\Big(\frac{p_{i,p}^{t-1}}{\tau_{d}}\Big\|\frac{p_{i,p}^{t}}{\tau_{d}}\Big)\tag{17}$$
$$\mathcal{L}_{d}^{q}=\frac{1}{N}\sum_{i=1}^{N}\tau_{d}^{2}\,KL\Big(\frac{p_{i,q}^{t-1}}{\tau_{d}}\Big\|\frac{p_{i,q}^{t}}{\tau_{d}}\Big)\tag{18}$$

where τ_d is the temperature that scales the smoothness of the two distributions. Note that p_{i,p}^0 is the one-hot vector of label y_i^p and p_{i,q}^0 is that of label y_i^q. Then the final self-distillation loss L_d is the sum of the two loss functions L_d^p and L_d^q:

$$\mathcal{L}_{d}=\mathcal{L}_{d}^{p}+\mathcal{L}_{d}^{q}\tag{19}$$

## 2.5 Training Objective

**Pre-training** Following (Chang and Chen, 2022), the pre-training loss Lpt is the weighted sum of the large-margin self-supervised contrastive learning loss L_sc^reg and an MLM loss Lmlm:

$$\mathcal{L}_{pt}=\lambda_{pt}\mathcal{L}_{sc}^{reg}+(1-\lambda_{pt})\cdot\mathcal{L}_{mlm}\tag{20}$$

where λ_pt is the coefficient balancing the two tasks.

**Fine-tuning** Following Haihong et al. (2019); Chen et al. (2022), the intent detection objective is:

$$\mathcal{L}_{ce}^{p}=-\sum_{i=1}^{N}y_{i}^{p}\log p_{i,p}^{t}\tag{21}$$
$$\mathcal{L}_{ce}^{q}=-\sum_{i=1}^{N}y_{i}^{q}\log p_{i,q}^{t}\tag{22}$$
$$\mathcal{L}_{ce}=\mathcal{L}_{ce}^{p}+\mathcal{L}_{ce}^{q}\tag{23}$$

The final fine-tuning loss Lft is the weighted sum of the cross-entropy loss Lce, the mutual learning loss Lmut, the large-margin supervised contrastive learning loss L_c^reg and the self-distillation loss Ld:

$$\mathcal{L}_{ft}=\mathcal{L}_{ce}+\alpha\mathcal{L}_{mut}+\beta\mathcal{L}_{c}^{reg}+\gamma\mathcal{L}_{d}\tag{24}$$

where α, β, γ are the trade-off hyper-parameters. However, directly using the KL divergence for the self-distillation loss may suffer from the KL vanishing issue. To mitigate it, we adopt a cyclical annealing schedule, which is also applied for this purpose in Fu et al. (2019); Zhao et al. (2021).
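Before specifying the annealing schedule concretely, the sketch below (an assumed implementation, not the authors' code) shows the self-distillation term of Eqs. 17–19 that γ weights in Eq. 24: the current-epoch prediction is pulled toward the prediction stored from the previous epoch (a one-hot label vector at the first epoch) under temperature τ_d.

```python
# Minimal sketch of the self-distillation loss (Eqs. 17-19).
import torch
import torch.nn.functional as F

def self_distill_loss(logits_t: torch.Tensor,
                      probs_prev: torch.Tensor,
                      tau_d: float = 5.0) -> torch.Tensor:
    """logits_t: (N, C) current-epoch logits; probs_prev: (N, C) probabilities
    stored from the previous epoch (one-hot labels at epoch 1)."""
    log_p_t = F.log_softmax(logits_t / tau_d, dim=-1)
    # Temperature-smoothed previous distribution, then KL(previous || current),
    # scaled by tau_d^2 as in Eqs. 17-18.
    p_prev = F.softmax(torch.log(probs_prev + 1e-8) / tau_d, dim=-1)
    return tau_d ** 2 * F.kl_div(log_p_t, p_prev, reduction="batchmean")

# L_d = L_d^p + L_d^q (Eq. 19): one term per branch (clean and ASR).
clean_logits, asr_logits = torch.randn(16, 18), torch.randn(16, 18)
clean_prev = torch.softmax(torch.randn(16, 18), dim=-1)
asr_prev = torch.softmax(torch.randn(16, 18), dim=-1)
loss_d = (self_distill_loss(clean_logits, clean_prev)
          + self_distill_loss(asr_logits, asr_prev))
```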
Concretely, γ in Eq.24 changes periodically during training iterations, as described by Eq.25:

$$\gamma=\begin{cases}\dfrac{r}{RG},&r\leqslant RG\\ 1,&r>RG\end{cases},\qquad r=\mathrm{mod}(t-1,G)\tag{25}$$

where t represents the current training iteration and R and G are two hyper-parameters.

## 3 Experiments

## 3.1 Datasets And Metrics

Following Chang and Chen (2022), we conduct the experiments on three publicly available benchmark datasets1: SLURP, ATIS and TREC6 (Bastianelli et al., 2020; Hemphill et al., 1990; Li and Roth, 2002; Chang and Chen, 2022). The statistics of the three datasets are shown in Table 1.

| Dataset | #Class | Avg. Length | Train | Test |
|-----------|----------|---------------|---------|--------|
| SLURP | 18 × 46 | 6.93 | 50,628 | 10,992 |
| ATIS | 22 | 11.14 | 4,978 | 893 |
| TREC6 | 6 | 8.89 | 5,452 | 500 |

Table 1: The statistics of all datasets. The *test* set of SLURP is sub-sampled.

SLURP is a challenging SLU dataset with various domains, speakers, and recording settings. An intent in SLURP is a (scenario, action) pair; joint accuracy is used as the evaluation metric, and a prediction is considered correct only when both the scenario and the action are correctly predicted. The ASR transcripts are obtained by the Google Web API. ATIS and TREC6 are two SLU datasets for flight reservation and question classification, respectively.

1SLURP is available at https://github.com/MiuLab/SpokenCSE, and ATIS and TREC6 are available at https://github.com/Observeai-Research/Phoneme-BERT.

| Model | SLURP | ATIS | TREC6 | SLURP | ATIS | TREC6 |
|------------------------------------------|--------|--------|--------|--------|--------|--------|
| RoBERTa (Liu et al., 2019) | 83.97 | 94.53 | 84.08 | 84.42 | 94.86 | 84.54 |
| Phoneme-BERT (Sundararaman et al., 2021) | 83.78 | 94.83 | 85.96 | 84.16 | 95.14 | 86.48 |
| SimCSE (Gao et al., 2021) | 84.47 | 94.07 | 84.92 | 84.88 | 94.32 | 85.46 |
| SpokenCSE (Chang and Chen, 2022) | 85.26 | 95.10 | 86.36 | 85.64 | 95.58 | 86.82 |
| ML-LMCL | 88.52† | 96.52† | 89.24† | 89.16† | 97.21† | 89.96† |

Table 2: Performance comparison of ML-LMCL and baselines on SLURP, ATIS and TREC6, without (left three columns) and with (right three columns) manual transcripts in fine-tuning.

We use the synthesized text released by Phoneme-BERT (Sundararaman et al., 2021), where the data is synthesized by a text-to-speech (TTS) model and later transcribed by ASR. We adopt accuracy as the evaluation metric for intent detection.

## 3.2 Implementation Details

We pre-train the model for 10K steps with a batch size of 128 on each dataset, and fine-tune the whole model for up to 10 epochs with a batch size of 256 to avoid overfitting. Training early-stops if the loss on the dev set does not decrease for 3 epochs. On SLURP, two separate classification heads are trained for scenario and action with shared BERT embeddings. The mask ratio of MLM is set to 0.15, τsc is set to 0.2, δ+ is set to 0.2, δ− is set to 0.5, λreg is set to 0.1, τc is set to 0.2, λ_reg^p is set to 0.15, λ_reg^q is set to 0.15, τd is set to 5, λpt is set to 0.5, α is set to 1, β is set to 0.1, R is set to 0.5, and G is set to 5000. The reported scores are averaged over 5 runs. During both pre-training and fine-tuning, we utilize the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.98, and 4k warm-up updates to optimize the parameters. The training process lasts a few hours.
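As a concrete illustration, the cyclical annealing schedule of Eq. 25 can be sketched as follows (an assumed implementation, not the authors' code), using the reported hyper-parameter values R = 0.5 and G = 5000.

```python
# Minimal sketch of the cyclical annealing schedule for gamma (Eq. 25).
def cyclical_gamma(t: int, R: float = 0.5, G: int = 5000) -> float:
    """Within each cycle of G iterations, gamma rises linearly from 0 to 1
    over the first R*G iterations, then stays at 1 until the cycle restarts."""
    r = (t - 1) % G
    return r / (R * G) if r <= R * G else 1.0

# Example: gamma at a few iterations of the first two cycles.
for t in (1, 1250, 2500, 4000, 5001, 7500):
    print(t, round(cyclical_gamma(t), 3))
```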
All experiments are conducted on an Nvidia Tesla A100 GPU.

## 3.3 Baselines

We compare our model with the following baselines: (1) RoBERTa (Liu et al., 2019): a RoBERTa-base model directly fine-tuned on the target training data; (2) Phoneme-BERT (Sundararaman et al., 2021): a RoBERTa-base model which is further pre-trained on an additional corpus with phoneme information and then fine-tuned on the target training data; (3) SimCSE (Gao et al., 2021): a state-of-the-art sentence embedding method applying contrastive learning; (4) SpokenCSE (Chang and Chen, 2022): a strong baseline for improving ASR robustness in the SLU task.

## 3.4 Main Results

The performance comparison of ML-LMCL and the baselines is shown in Table 2, from which we have the following observations: (1) Our ML-LMCL gains consistent improvements on all tasks and datasets. This is because our model achieves mutual guidance between the models trained on the manual and the ASR transcripts, allowing the two models to share knowledge with each other. Moreover, large-margin contrastive learning encourages the model to distinguish between intra-cluster and inter-cluster pairs more accurately, which avoids pushing away semantically similar pairs as much as possible. In addition, the cyclical annealing schedule is applied to mitigate KL vanishing, which improves the robustness of the model. When not using manual transcripts, ML-LMCL still outperforms SpokenCSE, which also demonstrates the effectiveness of large-margin contrastive learning and the cyclical annealing schedule for improving ASR robustness in SLU. (2) The improvement on the SLURP dataset is clearly more significant. We believe the reason is that SLURP is a more challenging SLU dataset than ATIS and TREC6. An intent in SLURP is a (scenario, action) pair, and a prediction is considered correct only if both the scenario and the action are correctly predicted. Due to the shortcomings of conventional contrastive learning, previous work fails to align the ASR transcript and its associated manual transcript with high accuracy. As a result, due to ASR errors, it is common that one of the two components of an intent is incorrectly predicted. Our ML-LMCL is dedicated to overcoming the shortcomings of conventional contrastive learning, resulting in better alignment and improved performance.

## 3.5 Analysis

To verify the advantages of ML-LMCL from different perspectives, we use clean manual transcripts and conduct a set of ablation experiments. The experimental results are shown in Table 3.

| Model | SLURP | ATIS | TREC6 |
|-----------------------|---------------|---------------|---------------|
| ML-LMCL | 89.16 | 97.21 | 89.96 |
| w/o Lmut | 88.68 (↓0.48) | 96.83 (↓0.38) | 89.52 (↓0.44) |
| w/o Lreg | 88.92 (↓0.24) | 96.98 (↓0.23) | 89.77 (↓0.19) |
| w/o L_reg^p & L_reg^q | 88.75 (↓0.41) | 96.92 (↓0.29) | 89.74 (↓0.22) |
| w/o cyc | 88.98 (↓0.18) | 97.08 (↓0.13) | 89.85 (↓0.11) |
| w/o Lmut + bsz↑ | 88.72 (↓0.44) | 96.92 (↓0.29) | 89.65 (↓0.31) |
| w/ Lsoft | 89.12 (↓0.04) | 97.18 (↓0.03) | 89.92 (↓0.04) |

Table 3: Ablation results (accuracy) with manual transcripts.

## 3.5.1 Effectiveness Of Mutual Learning

One of the core contributions of ML-LMCL is mutual learning, which allows the two models trained on manual and ASR transcripts to learn from each other. To verify the effectiveness of mutual learning, we remove the mutual learning loss and refer to this variant as w/o Lmut in Table 3. We observe that accuracy drops by 0.48, 0.38 and 0.44 on SLURP, ATIS and TREC6, respectively.
Contrastive learning benefits more from a larger batch size because a larger batch size provides more negative examples to facilitate convergence (Chen et al., 2020a), and many attempts have been made to improve the performance of contrastive learning by indirectly increasing the batch size (He et al., 2020; Chen et al., 2020b). Therefore, to verify that it is the proposed mutual learning rather than the indirectly boosted batch size that works, we double the batch size after removing the mutual learning loss and refer to this variant as w/o Lmut + bsz↑. The results show that despite the boosted batch size, it still performs worse than ML-LMCL, which demonstrates that the improvements come from the proposed mutual learning rather than the boosted batch size.

## 3.5.2 Effectiveness Of Distance Polarization Regularizer

To verify the effectiveness of the distance polarization regularizer, we also remove it in pre-training and in fine-tuning, which is named w/o Lreg and w/o L_reg^p & L_reg^q, respectively. When Lreg is removed, the accuracy drops by 0.24, 0.23 and 0.19 on SLURP, ATIS and TREC6, respectively. And when L_reg^p and L_reg^q are removed, the accuracy drops by 0.41, 0.29 and 0.22 on SLURP, ATIS and TREC6. The results demonstrate that the distance polarization regularizer can alleviate the negative impact of conventional contrastive learning. Furthermore, the drop in accuracy is greater in fine-tuning than in pre-training. We believe the reason is that supervised contrastive learning in fine-tuning is more easily affected by label noise than unsupervised contrastive learning in pre-training. As a result, more semantically similar pairs are incorrectly pushed away in fine-tuning when the regularizer is removed.

Chang and Chen (2022) also propose a self-distilled soft contrastive learning loss to relieve the negative effect of noisy labels in supervised contrastive learning. However, we believe that the regularizer can also effectively reduce the impact of label noise, so ML-LMCL does not include another module to tackle the problem of label noise. To verify this, we augment ML-LMCL with the self-distilled soft contrastive learning loss, which is termed w/ Lsoft. We observe that Lsoft not only brings no improvement but even causes a slight performance drop, which proves that the distance polarization regularizer can indeed reduce the impact of label noise.

## 3.5.3 Effectiveness Of Cyclical Annealing Schedule

We also remove the cyclical annealing schedule and refer to this variant as *w/o cyc*. We observe that the accuracy drops by 0.18, 0.13 and 0.11 on SLURP, ATIS and TREC6, respectively, which demonstrates that the cyclical annealing schedule also plays an important role in enhancing the performance by mitigating the problem of KL vanishing.

## 3.6 Visualization

To better understand how mutual learning and large-margin contrastive learning affect and contribute to the final result, we show the visualization of an example from the SLURP dataset in Figure 4. *"local theater screening which movie"* and *"olly what movies are playing near me"* are two manual transcripts with the same intent, and their representations and those of their associated ASR transcripts stay close to each other in ML-LMCL. However, in SpokenCSE, their representations keep a longer distance, which further demonstrates that our method can align the ASR transcript and its associated manual transcript with high accuracy and better avoid semantically similar pairs being pushed away.
## 4 Related Work

Error-robust Spoken Language Understanding SLU usually suffers from ASR error propagation, and this paper focuses on improving ASR robustness in SLU. Chang and Chen (2022) make the first attempt to use contrastive learning to improve ASR robustness with only textual information. Following Chang and Chen (2022), this paper only focuses on intent detection in SLU. Intent detection is usually formulated as an utterance classification problem. As a large number of pre-trained models achieve surprising results across various tasks (Dong et al., 2022; Yang et al., 2023c; Zhu et al., 2023; Yang et al., 2023b), some BERT-based (Devlin et al., 2019) pre-trained work has been explored in SLU, where the representation of the special token [CLS] is used for intent detection. In our work, we adopt RoBERTa and try to learn invariant representations between clean manual transcripts and erroneous ASR transcripts.

Mutual Learning Our method is motivated by the recent success of mutual learning. Mutual learning is an effective method that trains two models of the same architecture simultaneously but with different initializations and encourages them to learn collaboratively from each other. Unlike knowledge distillation (Hinton et al., 2015), mutual learning does not need a powerful teacher network, which is not always available. Mutual learning was first proposed to leverage information from multiple models and allow effective dual knowledge transfer in image processing tasks (Zhang et al., 2018; Zhao et al., 2021). Based on this, Wu et al. (2019b) utilize mutual learning to capture complementary features in semi-supervised classification, and Wu et al. (2019a) apply mutual learning between contour extraction and edge extraction for saliency detection. In NLP, Zhao et al. (2021) utilize mutual learning for speech translation to transfer knowledge between a speech translation model and a machine translation model. In our work, we apply a mutual learning framework to transfer knowledge between the model trained on manual transcripts and the model trained on ASR transcripts.

Contrastive learning Contrastive learning aims at learning example representations by minimizing the distance between positive pairs in the vector space and maximizing the distance between negative pairs (Saunshi et al., 2019; Liang et al., 2022; Liu et al., 2022a); it was first proposed in the field of computer vision (Chopra et al., 2005; Schroff et al., 2015; Sohn, 2016; Chen et al., 2020a; Wang and Liu, 2021). In the NLP area, contrastive learning is applied to learn sentence embeddings (Giorgi et al., 2021; Yan et al., 2021), translation (Pan et al., 2021; Ye et al., 2022) and summarization (Wang et al., 2021; Cao and Wang, 2021). Recently, Chen et al. (2021) point out that conventional contrastive learning algorithms are still not good enough since they fail to maintain a large margin in the distance space for reliable instance discrimination. Inspired by this, we add a distance polarization regularizer similar to that of Chen et al. (2021) to address this issue. To the best of our knowledge, we are the first to introduce the idea of large-margin contrastive learning to the SLU task.

## 5 Conclusion

In this paper, we propose ML-LMCL, a novel framework for improving ASR robustness in SLU. We apply mutual learning and introduce the distance polarization regularizer. Moreover, a cyclical annealing schedule is utilized to mitigate KL vanishing.
Experiments and analysis on three benchmark datasets show that our model significantly outperforms previous models whether clean manual transcriptions is available in fine-tuning or not. Future work will focus on improving ASR robustness with only clean manual transcriptions. ## Limitations By applying mutual learning, introducing distance polarization regularizer and utilizing cyclical annealing schedule, ML-LMCL achieves significant improvement on three benchmark datasets. Nevertheless, we summarize two limitations for further discussion and investigation of other researchers: (1) ML-LMCL still requires the ASR transcripts in fine-tuning to align with the target inference scenario. However, the ASR transcripts may not always be readily available due to the constraint of ASR systems and privacy concerns. In the future work, we will attempt to further improve ASR robustness without using any ASR transcripts. (2) The training and inference runtime of MLLMCL is larger than that of baselines. We attribute the extra cost to the fact that ML-LMCL has more parameters than baselines. In the future work, we plan to design a new paradigm with fewer parameters to reduce the requirement for GPU resources. ## Acknowledgements We thank all anonymous reviewers for their constructive comments. This paper was partially supported by Shenzhen Science & Technology Research Program (No: GXWD2020123116580700720200814115301001) and NSFC (No: 62176008). ## References Hervé Abdi and Lynne J Williams. 2010. Principal component analysis. Wiley interdisciplinary reviews: computational statistics, 2(4):433–459. Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, and Verena Rieser. 2020. Slurp: A spoken language understanding resource package. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 7252–7262. Shuyang Cao and Lu Wang. 2021. Cliff: Contrastive learning for improving faithfulness and factuality in abstractive summarization. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 6633–6649. Ya-Hsin Chang and Yun-Nung Chen. 2022. Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding. In *Proc. Interspeech 2022*, pages 3458–3462. Dongsheng Chen, Zhiqi Huang, Xian Wu, Shen Ge, and Yuexian Zou. 2022. Towards joint intent detection and slot filling via higher-order attention. In *IJCAI*. Shuo Chen, Gang Niu, Chen Gong, Jun Li, Jian Yang, and Masashi Sugiyama. 2021. Large-margin contrastive learning with distance polarization regularizer. In *International Conference on Machine Learning*, pages 1673–1683. PMLR. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pages 1597–1607. PMLR. Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. 2020b. Improved baselines with momentum contrastive learning. *arXiv preprint arXiv:2003.04297*. Xuxin Cheng, Qianqian Dong, Fengpeng Yue, Tom Ko, Mingxuan Wang, and Yuexian Zou. 2023a. M3st: Mix at three levels for speech translation. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE. Xuxin Cheng, Zhihong Zhu, Hongxiang Li, Yaowei Li, and Yuexian Zou. 2023b. Ssvmr: Saliency-based selftraining for video-music retrieval. In *ICASSP 2023-* 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE. 
Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In *2005 IEEE Computer Society Conference on Computer Vision and* Pattern Recognition (CVPR'05), volume 1, pages 539–546. IEEE. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Chenhe Dong, Yinghui Li, Haifan Gong, Miaoxin Chen, Junxin Li, Ying Shen, and Min Yang. 2022. A survey of natural language generation. *ACM Computing* Surveys, 55(8):1–38. Samrat Dutta, Shreyansh Jain, Ayush Maheshwari, Ganesh Ramakrishnan, and Preethi Jyothi. 2022. Error correction in asr using sequence-to-sequence models. *arXiv preprint arXiv:2202.01157*. Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli Celikyilmaz, and Lawrence Carin. 2019. Cyclical annealing schedule: A simple approach to mitigating KL vanishing. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 240–250. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 6894–6910. John Giorgi, Osvald Nitski, Bo Wang, and Gary Bader. 2021. Declutr: Deep contrastive learning for unsupervised textual representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 879–895. E Haihong, Peiqing Niu, Zhongfu Chen, and Meina Song. 2019. A novel bi-directional interrelated model for joint intent detection and slot filling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5467– 5471. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In *Proceedings of the IEEE/CVF conference on computer vision* and pattern recognition, pages 9729–9738. Rian He, Shubin Cai, Zhong Ming, and Jialei Zhang. 2022. Weighted self distillation for chinese word segmentation. In *Findings of the Association for* Computational Linguistics: ACL 2022, pages 1757– 1770. Charles T Hemphill, John J Godfrey, and George R Doddington. 1990. The atis spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7). Peixian Hong, Tao Wu, Ancong Wu, Xintong Han, and Wei-Shi Zheng. 2021. Fine-grained shapeappearance mutual learning for cloth-changing person re-identification. In *Proceedings of the* IEEE/CVF conference on computer vision and pattern recognition, pages 10513–10522. Chao-Wei Huang and Yun-Nung Chen. 2019. Adapting pretrained transformer to lattices for spoken language understanding. In *2019 IEEE Automatic Speech* Recognition and Understanding Workshop (ASRU), pages 845–852. IEEE. Yiren Jian, Chongyang Gao, and Soroush Vosoughi. 2022. 
Contrastive learning for prompt-based fewshot language learners. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5577–5587. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. *The annals of mathematical statistics*, 22(1):79–86. Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics. Yinghui Li, Qingyu Zhou, Yangning Li, Zhongli Li, Ruiyang Liu, Rongyi Sun, Zizhen Wang, Chao Li, Yunbo Cao, and Hai-Tao Zheng. 2022. The past mistake is the future wisdom: Error-driven contrastive probability optimization for chinese spell checking. In *Findings of the Association for Computational* Linguistics: ACL 2022, pages 3202–3213. Bin Liang, Qinglin Zhu, Xiang Li, Min Yang, Lin Gui, Yulan He, and Ruifeng Xu. 2022. Jointcl: A joint contrastive learning framework for zero-shot stance detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 81–91. Baohao Liao, Yingbo Gao, and Hermann Ney. 2020. Multi-agent mutual learning at sentence-level and token-level for neural machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1715–1724. Risheng Liu, Zhiying Jiang, Shuzhou Yang, and Xin Fan. 2022a. Twin adversarial contrastive learning for underwater image enhancement and beyond. *IEEE* Transactions on Image Processing, 31:4922–4936. Ruiyang Liu, Yinghui Li, Linmi Tao, Dun Liang, and Hai-Tao Zheng. 2022b. Are we ready for a new paradigm shift? a survey on visual deep mlp. *Patterns*, 3(7):100520. Weijie Liu, Peng Zhou, Zhiruo Wang, Zhe Zhao, Haotang Deng, and Qi Ju. 2020. Fastbert: a selfdistilling bert with adaptive inference time. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 6035– 6044. Yang Liu, Sheng Shen, and Mirella Lapata. 2021. Noisy self-knowledge distillation for text summarization. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 692–703. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Anirudh Mani, Shruti Palaskar, Nimshi Venkat Meripo, Sandeep Konam, and Florian Metze. 2020. Asr error correction and domain adaptation using machine translation. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal* Processing (ICASSP), pages 6344–6348. IEEE. Anastasiia Mishchuk, Dmytro Mishkin, Filip Radenovic, and Jiri Matas. 2017. Working hard to know your neighbor's margins: Local descriptor learning loss. *Advances in neural information processing systems*, 30. Xuecheng Nie, Jiashi Feng, and Shuicheng Yan. 2018. Mutual learning to adapt for joint human parsing and pose estimation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 502– 517. Xiao Pan, Mingxuan Wang, Liwei Wu, and Lei Li. 2021. Contrastive learning for many-to-many multilingual neural machine translation. 
In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 244–258. Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, and Hrishikesh Khandeparkar. 2019. A theoretical analysis of contrastive unsupervised representation learning. In International Conference on Machine Learning, pages 5628–5637. PMLR. Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In *Proceedings of* the IEEE conference on computer vision and pattern recognition, pages 815–823. Gwenaelle Cunha Sergio, Dennis Singh Moirangthem, and Minho Lee. 2020. Attentively embracing noise for robust latent representation in bert. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 3479–3491. Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. Advances in neural information processing systems, 29. Mukuntha Narayanan Sundararaman, Ayush Kumar, and Jithendra Vepa. 2021. Phoneme-bert: Joint language modelling of phoneme sequence and asr transcript. *arXiv preprint arXiv:2102.00804*. Gokhan Tur and Renato De Mori. 2011. *Spoken language understanding: Systems for extracting semantic information from speech*. John Wiley & Sons. Chengyu Wang, Suyang Dai, Yipeng Wang, Fei Yang, Minghui Qiu, Kehan Chen, Wei Zhou, and Jun Huang. 2022. Arobert: An asr robust pre-trained language model for spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:1207–1218. Danqing Wang, Jiaze Chen, Hao Zhou, Xipeng Qiu, and Lei Li. 2021. Contrastive aligned joint learning for multilingual summarization. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2739–2750. Feng Wang and Huaping Liu. 2021. Understanding the behaviour of contrastive loss. In *Proceedings of* the IEEE/CVF conference on computer vision and pattern recognition, pages 2495–2504. Haoyu Wang, Shuyan Dong, Yue Liu, James Logan, Ashish Kumar Agrawal, and Yang Liu. 2020. Asr error correction with augmented transformer for entity retrieval. In *Interspeech*, pages 1550–1554. Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *International* Conference on Machine Learning, pages 9929–9939. PMLR. Runmin Wu, Mengyang Feng, Wenlong Guan, Dong Wang, Huchuan Lu, and Errui Ding. 2019a. A mutual learning method for salient object detection with intertwined multi-supervision. In *Proceedings of the* IEEE/CVF conference on computer vision and pattern recognition, pages 8150–8159. Si Wu, Jichang Li, Cheng Liu, Zhiwen Yu, and HauSan Wong. 2019b. Mutual learning of complementary networks via residual correction for improving semi-supervised classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6500–6509. Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. Consert: A contrastive framework for self-supervised sentence representation transfer. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5065–5075. Bang Yang, Fenglin Liu, Xian Wu, Yaowei Wang, Xu Sun, and Yuexian Zou. 2023a. 
Multicapclip: Auto-encoding prompts for zero-shot multilingual visual captioning. In *Proceedings of the 61st Annual Meeting of the Association for Computational* Linguistics, Toronto, Canada. Association for Computational Linguistics. Bang Yang, Fenglin Liu, Yuexian Zou, Xian Wu, Yaowei Wang, and David A Clifton. 2023b. Zeronlg: Aligning and autoencoding domains for zero-shot multimodal and multilingual natural language generation. *arXiv preprint arXiv:2303.06458*. Shuzhou Yang, Moxuan Ding, Yanmin Wu, Zihan Li, and Jian Zhang. 2023c. Implicit neural representation for cooperative low-light image enhancement. arXiv preprint arXiv:2303.11722. Rong Ye, Mingxuan Wang, and Lei Li. 2022. Crossmodal contrastive learning for speech translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5099–5113. Association for Computational Linguistics. Steve Young, Milica Gašic, Blaise Thomson, and Ja- ´ son D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. *Proceedings of the* IEEE, 101(5):1160–1179. Dong Zhang, Shimin Li, Xin Zhang, Jun Zhan, Pengyu Wang, Yaqian Zhou, and Xipeng Qiu. 2023a. Speechgpt: Empowering large language models with intrinsic cross-modal conversational abilities. arXiv preprint arXiv:2305.11000. Dong Zhang, Rong Ye, Tom Ko, Mingxuan Wang, and Yaqian Zhou. 2023b. Dub: Discrete unit backtranslation for speech translation. arXiv preprint arXiv:2305.11411. Ying Zhang, Tao Xiang, Timothy M Hospedales, and Huchuan Lu. 2018. Deep mutual learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4320–4328. Yuhao Zhang, Hongji Zhu, Yongliang Wang, Nan Xu, Xiaobo Li, and Binqiang Zhao. 2022. A contrastive framework for learning sentence representations from pairwise and triple-wise perspective in angular space. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4892–4903. Jiawei Zhao, Wei Luo, Boxing Chen, and Andrew Gilman. 2021. Mutual-learning improves end-to-end speech translation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, pages 3989–3994. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In *Proceedings of the 55th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 654–664. Yunhua Zhou, Peiju Liu, and Xipeng Qiu. 2022. Knncontrastive learning for out-of-domain intent classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5129–5141. Wei Zhu, Xiaoling Wang, Yuan Ni, and Guotong Xie. 2021. Gaml-bert: Improving bert early exiting by gradient aligned mutual learning. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3033–3044. Zhihong Zhu, Xuxin Cheng, Zhiqi Huang, Dongsheng Chen, and Yuexian Zou. 2023. Towards unified spoken language understanding decoding via label-aware compact linguistics representations. In *Findings of* the Association for Computational Linguistics: ACL 2023. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In Limitation Section. ✗ A2. Did you discuss any potential risks of your work? 
This paper does not involve any data collection and release thus there are no privacy issues. All the datasets used in this paper are publicly available and widely adopted by researchers to test the performance of SLU models. ✓ A3. Do the abstract and introduction summarize the paper's main claims? In Section Abstract and Section 1. Introduction. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** In Section 3. Experiments. ✓ B1. Did you cite the creators of artifacts you used? In section 3. Experiments. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? In section 3. Experiments. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? In section 3. Experiments. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In section 3. Experiments. ## C ✓ **Did You Run Computational Experiments?** In Section 3. Experiments. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In section 3. Experiments. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In section 3. Experiments. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In section 3. Experiments. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In section 3. Experiments. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
tan-etal-2023-guiding
Guiding Dialogue Agents to Complex Semantic Targets by Dynamically Completing Knowledge Graph
https://aclanthology.org/2023.findings-acl.407
In the target-oriented dialogue, the representation and achievement of targets are two interrelated essential issues. In current approaches, the target is typically supposed to be a single object represented as a word, which makes it relatively easy to achieve the target through dialogue with the help of a knowledge graph (KG). However, when the target has complex semantics, the existing knowledge graph is often incomplete in tracking complex semantic relations. This paper studies target-oriented dialog where the target is a topic sentence. We combine the methods of knowledge retrieval and relationship prediction to construct a context-related dynamic KG. On the dynamic KG, we can track the implicit semantic paths in the speaker's mind that may not exist in the existing KGs. In addition, we also designed a novel metric to evaluate the tracked path automatically. The experimental results show that our method can control the agent more logically and smoothly toward the complex target.
# Guiding Dialogue Agents To Complex Semantic Targets By Dynamically Completing Knowledge Graph

Yue Tan1**, Bo Wang**1∗ , Anqi Liu1**, Dongming Zhao**2, Kun Huang2, Ruifang He1**, Yuexian Hou**1 1College of Intelligence and Computing, Tianjin University, Tianjin, China 2AI Lab, China Mobile Communication Group Tianjin Co., Ltd. {tanyue_098, bo_wang, anqi_liu}@tju.edu.cn

∗Corresponding author.

## Abstract

In target-oriented dialogue, the representation and achievement of targets are two interrelated essential issues. In current approaches, the target is typically assumed to be a single object represented as a word, which makes it relatively easy to achieve through dialogue with the help of a knowledge graph (KG). However, when the target has complex semantics, the existing KG is often incomplete in tracking semantic relations. This paper studies target-oriented dialog where the target is a topic sentence. We combine the methods of knowledge retrieval and relationship prediction to construct a context-related dynamic KG, in which we can track the implicit semantic paths in the speaker's mind that may not exist in the existing KGs. In addition, we also designed a novel metric to evaluate the tracked path automatically. The experimental results show that our method can control the agent more logically and smoothly toward the complex target.

## 1 Introduction

Different from open-domain and task-oriented dialog, target-oriented dialog is a more challenging task that aims to achieve a global target through the dialog. This process cannot be decomposed into subtasks as in a task-oriented dialogue and is expected to be semantically coherent and effective with fewer turns. Target-oriented dialog agents have a broad-based demand, e.g., psychotherapy (Sharma et al., 2020), conversational recommendation (Kang et al., 2019), and education (Clarizia et al., 2018), where the agent is expected to guide the dialog to a global target, e.g., a mental state, an item, and a knowledge point, respectively.

In general, the target of target-oriented dialog can be an entity (e.g., an item) or a topic (e.g., a knowledge point). The topic target is more challenging because of its complex semantics, and it is often simplified as keywords in existing works (Tang et al., 2019; Zhong et al., 2021a).

Figure 1: An example of using a KG path in transferring the topic to a target sentence. The key phrase "learn different languages" in the context is missing in the KG, and only "translator" appears in the human transition response. This happens because some relations in the human speaker's mind do not exist in the KG. If the missing concepts and relations can be completed in the KG, we can link the context and target with the transition response.

In this way, existing approaches often require a knowledge graph (KG) to retrieve relevant knowledge between the current dialog context and target keywords (Zhong et al., 2021a; Yang et al., 2022). Some recent work on target-oriented dialog has also used the knowledge stored in LMs to generate knowledge paths to assist dialog generation (Gupta et al., 2022). However, there are still issues for knowledge-based approaches to target-oriented dialogue: (1) Keywords are often too ambiguous to represent complex target semantics. (2) KG knowledge is often insufficient. Concepts and relations for target-oriented processes in specific dialogues are often missing in widely used common KGs (Zhong et al., 2021a; Yang et al., 2022), which results in failed or redundant long processes.
For example, in Figure 1, the key phrase "learn different languages" can reflect contextual semantics better than a single concept. But it is not a node in the KG. In addition, the critical two-hop logic used by the speaker (language-translator-job) is missed in KG, while in the alternative long path, the concepts (e.g., "English", "worker") are redundant for the response generation. (3) KG path acquisition is challenging due to the large search space. Furthermore, complex target semantics requires more precise control over the space of knowledge selection, which is different from current works that use knowledge to enrich response generation without target restriction (Zou et al., 2021; Zhou et al., 2022) or only use the keyword as the target (Gupta et al., 2022). To address these issues, in this work, we represent the target topic with a sentence instead of keywords. Subsequently, instead of using a static KG, we achieve the target sentence by reasoning on a dynamic KG. Before the response generation, the dynamic KG is generated based on static KG according to the dialogue context and the target sentence. This dynamic KG is expected to involve a more context-relevant and shorter path toward the target sentence. Specifically, besides the node and edges in the static KG, the additional dynamic nodes include key phrases in the dialog context. A relationship prediction model predicts the additional dynamic edges. To control the space of KG path selection more reasonably, in constructing the dynamic node, we use an extended "phrases bag" and a trained model respectively to ensure the diversity and relevance of nodes in the dynamic graph. In addition, we design an automatic metric for knowledge path evaluation, considering the convergence of path semantics with the context and target semantics. Our main contributions are as follows: (1) For guiding dialogues towards a given target sentence, we design a knowledge path generation method based on a dynamic KG. As far as we know, this is the first time relationship prediction has been used for multi-hop reasoning of topic transition in target-oriented dialogues. (2) We propose an automatic metric to evaluate the quality of generated knowledge paths, considering the inference relationship between path fragment semantics and sentence semantics. (3) We extracted a subset from the dialogue data set including hard cases where the target-oriented transition cannot be matched by a static KG path and verified the effectiveness of our method on it. ## 2 Related Work Target-oriented dialogue agents In the study of target-oriented dialogue agents, a typical simplified task is keyword-guided dialog leading the dialog to a given keyword or a recommended item through multi-turn dialogue. The task is often divided into two stages (Tang et al., 2019; Qin et al., 2020; Zhong et al., 2021a), in which the first stage is to predict a next-turn keyword, and the second stage is keyword-based response retrieval. Instead of keywords, our work uses sentences with more complex semantics as the global target. In this direction, (Gupta et al., 2022) obtains SOTA performance using a pre-trained language model to generate multi-hop paths between a pair of concepts for transition response generation. Regarding data, (Sevegnani et al., 2021) propose a popular dataset for target-oriented dialog, which will be used in our work. 
Commonsense Reasoning Recent approaches have realized the importance of commonsense reasoning in language generation, e.g., (Ji et al., 2020a) studied commonsense explanation generation. In this work, we follow the research that utilizes commonsense reasoning in generation models (Zhong et al., 2021b; Zou et al., 2021; Zhou et al., 2022). (Yang et al., 2022; Zou et al., 2021) select next-turn concepts from the static KG, conditioned on the dialogue context. Different from this kind of knowledge retrieval method, (Zhou et al., 2022; Gupta et al., 2022) generate implicit knowledge using a language model. (Becker et al., 2021) combines relation classification and target prediction for generating commonsense knowledge representations over text. Similarly, we also use the relation prediction method, but we use it to complete the knowledge graph to obtain multi-hop paths, and we combine it with knowledge retrieval to enhance controllability.

Commonsense Path Evaluation Most research that utilizes commonsense knowledge for tasks such as question answering (Kapanipathi et al., 2020) and commonsense reasoning (Lin et al., 2019) tends to use paths extracted from static knowledge graphs. However, in these works the effectiveness of the knowledge paths is evaluated only indirectly through the performance of downstream tasks. (Becker et al., 2021) automatically evaluates the knowledge path through its similarity with the implicit knowledge in the dataset, but this method only works when annotated golden paths are provided in the dataset. We address this issue with a novel automatic evaluation metric based on the semantic connection between dialogues and paths.

## 3 Methodology

## 3.1 Problem Statement

We frame target-oriented response generation as follows: given a dialog context c and a target t, a conditional language model learns to predict a transition response y. Our model finds a bridge path p on a dynamically acquired KG G. Then we use p, c and t to generate a proper y. The explanations of c, t, and y are as follows. **dialog context** c: a sentence that represents the topic of the current dialog context. **target** t: a sentence representing the target topic of the current dialog. **transition response** y: a sentence that logically connects the semantics of c and t.

## 3.2 Method Overview

Figure 2 shows the overall architecture of our proposed model. Before using the pre-trained language model to generate a transition response, we build a dynamic graph to obtain the path, which includes two steps: node selection and edge construction. In node selection, we ensure the diversity of nodes in the dynamic graph through an extended "source-phrases bag" and "target-phrases bag" and ensure the contextual relevance of nodes through path routing and model selection. In edge construction, we use a relation prediction model to complement the static graph. Finally, to generate transition responses, we generate multi-hop paths from the dynamic graph and send them into a Commonsense Response Generator (CRG) model based on a pre-trained GPT-2.

To automatically and unbiasedly evaluate the advantages of our paths, we design an automatic evaluation metric based on the idea of the NLI task. First, each candidate path is divided into fragments. We suppose that for a path reasonably connected with the dialog, the source sentence should entail the start fragment of the path, and the target sentence should entail the end fragment of the path.
With this idea, we construct positive and negative samples to train a classifier model for path evaluation.

## 3.3 Dynamic Graph Building

To both make full use of the existing knowledge in the KG and infer additional knowledge related to the current dialog, we combine knowledge retrieval and relation prediction to build a dynamic graph.

## 3.3.1 Dynamic Node Selection

This step ensures that the nodes in the dynamic graph are diverse and context-related. We refer to the idea of using path routing and concept selection to deactivate nodes in (Ji et al., 2020a), and make changes suitable for our task regarding path acquisition, path representation, concept word representation, etc.

Path Routing To obtain the initial candidate nodes, we heuristically retrieve multi-hop paths from ConceptNet based on the context and target sentence. To include diverse candidate words, we use an extended "source-phrases bag" as the start of the path, which contains both the key phrases in the source sentence and their most semantically similar neighbor phrases. Similarly, the "target-phrases bag" is the end of the path. Then path routing propagates the scores along the paths to each candidate concept. For each retrieved path p, we calculate a score s(p) according to a soft-matching procedure. Each p is converted into a natural language form, and then we use Sentence-BERT (Reimers and Gurevych, 2019) to measure s(p) as p's semantic similarity with the dialogue sentences. Finally, we get the routing score of a candidate concept c as the average s(p) of all the paths passing through c (i.e., P_{v1→c→v2}):

$$s(c)=\frac{1}{|\mathbf{P}_{v_{1}\to c\to v_{2}}|}\sum_{\mathbf{p}\in\mathbf{P}_{v_{1}\to c\to v_{2}}}s(\mathbf{p})\tag{1}$$

A high routing score of a concept c indicates that the paths through c are highly related to the dialogue, so the concept word is important for this context. Finally, we preserve the set V_{s→t} of concepts with the top-K1 routing scores.

Model Selection For all concepts in V_{s→t}, we use the sentence representation to query each concept representation by taking the dot-product attention and calculate the selection probability with supervision from the concepts in the gold response C_{s→t}:

$$P(c\mid\mathbf{x})=\sigma(h_{c}Wh_{\mathbf{x}}^{T})\tag{2}$$

$$\mathcal{L}_{\text{concept}}=-\sum_{c\in\mathcal{V}_{\mathbf{s}\to\mathbf{t}}}\Big[\mathbf{I}\left(c\in\mathcal{C}_{\mathbf{s}\to\mathbf{t}}\right)\log P(c\mid\mathbf{x})+\left[1-\mathbf{I}\left(c\in\mathcal{C}_{\mathbf{s}\to\mathbf{t}}\right)\right]\log[1-P(c\mid\mathbf{x})]\Big]\tag{3}$$

where h_c is the concept representation encoded by GloVe, h_x is the concatenated representation of the source and target sentences encoded by a GRU, and W is a trainable parameter matrix. I(c ∈ C_{s→t}) is an indicator function taking the value 1 iff c ∈ C_{s→t} and 0 otherwise. Finally, the bridge concepts with the top-K2 P(c|x) and the sentence pair's key phrases serve as the dynamic graph's nodes.

## 3.3.2 Dynamic Edge Construction

Our dynamic graph first inherits the existing edges in the KG and then uses a relation prediction model and a relation discriminator to construct dynamic edges.

Relation Prediction and Discrimination We train a relation prediction model to add edges to the dynamic graph. Given any pair of unconnected concepts, the model predicts a relation and judges whether they can be connected.
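Before turning to how the predictor is trained, the edge-construction step can be summarized with the sketch below. It is an illustrative assumption rather than the authors' released code: `predict_relation` and `accept_edge` are hypothetical stand-ins for the fine-tuned relation predictor and the relation discriminator described next.

```python
# Minimal sketch of dynamic edge construction (Sec. 3.3.2).
from itertools import combinations
import networkx as nx

def build_dynamic_graph(nodes, static_kg: nx.Graph,
                        predict_relation, accept_edge) -> nx.Graph:
    """nodes: key phrases and bridge concepts selected in Sec. 3.3.1.
    predict_relation(u, v) -> (relation, score); accept_edge(u, rel, v) -> bool."""
    g = nx.Graph()
    g.add_nodes_from(nodes)
    for u, v in combinations(nodes, 2):
        if static_kg.has_edge(u, v):
            # 1) Inherit edges that already exist in the static KG.
            g.add_edge(u, v, rel=static_kg[u][v].get("rel", "RelatedTo"))
        else:
            # 2) For unconnected pairs, keep predicted edges that pass
            #    the relation discriminator.
            rel, score = predict_relation(u, v)
            if accept_edge(u, rel, v):
                g.add_edge(u, v, rel=rel, predicted=True, score=score)
    return g

# Toy usage with stand-in predictor/discriminator (illustration only).
static_kg = nx.Graph()
static_kg.add_edge("kitchen", "place", rel="IsA")
g = build_dynamic_graph(["cook", "kitchen", "place"], static_kg,
                        predict_relation=lambda u, v: ("RelatedTo", 0.9),
                        accept_edge=lambda u, rel, v: True)
```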
Specifically, we fine-tune a pre-trained language model (DistilBERT) on gold knowledge triples by masking the relations and treating relation prediction as a multi-class classification task. To adapt to our task and minimize the limitation of incomplete knowledge, we filter and expand the training data (detailed in Section 4.1). Using the same training data, we also train a relation discriminator to further verify the predicted edges.

## 3.4 Knowledge Path Search

Subsequently, we connect a pair of phrases from the source and target sentence using multi-hop paths. Specifically, assuming the source and target consist of m and n key phrases, we take any of the m × n pairs of key phrases as the start and the destination to find paths within three hops in the dynamic graph obtained in Section 3.3. Finally, we use the top paths with low perplexity and high diversity scores for transition response generation. This way, the selected paths contain less irrelevant and redundant information while ensuring diversity and logicality.

## 3.5 Training The CRG Model

Inspired by (Gupta et al., 2022), we send the final path from Section 3.4 to the Commonsense Response Generator (CRG) model together with the sentence pair to generate a transition response. The CRG model (GPT-2 based) is trained as a conditional model with the following input sequence: "[context] *source sentence* [target] *target sentence* [knowledge] *knowledge path* [response] *transition response*". We train the CRG model by minimizing the log-likelihood loss of the transition response.

## 3.6 Novel Evaluation Of Transition Path

A good transition path should take into account the semantics of both the source and the target sentence and contain as little redundant information as possible. However, there is no annotated golden path in the corpus, and multiple reasonable paths may exist. We propose an automatic metric without golden references.

| | |
|---|---|
| Source: | I do not like to **cook**. |
| Response: | I actually love to cook, but sharing the **kitchen** with three roommates makes it difficult. |
| Target: | I want to get my **own place**. |
| Positive path fragment (Source→Response) | cook *uses* kitchen |
| Positive path fragment (Response→Target) | kitchen *is a* place |
| Negative path fragment (Source→Random) | cook *motivated by goal* create |
| Negative path fragment (Random→Target) | landmark *at location* a place |

Table 1: An example of the positive and negative sentence–path-fragment pairs used to train the path evaluation classifier.

Our primary hypothesis is that the semantics of the context and target sentence should entail the information of the start and the end fragment of the path, respectively. The proposed metric PATH-COHERENCE is based on a classification model trained to classify whether a sentence–path-fragment pair is logically coherent or irrelevant. Formally, for a sentence s and a path fragment p_f, letting conf_class(s, p_f) represent the model's probability mass assigned to the predicted NLI class after softmax (this is similar to the UNLI concept proposed in (Chen et al., 2020), i.e., we do not directly use classification labels), the function is defined as

$$\mathrm{NLI_{score}}\left(s,p_{f}\right)=\begin{cases}1\cdot\mathrm{conf}_{\mathrm{entailment}}\left(s,p_{f}\right)&\text{if coherent}\\0&\text{if irrelevant}\end{cases}\tag{4}$$

For a complete path, we define the first triplet of the path as its start fragment p_{f−s} and the last triplet as its end fragment p_{f−t}. Then the PATH-COHERENCE of a path can be calculated as NLI_score(s_s, p_{f−s}) + NLI_score(s_t, p_{f−t}), where s_s and s_t represent the source sentence and the target sentence, respectively. We use the transition paths from the golden responses to create positive samples for training.
We identify its knowledge path through a hardmatching process with context c, target t, and response y (Table 1). Specifically, this process first identifies the key phrases in the sentence. If the key phrases of two adjacent sentences are directly connected in ConceptNet, the sentence and path fragment pair is regarded as a positive training sample. For the negative sample, we use the concepts in the "phrases bag" (mentioned in 3.2.1) of the sentence as the head or tail to randomly select the triples with different relationships in KG from the positive sample. The negative sample constructed in this way has a weak correlation with the dialogue, so it can better guarantee the model's discrimination. ## 4 Experiment 4.1 Dataset For the relation predictor training, we use the CN100k benchmark dataset (Li et al., 2016), based on the OMCS subpart of ConceptNet. The dataset comprises 37 relation types, 100k relation triples in the train set, and 1200 triples in the development and the test set, respectively. We extract a subset including 15 relationships that are most suitable for topic transition (detailed in the appendix). Intuitively, the knowledge triplets implied in the dialogue corpus that does not exist in the relation prediction training data, especially those with high frequency, actually reflect the commonsense logic of people in the real dialogue. With this idea, we filtered the concept pairs whose frequency of occurrence in two adjacent sentences is higher than a threshold in the OTTers corpus and defined their relationship as "DialogAct" to form new knowledge triplets. Finally, the dataset covers 102178 triples for training, 1236 triples for development, and 1245 triples for testing. We use two datasets to test the transition response generation: 1) Otters (Sevegnani et al., 2021) contains instances with context-targettransition response triplets. It consists of two sets of splits. The Out-Of-Domain (OOD) split ensures that none of the context-target pairs in the test set are present in the train set. In the In-Domain (ID) split, one of either the context or the target in each pair in the test set is allowed to appear in the train set. 2) Augmentation-DailyDialog is similar to OTTers, which is constructed by (Gupta et al., 2022) from DailyDialog (Li et al., 2017). This data is noisier because of too many turns, sentence fragmentation, and serious overlap between transition response and target sentences. To build a more challenging task, we also extracted a sub-dataset from OTTers, called "DiscreteOTTers"1, which contains difficult cases for topic transition where the three golden transition responses corresponding to the dialog cannot match a connected path in ConceptNet. ## 4.2 Baselines We compare our model with three groups of baselines: General generating model without additional knowledge (GPT-2), concept-guided models (Concept-Predict, MultiGen), and path-guided models (Static, CODA, TBS-Path). Implementation details of baselines are in Appendix A. GPT-2 (Radford et al., **2019)**, a pre-trained GPT–small language model fine-tuned on Otters data. Conditions on the context and target sentences to generate the transition response. Concept-Predict leverages concept prediction strategy in(Zhong et al., 2021a). The predicted concepts are filtered based on closeness to the target. MultiGen (Ji et al., **2020b)** combines the vocabulary distribution generated by the underlying GPT-2 model with a concept distribution from a commonsense knowledge base (ConceptNet). 
Static uses ConceptNet to extract paths between concepts from sentence pairs and generate a response using a generation model. CODA (Gupta et al., **2022)** proposes a method to generate multi-hop bridging paths for targetoriented response generation. TBS-Path first externalizes implicit commonsense knowledge based on the dialog context like Zhou et al. (2022) and uses the knowledge to generate responses. ## Ablation Models: StaticRelation variant that uses the multi-hop connection in ConceptNet to replace the edge predicted by the relationship prediction model in test paths. If no connections are within 4 hops, use "[SEP]" to connect. RandomConcept variant that randomly selects top-K2 neighbor nodes within two hops in the knowledge map of context concepts to construct the dynamic graph. FewerHops variant that uses a shorter path for transition response generation. ## 4.3 Evaluation Metrics 4.3.1 Paths Evaluations Automatic Evaluation Perplexity (PPL) measures the smoothness of the path, and our designed PATHCOHERENCE (Section 3.6) measures the correlation and coherence between the path and sentence. Human Evaluation For randomly selected 100 generated paths and their corresponding sentence pair, we ask annotators to judge 1) Relevance: Is this path relevant and coherent to the context of | Source Topic: | I enjoy staring up at the sky | |-----------------|----------------------------------------------------------------------------------------------------------------| | Response: | I love watching the sky while walking my dog. | | Target Topic: | I like to spend a lot of my free time with my pet. | | Manual Path: | sky -LocatedAt-> outside -RelatedTo-> nature <-RelatedTo- animals <-IsA- dog -IsA-> pet | | Source Topic: | i really love learning different languages and have been studying them for years. | | Response: | I want to work as a UN translator. | | Target Topic: | i hate my old job. language -RelatedTo-> English -RelatedTo-> | | Manual Path: | translation -RelatedTo-> translator -IsA-> worker -RelatedTo-> job | | Source Topic: | i tell jokes on stage. | | Response: | Being a comedian has opened up a lot of dating opportunities for me. | | Target Topic: | i date a lot of girls. jokes <-RelatedTo- comedian -RelatedTo-> comedy <-HasPrerequisite- entertaining someone | | Manual Path: | <-UsedFor- going to a film -UsedFor -> dating -RelatedTo-> date | Table 2: Examples of Discontinuous Paths in the Knowledge Graph Reflected in Dialogue Logic | Model | PPL | PC | Relevance | Makes Sense | |---------------------------------------------|------------|-------|-------------|---------------| | Static | 8.79 | 16.51 | 1.06 | 1.10 | | TBS-Path(Zhou et al., 2022) | 7.44 | 28.72 | 1.56 | 1.45 | | CODA(Gupta et al., 2022) | 9.15 | 29.91 | 1.78 | 1.69 | | Ours | 7.59 40.87 | 2.28 | 2.15 | | | kappa (The agreement among the annotators.) | 0.51 | 0.55 | | | Table 3: Evaluation for path quality. Our path has significant advantages in PC results. The consistency between PC metric and human evaluation also proves the rationality of this metric design. the sentence pair? 2) Makes sense: Does the path makes sense? Four annotators with an NLP background score the paths in 1, 2, 3, higher is better. ## 4.3.2 Response Evaluations Automatic Evaluation We report standard automated metrics such as BLEU(Papineni et al., 2002) 2, METEOR(Banerjee and Lavie, 2005),ROUGE-L(Lin, 2004) and BertScore(Zhang et al., 2019). Word-overlap metrics do not correlate well with human judgements(Liu et al., 2016). 
So we also adopted the metric TARGET COHERENCE designed by(Gupta et al., 2022), which does not require human references but evaluates the coherence of replies based on a trained classification model. Human Evaluation Annotators are requested to evaluate the transition response on the follow-2SacreBLEU (Post, 2018) provides hassle-free computation of shareable, comparable, and reproducible BLEU scores. The calculation is carried out using multiple references from the dataset OTTer-ID OTTer-OOD BLEU METEOR ROUGE-L BS-f1 TC BLEU METEOR ROUGE-L BS-f1 TC GPT2(Radford et al., 2019) 10.44 16.93 17.79 76.91 41.39 10.06 17.71 19.06 77.65 41.79 Concept-Predict(Zhong et al., 2021a) 14.91 15.89 19.60 77.67 35.46 12.89 15.69 19.80 78.13 38.77 MultiGen(Ji et al., 2020b) 18.45 17.46 19.82 78.15 47.87 13.94 17.73 20.91 78.02 45.89 Static 11.93 18.13 17.49 76.82 41.27 13.19 19.87 20.08 78.02 48.70 CODA(Gupta et al., 2022) 16.05 16.61 19.83 77.60 46.84 14.76 16.64 20.76 77.97 49.82 TBS-Path(Zhou et al., 2022) 13.98 17.95 19.31 78.01 49.05 14.63 18.02 20.53 78.41 46.67 Ours **20.14**∗ 18.11 **21.12**∗ 78.01 52.98∗ **18.08** 18.78 22.18 78.67 **51.44**∗ Ours-StaticRelation 18.45 18.47 20.35 78.06 49.41 15.83 18.49 21.76 78.41 48.72 Ours-RandomConcept 19.49 18.08 20.41 77.61 49.23 17.61 18.80 21.78 78.52 50.41 Ours-2hop 19.17 18.04 20.75 77.91 49.25 18.05 19.18 22.53 78.79 50.87 Table 4: Automatic evaluation on OTTers. We also present results for our model's ablations. The results of our model on most reference-based metrics and model-based metrics exceed the baselines. (t-test with p-value < 0.05) Table 5: Automatic evaluation on AugmentationDailyDialog ing criteria: (1) Smooth: rate whether the response serves as a smooth transition between the dialogue context and target. (2) Sensible: whether the transition response makes sense in itself, i.e., it is grammatical and logically coherent. (3) Informative: how much informative content a transition response carries. Four annotators with an NLP background compare transition responses from two models. ## 4.4 Preliminary Experiment The preliminary experiment examines the model's ability to use discontinuous paths in the KG fully. We extracted such cases from the dataset: the key phrases in their source sentences, transition response sentences, and target sentences are not directly connected in the KG. We manually check the KG to annotate a reasonable transfer path for these cases. As shown in Table 2, the logical connection in dialogue is probably just a few discontinuous hops in the long path of the graph. If additional edges connect these discontinuous nodes, the transition path will be more efficient. ## 4.5 Results | BLEU | METEOR | ROUGE-L | BS-f1 | TC | | |-------------------------------------------|---------------|-----------|---------------|-------|-------| | GPT2(Radford et al., 2019) | 8.20 | 19.78 | 21.74 | 75.06 | 74.37 | | Concept-Predict(Zhong et al., 2021a) 6.09 | 17.69 | 18.19 | 74.85 | 71.83 | | | MultiGen(Ji et al., 2020b) | 2.83 | 14.75 | 14.60 | 73.13 | 76.53 | | Static | 9.87 | 21.09 | 21.89 | 74.99 | 74.74 | | CODA(Gupta et al., 2022) | 8.24 | 19.09 | 19.53 | 74.08 | 75.84 | | TBS-Path(Zhou et al., 2022) | 9.92 | 21.78 | 21.93 | 74.73 | 77.21 | | Ours | 12.61∗ 23.84∗ | 24.49∗ | 75.58∗ 77.46∗ | | | ## 4.5.1 Paths Evaluations In Table 3, the PPL results indicate that our paths have good fluency, which means they can be better accepted by the language model to generate transition responses. 
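For reference, path perplexity of this kind can be estimated with an off-the-shelf GPT-2 language model applied to a verbalized path; the checkpoint ("gpt2") and the verbalization format below are illustrative assumptions rather than the exact scoring setup.

```python
# Minimal sketch of path perplexity (PPL) under GPT-2; lower means a smoother path.
# The "gpt2" checkpoint and the verbalization format are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def path_perplexity(verbalized_path: str) -> float:
    ids = tokenizer(verbalized_path, return_tensors="pt").input_ids
    with torch.no_grad():
        # The language-modeling loss is the mean negative log-likelihood per token.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

print(path_perplexity("library is at location school, area is used for pets growing"))
```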
The significant advantage of the PC BLEU METEOR ROUGE-L BS-f1 TC Static 9.93 20.49 17.68 75.34 43.90 CODA(Gupta et al., 2022) 11.45 16.77 19.67 76.33 43.08 TBS-Path(Zhou et al., 2022) 10.49 16.54 18.82 73.09 45.15 Ours 14.08∗ 20.03∗ 20.10∗ 77.94∗ **54.11**∗ Table 6: Automatic evaluation of path-based methods on Discrete-OTTers. The performance of our model on this difficult data subset is still significantly better than that of other path-based models, which shows the effectiveness of our path-acquisition method. Model Coherent Sensible Informative Ours vs. GPT-2 64% 60% 59% Ours vs. Static 55% 57% 49% Ours vs. MultiGen 56% 55% 59% Ours vs. Concept-Predict 65% 63% 62% Ours vs. CODA 57% 60% 50% Ours vs. TBS-Path 62% 61% 56% metric proves that our method is effective in obtaining context-related paths. The results of the manual evaluation are similar to those of PC, which shows that the automatic evaluation metric we designed is largely consistent with human judgments. ## 4.5.2 Response Evaluations Automatic Evaluation As shown in Tables 4 and 5. on two datasets, the results of our model on most reference-based metrics and model-based metrics exceed the baselines. This indicates the advantage that the path we input to the model is semantically connected with less context-independent information. In Table 6, we provide the evaluation results of our model and three path-based baselines on the discrete OTTers we extracted. As mentioned earlier, this dataset is challenging because it is dif- | Source: i work in a library; | Source: my job helpe me teach kids; | | | |---------------------------------------------------------------------------------|-----------------------------------------------------|------------------------------------------|---------------------------------------------------------| | Target: i had cows as pets growing up | Target: education is a passion of mine. | | | | Response:My cat jumps on my good book. | Response: I teach kids for patience | | | | Static | Path: library causes read related to eyes | Static | Path: job related to patience related to patient | | is a part of cat is used for pet | related to passion | | | | Response: I grew up in a library. | Response:I want to have some sweet. | | | | CODA | textbfPath:library is the location which has books | CODA | Path:kid desires candy is a dependency of tasting sweet | | not capable of grow. | causes passion | | | | Response:I love to work for a dog. | TBS-Path | Response:I love teaching | | | TBS-Path | Path: work has prequisite not work is a subevent of | Path: teach kids has a context education | | | have dog motivated by goal grow Response: My school is located in a rural area. | Response: I teach children how to learn. | | | | Ours | Path: library is at location school dialog act area | Ours | Path: teach kids is used for children dialog act learn | | is used for pets growing | is used for education. | | | ficult to transfer the topic of the dialogs in it. The results show that our method has good robustness for such difficult situations. This is because we can use the knowledge outside the knowledge graph to connect two sentences with far semantics and ensure contextual relevance. Human Evaluation We collect 100 randomly selected data points from the test outputs on OTTers. The score in Table 7 is the percentage of times that our model is chosen as the better in pairwise comparison with its competitor. 
The results demonstrate that our outputs are preferred over the baselines, especially on "Smooth" and "Sensible". ## 4.5.3 Ablation Studies Ablation Results Are Shown In Table 4. Can Relation Prediction Effectively Use Discontinuous Paths In The Kg? As shown in Table 4 that after replacing the relation in our test paths with the relation or multi-hop path in the KG, all metrics decrease significantly. We analyze the results and find that the length of the replaced path has increased by seven hops on average, and the path contains a lot of ambiguous relations, such as "related to". This verifies that we can efficiently connect nodes in the dynamic graph through relation prediction. ## Can Concepts Filtering Reduce Redundant Information In The Path? After Using The Random top-K2 concepts within two hops as the nodes in the dynamic graph, the results are reduced, but the decline is not significant. We analyzed the test paths obtained by this method. We found that the relation prediction and discriminator in the model largely ensured that the final test path contained less redundant information. Specifically, due to random selection, most of the nodes in the dynamic graph are uncorrelated, so our relation discriminator mostly negates the results of relation prediction at this time. These irrelevant nodes are not connected in the dynamic graph. How much path information do we need? We explore whether 3-hop paths provide more redundant information than 2-hop paths. In Table 3 (Ours-2hop), we can see little difference between the word-overlap metrics using the two-hop path and the three-hop path. Still, the TC result has decreased, which proves that it is difficult to achieve a smooth transition by relying on an intermediate word. Therefore, we finally used the 3-hop paths as the test data. ## 4.5.4 Case Study We compare our method with the other three pathbased methods(Table 8). It can be seen from two examples that the path obtained by the Static contains many fuzzy relations and irrelevant concepts. Thanks to the training on the path data related to the response, the path obtained by CODA is better than the former. However, it still exists in the information irrelevant to the dialogue context. The path obtained from the TBS-Path contains more information duplicated with the conversation statement. The above problems lead to poor responses to these models, and there are some unreasonable points in the topic transfer logic. The path semantics obtained by our model is clear and logical, resulting in better responses. It is noted that the "DialogAct" relation we added also played a good role. ## 5 Conclusion For effectively guiding the dialog to a target sentence, we propose to make full use of discontinuous or even non-existent paths in the knowledge graph. We combine knowledge retrieval and relationship prediction by building a dynamic KG, which helps to obtain a path closer to the implicit logic in the speaker's utterances when transferring topics to the target sentence. In addition, we also designed an automatic metric to evaluate the quality of the knowledge path for semantic transfer. Both automatic and human evaluation verify the superiority of the proposed method in searching knowledge paths and subsequently generating transition responses compared with SOTA baselines, benefiting from the better logic and contextual relevance of the paths from the dynamic graph. In the future, we will explore the application of our method to multi-turn target-oriented dialogue. 
## Limitations The current dialogue system still has some limitations. For example, although the current CRG model can make the output contain the key concept words in the knowledge path, due to the large scale of the pre-training model, the output semantics of the current method are still not very interpretable and controllable. A feasible way is to explore new fine-tuning methods to approach high-level semantic style control. In addition, our current dialogue system lacks human qualities such as empathy, factual correctness judgment, and moral common sense representation. A key breakthrough is to explore a goal-oriented dialogue dataset with richer dimensions. ## Ethics Statement A target-oriented dialogue system may have the risk of misusing it to guide users to malicious topics actively. Since the proposed system tries to ensure relevance with the dialogue context in the application deployment, the possibility of the above misuse is small on the premise of checking the training corpus. All models in this paper are trained on the public corpus. The used datasets do not contain personal information or unethical language. We also ensure the anonymization of the human evaluation. ## Acknowledgements This work was supported by the National Natural Science Foundation of China (62272340, 61876128, 61876129, 62276187, 61976154, 61402323), State Key Laboratory of Communication Content Cognition (Grant No.A32003). ## References Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. Maria Becker, Katharina Korfhage, Debjit Paul, and Anette Frank. 2021. Co-nnect: A framework for revealing commonsense knowledge paths as explicitations of implicit knowledge in texts. *arXiv preprint* arXiv:2105.03157. Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, and Benjamin Van Durme. 2020. Uncertain natural language inference. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8772–8779, Online. Association for Computational Linguistics. Fabio Clarizia, Francesco Colace, Marco Lombardi, Francesco Pascale, and Domenico Santaniello. 2018. Chatbot: An education support system for student. In International Symposium on Cyberspace Safety and Security, pages 291–302. Springer. Prakhar Gupta, Harsh Jhamtani, and Jeffrey P Bigham. 2022. Target-guided dialogue response generation using commonsense and data augmentation. arXiv preprint arXiv:2205.09314. Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, and Minlie Huang. 2020a. Generating commonsense explanation by extracting bridge concepts from reasoning paths. *arXiv preprint arXiv:2009.11753*. Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Xiaoyan Zhu, and Minlie Huang. 2020b. Language generation with multi-hop reasoning on commonsense knowledge graph. *arXiv preprint arXiv:2009.11692*. Dongyeop Kang, Anusha Balakrishnan, Pararth Shah, Paul Crook, Y-Lan Boureau, and Jason Weston. 2019. Recommendation as a communication game: Selfsupervised bot-play for goal-oriented dialogue. arXiv preprint arXiv:1909.03922. Pavan Kapanipathi, Veronika Thost, Siva Sankalp Patel, Spencer Whitehead, Ibrahim Abdelaziz, Avinash Balakrishnan, Maria Chang, Kshitij Fadnis, Chulaka Gunasekara, Bassem Makni, et al. 2020. Infusing knowledge into the textual entailment task using graph convolutional networks. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8074–8081. Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In *Proceedings of the 54th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1445–1455. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. *arXiv preprint* arXiv:1710.03957. Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. arXiv preprint arXiv:1909.02151. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. *arXiv preprint* arXiv:1603.08023. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matt Post. 2018. A call for clarity in reporting bleu scores. *arXiv preprint arXiv:1804.08771*. Jinghui Qin, Zheng Ye, Jianheng Tang, and Xiaodan Liang. 2020. Dynamic knowledge routing network for target-guided open-domain conversation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34 Issue 05, pages 8657–8664. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. Karin Sevegnani, David M Howcroft, Ioannis Konstas, and Verena Rieser. 2021. Otters: One-turn topic transitions for open-domain dialogue. arXiv preprint arXiv:2105.13710. Ashish Sharma, Adam S Miner, David C Atkins, and Tim Althoff. 2020. A computational approach to understanding empathy expressed in text-based mental health support. *arXiv preprint arXiv:2009.08441*. Jianheng Tang, Tiancheng Zhao, Chenyan Xiong, Xiaodan Liang, Eric Xing, and Zhiting Hu. 2019. Targetguided open-domain conversation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5624–5634. Zhitong Yang, Bo Wang, Jinfeng Zhou, Yue Tan, Dongming Zhao, Kun Huang, Ruifang He, and Yuexian Hou. 2022. Topkg: Target-oriented dialog via global planning on knowledge graph. In Proceedings of the 29th International Conference on Computational Linguistics, pages 745–755. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. *arXiv preprint* arXiv:1904.09675. Peixiang Zhong, Yong Liu, Hao Wang, and Chunyan Miao. 2021a. Keyword-guided neural conversational model. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 35 Issue 16, pages 14568–14576. Peixiang Zhong, Di Wang, Pengfei Li, Chen Zhang, Hao Wang, and Chunyan Miao. 2021b. 
Care: Commonsense-aware emotional response generation with latent concepts. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 14577–14585. Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, and Dilek Hakkani-Tur. 2022. Think before you speak: Explicitly generating implicit commonsense knowledge for response generation. In *ACL 2022*. Yicheng Zou, Zhihua Liu, Xingwu Hu, and Qi Zhang. 2021. Thinking clearly, talking fast: Concept-guided non-autoregressive generation for open-domain dialogue systems. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing. ## A Appendix A.1 Implementation Details Model training: The pre-trained models we used are from Huggingface library3. To construct the dynamic graph process, we set K1=100, K2=20, and use GloVe embedding of size 300 (Pennington et al., 2014) during node selection. When training the relation prediction model and the relation discrimination model, we finetune DistilBERT for ten epochs with batch size=64, learning rate=1 ∗ 10−5, and accumulate grad batches=4. The CRG model is based on GPT-2 small architecture. We use a batch size of 16 and accumulate grad batches=2 for GPT-2 models. We use AdamW optimizer with an initial learning rate of 2 ∗ 10−5. Finally, our PATH-COHERENCE model is also based on DistilBERT. We set the batch size=64 and use AdamW optimizer with an initial learning rate of 2 ∗ 10−5. The accuracy of our classification model for this metric has reached over 90% 3https://huggingface.co/ Relation prediction dataset: When inheriting the edges in the static knowledge graph and filtering the training data of the relation prediction model, we removed some very unusual relationships, merged the relationships with similar semantics, and finally retained AtLocation, CapableOf, Causes, MotivatedByGoal, Desires, HasProperty, HasSubevent, HasPrerequisite, IsA, MadeOf, NotCapableOf, PartOf, UsedFor, ReceivesAction, HasA. Discrete-OTTers dataset: We use the key phrase in the source sentence as the start and the key phrase in the target sentence as the end to find two hop paths in the static ConceptNet. If all paths do not include the key phrase in the bridge sentence in the corpus, we consider this conversation to be a separate case. ## A.2 Training Details Of Baselines Training Concept-Predict leverages concept prediction strategy in(Zhong et al., 2021a). Following (Gupta et al., 2022) The input to the model is the context and target, and it predicts a single concept based on closeness to the target. The concept is then fed as input to a CRG model along with the context and target sentences. Training Static It is a commonly used method to obtain paths from a fixed knowledge graph. Specifically, for a sentence pair, we start with the keywords in the source sentence and end with the keywords in the target sentence to find paths in the ConceptNet. To ensure that all test cases can find paths in this way, we set the maximum path length to be no more than 4. Finally, we also filter the path into a final Commonsense Response Generator based on PPL and diversity. Training TBS-Path leverages the idea of generating implicit knowledge based on the context in (Zhou et al., 2022). Specifically, we use the path training data provided by(Gupta et al., 2022) because they all adopt the method of directly generating the path and use the pre-trained GPT2 to train the path generation model. 
The input to the model is the combination of source sentence and target sentence and the output of the model is the corresponding path. Finally, like our model, the path is sent to a Commonsense Response Generator for reply generation. For MultiGen and CODA, we adopted the training methods provided in Sevegnani et al. (2021) and Gupta et al. (2022) respectively. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 Limitations ✓ A2. Did you discuss any potential risks of your work? Section 7 Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. ✗ B1. Did you cite the creators of artifacts you used? Left blank. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. ✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A.1 ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We mainly analyze and compare the performance metrics of the model. Further in-depth statistical analysis of the results will be conducted in future work. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A.1 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 4.3 ✗ D1. 
Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? We trained the annotators in meetings and presented the main instructions in Section 4. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 4.3 ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? The results of human evaluation are presented in tabular form. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No data collection. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 4.3
wu-etal-2023-chain
Chain of Thought Prompting Elicits Knowledge Augmentation
https://aclanthology.org/2023.findings-acl.408
The knowledge-augmented deep learning paradigm refers to a paradigm in which domain knowledge is identified and integrated into deep models. Conventional methods typically employ task-specific approaches to gather external knowledge from various sources. In contrast, large language models are extensively pre-trained and can serve as a comprehensive source of external knowledge. In this paper, we propose CoT-KA, a Chain-of-Thought-based method that augments knowledge for deep learning. CoT-KA avoids the need for additional knowledge retrieval or knowledge reasoning models, as required in conventional augmentation methods. Our results demonstrate that CoT-KA outperforms both pure CoT-based methods and the non-augmented method across the majority of eleven publicly available benchmarks for various reasoning tasks.
# Chain Of Thought Prompting Elicits Knowledge Augmentation Dingjun Wu1**, Jing Zhang**2 ∗ , Xinmei Huang2 1Tsinghua Shenzhen International Graduate School, Tsinghua University 2School of Information, Renmin University of China wudj20@mails.tsinghua.edu.cn {zhang-jing,huangxinmei}@ruc.edu.cn ## Abstract ![0_Image_0.Png](0_Image_0.Png) The knowledge-augmented deep learning paradigm refers to a paradigm in which domain knowledge is identified and integrated into deep models. Conventional methods typically employ task-specific approaches to gather external knowledge from various sources. In contrast, large language models are extensively pre-trained and can serve as a comprehensive source of external knowledge. In this paper, we propose CoT-KA, a Chain-of-Thought-based method that augments knowledge for deep learning. CoT-KA avoids the need for additional knowledge retrieval or knowledge reasoning models, as required in conventional augmentation methods. Our results demonstrate that CoT-KA outperforms both pure CoT-based methods and the non-augmented method across the majority of eleven publicly available benchmarks for various reasoning tasks 1. ## 1 Introduction The Knowledge-Augmented deep learning (KADL) (Cui et al., 2022) paradigm refers to the deep learning paradigm in which domain knowledge is identified and integrated into the deep model. Adding domain knowledge makes it possible to develop deep learning that is data-efficient, generalizable, and interpretable (Cui et al., 2022). For example, retrieving external knowledge from an external knowledge pool like Wikipedia is typically required for open domain question answering and dialog generation (Izacard and Grave, 2021; Zhang et al., 2023). Logical equivalence laws such as contraposition and transitive laws help extend the implicit logical information (Yu et al., 2019; Wang et al., 2022a). External knowledge is derived from various sources. For instance, commonsense knowledge can be extracted from commonsense knowledge ∗Corresponding author: Jing Zhang. 1Our code and data are available at https://github. com/RUCKBReasoning/CoT-KA bases like ConceptNet (Speer et al., 2017) and ATOMIC (Sap et al., 2019). Domain-specific knowledge can be retrieved from knowledge bases such as Wikipedia and Freebase (Bollacker et al., 2008). Logic knowledge, on the other hand, can be in the form of human-defined propositional or first-order logic, which is then utilized as rules for reasoning. In summary, existing knowledge augmentation methods typically involve either creating a retriever to gather relevant knowledge or developing a reasoner to leverage the logical rules within the external knowledge sources (Chen et al., 2017; Izacard and Grave, 2021; Wang et al., 2022a; Zhang et al., 2023). Recently, large language models (LLMs) (Zhao et al., 2023) have shown their potential as both the source and the retriever or reasoner of external knowledge. LLMs are pre-trained on a huge scale of datasets. Thus, they have already embedded a large amount of knowledge into their parameters, which can be considered a source of external knowledge. The reasoning ability of LLMs allows them to provide knowledge from their parameters without needing an extra retriever or a reasoner. The latest chain-of-thought (CoT) prompting technique 6519 (Wei et al., 2022), which elicits LLMs to generate a series of sentences that mimic the reasoning process for arriving at the answers, improves the reasoning ability of LLMs. 
It has proved to be remarkably effective in a variety of complex reasoning tasks such as math word problems and commonsense question answering (Wei et al., 2022). CoT prompting shows potential as a general technique to retrieve knowledge from LLMs. In this paper, we propose CoT-KA - a CoTbased method to retrieve knowledge from LLMs for Knowledge-Augmented deep learning. CoTKA utilizes an LLM as a knowledge source, leveraging CoT prompting to guide the LLM in providing knowledge that can serve as evidence to support downstream reasoning from the input to the answer. Unlike conventional KADL approaches, CoT-KA eliminates the need for additional knowledge retrieval or a separate knowledge reasoning model. Specifically, we begin by extracting CoTs as knowledge from the LLM using either few-shot (Wei et al., 2022) or zero-shot (Kojima et al., 2022) CoT prompting. The former involves providing a few demonstrations to guide the LLM's reasoning, while the latter employs a template such as "let's think step by step" to inspire the LLM. The extracted CoTs are then appended to the original inputs, marked by a special token, to create augmented text. Finally, we fine-tune a small taskrelevant pre-trained language model (PLM) on the dataset augmented with CoTs. We generate CoTs using the public GPT-3 (Brown et al., 2020) (175B parameters) API2. For NLU (Natural Language Understanding) tasks, we employ ALBERT (Lan et al., 2019) and DeBERTa (He et al., 2021) as the task-relevant models. T5 (Raffel et al., 2020) is utilized as the task-relevant model for NLG (Natural Language Generation) tasks. We evaluate models' performance using eleven benchmarks, including (i) commonsense reasoning (CSQA (Talmor et al., 2019), StrategyQA (Geva et al., 2021), Date Understanding, Sports Understanding (Srivastava et al., 2022)); (ii) arithmetic reasoning (AQUA-RAT (Ling et al., 2017), GSM8K (Cobbe et al., 2021), SVAMP (Patel et al., 2021), MultiArith (Roy and Roth, 2015), SingleEq (Koncel-Kedziorski et al., 2015), AddSub (Hosseini et al., 2014)); (iii) symbolic reasoning (Last Letter Concatenation (Wei et al., 2022)), where all commonsense reasoning benchmarks and AQUA- RAT are formulated as NLU tasks, and the other arithmetic reasoning benchmarks and Last Letter Concatenation are formulated as NLG tasks in this paper. Particularly, we convert all of the multichoice question answering tasks into NLU tasks. Extensive experimental results show that in the majority of tasks, CoT-KA outperforms the original fine-tuning results without the use of CoTs as augmented knowledge. CoT-KA also surpasses FewShot-CoT and Zero-Shot-CoT on LLMs, which directly parse answers from the generated CoTs. ## 2 Related Work Knowledge Augmented Technology. The integration of external knowledge into deep learning models through knowledge augmentation approaches has gained significant attention in various NLP tasks, including question answering (Chen et al., 2017; Izacard and Grave, 2021), dialogue generation (Zhang et al., 2023), and logical reasoning (Wang et al., 2022a). For instance, in the context of answering open-domain questions where supporting evidence is not explicitly provided (Izacard and Grave, 2021), Chen et al. (2017) utilized techniques such as bigram hashing and TFIDF matching to retrieve relevant documents from external knowledge sources. Similarly, Fusionin-Decoder (Izacard and Grave, 2021) employed methods like BM25 (Robertson et al., 1995) and DPR (Karpukhin et al., 2020) for evidence retrieval. 
By augmenting the questions with these retrieved pieces of evidence, the models can better reason and provide answers. Logic reasoning is another challenging task that requires a deep understanding of the logical structure within a given text to arrive at the correct answer. To facilitate such logic-level analysis, human-defined logic rules are introduced. Wang et al. (2022a) proposed LReasoner, a logicdriven context extension framework that extends implicit logical information by performing logical reasoning using these predefined rules. The framework enhances the original input by verbalizing and concatenating the implicit logical information, enabling subsequent answer reasoning. Fusion-in-Decoder and LReasoner inspire our work to extend the external knowledge into the original input. However, the knowledge in these knowledge augmentation methods is sourced from external knowledge bases or pre-defined logical rules, requiring a retriever for knowledge extraction or a reasoner for rule application in the process. In contrast, we utilize LLMs that eliminate the need for an additional retriever or reasoner to acquire knowledge for augmentation. Chain of Thought Prompting on LLMs. A CoT is a series of intermediate natural language reasoning steps that lead to the final output, inspired by how humans use a deliberate thinking process to perform complicated tasks. Experimental results using various LLMs, such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022), demonstrate that CoT prompting enhances performance across a range of arithmetic, commonsense, and symbolic reasoning tasks (Wei et al., 2022). Wei et al. (2022) initially propose Few-ShotCoT, which requires the manual design of a few demonstrations to facilitate the generation of reasoning paths. In contrast, Kojima et al. (2022) propose Zero-Shot-CoT, which employs a single zeroshot prompt that elicits CoTs from LLMs. By simply adding "Let's think step by step" before each answer, Zero-Shot-CoT demonstrates that LLMs are capable zero-shot reasoners without the need for any manually constructed few-shot examples. Furthermore, Wang et al. (2022b) introduce a new decoding strategy called self-consistency, which involves sampling multiple LLM outputs and aggregating them through majority voting. This strategy encourages the model to consider multiple CoTs when generating answers. However, to achieve optimal performance, a large number of reasoning paths (e.g., 40 paths) must be generated, leading to increased computational costs. All of these CoT prompting methods directly extract the answer from the CoTs. In contrast, our method utilizes the generated CoTs as supplementary knowledge to improve the fine-tuning of taskrelevant models. Moreover, our method demonstrates good performance even when a limited number of CoTs are provided, unlike self-consistency, which relies on generating a large number of CoTs. ## 3 Pilot Study In this section, we explore the effectiveness of CoTaugmented fine-tuning by simply appending one CoT to the original input. We assess the validity of this approach on two commonsense reasoning datasets, CSQA and StrategyQA. CoT-augmented Fine-tuning. To perform finetuning on ALBERT, we extend the original input text by adding a CoT. We utilize ALBERT-large- | Method/Dataset | CSQA | StrategyQA | |------------------------|--------|--------------| | Baseline (ALBERT) | 63.4 | 64.8 | | Zero-Shot-CoT (ALBERT) | 70.1 | 67.5 | | Few-Shot-CoT (ALBERT) | 76.2 | 73.1 | v2 for our experiments. 
Specifically, we generate CoTs using both few-shot and zero-shot CoT methods, known as Few-Shot-CoT and Zero-ShotCoT, respectively. Few-Shot-CoT employs the same demonstrations as described in (Wei et al., 2022). For Zero-Shot-CoT, we utilize the template "Let's think step by step". As the LLM, we employ GPT-3 with 175-billion parameters (text-davinci002). Subsequently, we extend the generated CoT into the input of each sample within the CSQA and StrategyQA datasets. Finally, we perform finetuning on ALBERT using the augmented datasets. The experiment results in Table 1 show that both the Zero-Shot-CoT and Few-Shot-CoT augmented fine-tuning significantly enhance the performance of the original fine-tuning method. The Impact of CoT as Additional Knowledge. Given that the answers within CoTs can potentially be incorrect, we hypothesize that this portion of the CoTs will have a negative effect on the fine-tuning and mislead the model's prediction. To further explore the effect of CoTs on fine-tuning, we compare the fine-tuning result of the PLMs before and after adding CoTs through a variety of data analyses. We investigate the extent to which the prediction results are altered when the model's input is expanded with a CoT. We perform fine-tuning on both the original samples (baseline) and the expanded samples (CoT-extended). Subsequently, we evaluate the fine-tuned models using the validation set. For each instance in the validation set, we compare its predictive result between the originally fine-tuned ALBERT and the CoT-augmented fine-tuning version. Additionally, we define three categories of CoTs during the process. - A CoT is labeled as a *positive CoT* if the addition of the CoT changes the prediction result from incorrect to correct. This indicates a beneficial ![3_image_0.png](3_image_0.png) ## Influence On The Model'S Prediction. - Conversely, a CoT is labeled as a *negative CoT* if the addition of the CoT changes the prediction result from correct to incorrect. This indicates a misleading effect on the model's prediction. - Furthermore, a CoT is labeled as a *neutral CoT* if the model's prediction result remains the same after the CoT is added. In such cases, it is not easy to judge the impact of this CoT on the model. The left figure in Figure 2 illustrates the ratio of positive, *neutral*, and *negative CoTs*. It is observed that among the model's prediction results that change after adding a CoT, the ratio is 36.2% (166 out of 458). Within this group, the ratio of positive CoTs is 61.4%, while the ratio of *negative* CoTs is 38.6%. These findings suggest that the model successfully resolves 63.3% (102/161, the number of positive CoTs divided by the number of incorrectly predicted samples in the baseline) of the data samples that were incorrectly predicted prior to adding a CoT. The second objective is to test our hypothesis that an incorrect CoT (the answer in the CoT is incorrect) may have a negative impact on the model and therefore mislead the prediction of the model. If an incorrect CoT is added to the original input text, what impact does it have on the model's prediction? As the right figure in Figure 2 shows, when an incorrect CoT is added to the original input, the model still has a high probability (17.1%) of not being misled by the incorrect CoT and making accurate predictions. Furthermore, we investigate the extent to which the model would mispredict when a correct CoT (the answer in the CoT is correct) is added. 
As shown in the figure on the right of Figure 2, the model has a low probability (5.0%) of making an incorrect prediction. In the case of StrategyQA, when the answer in the CoT is incorrect, the alignment ratio is 1 − Ratio (\#Not misled), which equals 82.9%; when the answer in the CoT is correct, the alignment ratio is 1 − Ratio (\#Not inspired), which equals 95.0%. The results demonstrate that the CoT is a powerful feature, and the model's predictions tend to align closely with the answers provided in the CoT. On the other hand, the fine-tuning strategy employed causes the model to treat the CoT as a secondary feature of the original input rather than strictly following it. In cases where the answer in the CoT is correct, the model is likely to align its predictions with the answer in the CoT. Conversely, when the answer in the CoT is incorrect, there is a relatively high probability that the model will deviate from the answer in the CoT, preventing the incorrect CoT from misleading it. In addition, our attempts to preserve the reasoning steps in the CoTs while removing the answers resulted in degraded performance. We recognize that the presence of incorrect answers in some CoTs can have a negative impact. However, we also believe that the inclusion of correct answers in CoTs can yield positive effects, and that the answers within CoTs are a more influential factor than the reasoning paths themselves.

## 4 CoT-KA

In this section, we propose CoT-KA, a CoT-based method for knowledge augmentation. Our method leverages multiple CoTs retrieved from LLMs to provide more auxiliary knowledge for KADL. CoT-KA consists of three steps, as shown in Figure 3: (1) CoT Generation: generating multiple CoTs for each sample in the train, dev, and test sets. (2) Input Augmentation: taking the generated CoTs as additional knowledge in the original input text of each sample. (3) Task-relevant Model Training: fine-tuning a task-relevant model using the CoT-augmented samples.

## 4.1 CoT Generation

We try both Few-Shot-CoT and Zero-Shot-CoT prompting on an LLM f to generate multiple CoTs. Formally, given an original sample (x^{(i)}, y^{(i)}), where x^{(i)} is the original input and y^{(i)} ∈ Y denotes the label, we generate a CoT set consisting of multiple CoTs based on the model f:

$$CoT^{(i)}=f(d,x^{(i)})\tag{1}$$

where d denotes the CoT demonstrations that inspire model f to generate CoTs, and CoT^{(i)} is the generated CoT set of the i-th sample, which consists of m CoTs:

$$CoT^{(i)}=\{CoT_{1}^{(i)},CoT_{2}^{(i)},...,CoT_{m}^{(i)}\}\tag{2}$$

For each sample, we independently generate m CoT outputs from f in each run.

## 4.2 Input Augmentation

In the second step, we apply the generated CoTs as additional knowledge to enrich the input text of the original samples. The extended input text of each sample is a concatenation of the original input (e.g., a question) and the generated multiple CoTs. For each sample, we construct an extended input text as follows:

$$\tilde{x}^{(i)}=concat(x^{(i)},CoT^{(i)})\tag{3}$$

where x̃^{(i)} is the i-th extended input text, x^{(i)} is the i-th original input, and CoT^{(i)} is the i-th generated CoT set. concat(·) is a concatenation function that concatenates the original input and the generated CoTs. More concretely:

$$concat(x^{(i)},CoT^{(i)})=x^{(i)}\;||\;[EXT]\;CoT_{1}^{(i)}\;||\;...\;||\;[EXT]\;CoT_{m}^{(i)}\tag{4}$$
where [ EXT ] is the special token to denote a CoT, and || denotes the concatenation operator. 5 ## Experiments Experimental Setup 5.1 Tasks and Datasets. We evaluate CoT-KA on the following reasoning benchmarks 3 . 3 By default we use the train, dev, and test split of all the datasets if the labels are available for evaluation. For CSQA and StrategyQA, we only use the train and dev split. | Commonsense | Arithmetic | | | | | | | | |-------------------------------------|--------------|--------------|-------------------|-------------------|----------------|------|------|------| | Method/Dataset | CSQA | StrategyQA | Date | Sports | AQuA | | | | | Dev | Dev | Dev | Test | Dev | Test | Dev | Test | | | Zero-Shot-CoT | 64.6* | 54.8* | 67.5* | 52.4* | 33.5* | | | | | Few-Shot-CoT | - (73.5*) | 68.3 (65.4*) | 54.7/47.4 (52.1*) | 83.2/86.7 (82.4*) | -/37.9 (35.8*) | | | | | Self-Consistency (5 Zero-Shot-CoTs) | 71.2 | 64.6 | 29.2/35.6 | 57.6/58.9 | 33.2/37.0 | | | | | Self-Consistency (5 Few-Shot-CoTs) | 77.6 | 73.6 | 53.4/50.1 | 85.4/90.5 | 40.6/40.2 | | | | | Baseline (ALBERT) | 61.8 | 62.2 | 33.2 | 33.5 | 57.2 | 53.2 | 25.6 | 22.7 | | CoT-KA (5 Zero-Shot-CoTs, ALBERT) | 73.6 | 66.1 | 58.6 | 64.1 | 68.8 | 69.6 | 42.3 | 40.2 | | CoT-KA (5 Few-Shot-CoTs, ALBERT) | 78.8 | 75.7 | 74.2 | 76.6 | 89.9 | 89.8 | 46.9 | 47.6 | | Baseline (DeBERTa) | 84.2 | 68.8 | 73.6 | 72.7 | 84.5 | 82.8 | 27.8 | 26.5 | | CoT-KA (5 Zero-Shot-CoTs, DeBERTa) | 80.3 | 72.3 | 69.2 | 73.8 | 91.3 | 90.5 | 40.1 | 40.3 | | CoT-KA (5 Few-Shot-CoTs, DeBERTa) | 82.0 | 76.9 | 80.4 | 78.0 | 96.9 | 95.6 | 45.9 | 46.5 | Table 2: Accuracy on five NLU datasets from two categories of reasoning tasks. For CSQA and StrategyQA, we report the evaluation results of the dev set. For the other datasets in which the labels are available, we report the results of both the dev and test. * indicates the results comes from (Wei et al., 2022) and (Kojima et al., 2022). The results of baseline methods and CoT-KA are based on ALBERT-large-v2 and DeBERTa-v3-large. "Baseline" denotes the fine-tuning baseline with original data. "5 Zero-Shot-CoTs" and "5 Few-Shot-CoTs" denotes five CoTs used at Self-Consistency and CoT-KA. Bold denotes the best-performed results. For Few-Shot-CoT, the results before and after the "/" symbol indicate the results of directly parsing the answers from the CoT (from Wei et al. (2022)) for the dev and test set, respectively, under our data partitioning. For Self-Consistency, the results before and after the "/" symbol represent the results obtained by parsing the answer from multiple CoTs (We generated) in the dev and test set, respectively, under our data partitioning and then applying majority voting. - **Commonsense reasoning.** We evaluate our method on four commonsense reasoning tasks: CSQA (Talmor et al., 2019), StrategyQA (Geva et al., 2021) and two benchmarks from the BIGbench effort (Srivastava et al., 2022): Date Understanding and Sports Understanding. - **Arithmetic reasoning.** We use six arithmetic reasoning benchmarks: AQUA-RAT (Ling et al., 2017), GSM8K (Cobbe et al., 2021), SVAMP (Patel et al., 2021), MultiArith (Roy and Roth, 2015), SingleEq (Koncel-Kedziorski et al., 2015), AddSub (Hosseini et al., 2014). - **Symbolic Reasoning.** We use the Last Letter Concatenation from Wei et al. (2022). 4 ## Implementation. 4We do not use the Coin Flip dataset for the evaluation because it is a simple classification task for fine-tuning. 
This is because ALBERT-large-v2 and DeBERTa-v3-large can already achieve 100% accuracy in the evaluation phase. - **CoT Generation Models.** We use GPT-3 of the text-davinci-002 engine with 175-billion parameters to generate the CoTs used in CoT-KA. - **CoT Demonstrations.** For a fair comparison, we perform Few-Shot-CoT with the same demonstrations as in Wei et al. (2022) and use the same zero-shot prompt as in Kojima et al. (2022) to perform Zero-Shot-CoT. - **Sampling Scheme.** To generate diverse CoTs, we apply temperature sampling during the CoT generation. Specifically, we use the same T=0.7 as in (Wang et al., 2022b) for a fair comparison. - **Data Preprocessing.** For certain undivided datasets, we divide them into train, dev, and test sets for fine-tuning, following a ratio of 6:2:2. Further details regarding the dataset splits can be found in Appendix A.1. Additionally, as the original questions and demonstrations used for CoT generation may include option information (e.g., Answer Choices: *(a) ignore ...(e) avoid*), | Arithmetic | Symbolic | | | | | | | | | | | | |-------------------------------------|----------------|-------------------|-------------------|-------------------|-------------------|------------|------|------|------|------|------|------| | Method/Dataset | GSM8K | SVAMP | MultiArith | SingleEq | AddSub | Letter (4) | | | | | | | | Dev | Test | Dev | Test | Dev | Test | Dev | Test | Dev | Test | Dev | Test | | | Zero-Shot-CoT | 40.7* | 63.7* | 78.7* | 78.7* | 74.7* | 57.6* | | | | | | | | Few-Shot-CoT | -/46.5 (46.9*) | 69.2/69.0 (68.9*) | 85.8/90.0 (91.7*) | 82.4/87.3 (86.6*) | 79.7/65.8 (81.3*) | (59.0**) | | | | | | | | Self-Consistency (5 Zero-Shot-CoTs) | 51.7/52.2 | 70.0/73.4 | 81.7/96.4 | 64.8/92.0 | 79.7/73.7 | 66.3/60.2 | | | | | | | | Self-Consistency (5 Few-Shot-CoTs) | 55.7/56.6 | 74.7/75.5 | 94.8/95.7 | 88.5/91.9 | 86.8/73.9 | 59.0/60.5 | | | | | | | | Baseline (T5) | 5.3 | 4.4 | 8.0 | 8.5 | 12.5 | 8.3 | 5.9 | 2.9 | 6.3 | 6.3 | 30.0 | 26.0 | | CoT-KA (5 Zero-Shot-CoTs, T5) | 58.9 | 57.3 | 64.2 | 82.3 | 82.7 | 93.3 | 62.9 | 73.3 | 80.3 | 74.9 | 75.9 | 60.4 | | CoT-KA (5 Few-Shot-CoTs, T5) | 61.2 | 61.5 | 71.8 | 70.8 | 81.8 | 95.3 | 76.7 | 75.7 | 86.6 | 78.7 | 71.8 | 69.8 | Table 3: Accuracy on six NLG datasets from two categories of reasoning tasks. * indicates the results comes from (Wei et al., 2022) and (Kojima et al., 2022) and ** denotes the result comes from (Zhang et al., 2022). | Question: Would Siduri enjoy an unlimited buffet? Blink: Siduri is a character in the "Epic of Gilgamesh". She is an "alewife", a wise female divinity associated with fermentation (specifically beer and wine). Few Shot CoT: Siduri is a fairy in Irish mythology. She was known for her hospitality, so she would probably enjoy an unlimited buffet. So the answer is yes. 
| | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | StrategyQA | Question: Will Fuller was perfect from the line? Blink: William Vincent Fuller V (born April 16, 1994) is an American football wide receiver for the Houston Texans of the National Football League (NFL). He was drafted by the Texans in the first round of the 2016 NFL Draft. He played college football at Notre Dame. Few Shot CoT: Will Fuller is a football player. Being perfect from the line is part of basketball, not football. So the answer is no. | | Sports | | Table 4: Knowledge augmentation examples from commonsense reasoning tasks. The first case comes from StrategyQA. In this case, the description of Siduri does not mention the relationship between Siduri and the unlimited buffet, which is the key to answering the question. The second case comes from Sports Understanding. In this case, we need to know that being perfect from the line is part of basketball, and Will Fuller is a football player, while the entity-knowledge can only provide the latter. the generated CoT will also contain option markers (e.g., the answer is (a)). To provide valuable information within the CoTs, we replace the option markers in the generated CoT with their corresponding textual content (e.g., the answer is "ignore"). - **Classifier Models.** We conduct the main experiments using two backbone PLMs: ALBERTlarge-v2 and DeBERTa-v3-large. The hyperparameters for the training process are reported in Appendix A.2. Baselines. We take three methods as the baselines: Zero-Shot-CoT, Few-Shot-CoT, and SelfConsistency. Furthermore, to demonstrate the extent to which the CoT knowledge elicits the KADL, we also compare our method with the original finetuning baselines, which solely employ the original text for fine-tuning. ## 5.2 Main Results Table 2 compares the accuracy across eleven datasets from three categories of NLU and NLG tasks. The Zero-Shot-CoT results are taken from Kojima et al. (2022), and the Few-Shot-CoT results are taken from Wei et al. (2022). For SelfConsistency (5 sampled CoTs), we report the result based on a majority vote. The CoT-KA results are averaged over at least five random runs (see Appendix for more details), where we use the different seeds to sample 5 CoTs from a CoT set containing 10 generated CoTs in each run. As shown in Table 2 and 3, the performance of CoT-KA surpasses all baselines on most tasks. We have made several findings: (1) The CoTs generated by Zero-Shot-CoT and Few Shot-CoT can be utilized with CoT-KA, resulting in significantly improved performance compared to the fine-tuning baselines. Additionally, the CoTs generated by Few-Shot-CoT exhibit better performance compared to Zero-Shot-CoT when they are used with CoT-KA. 
(2) CoT-KA achieves better performance on the NLU tasks than on the NLG tasks. (3) CoTKA shows different robustness on different models. While DeBERTa outperforms ALBERT on most tasks, CoT-KA is more robust on ALBERT and exhibits performance improvements across all tasks. ## 5.3 Knowledge Augmentation Comparison To compare CoT-KA with other knowledge augmentation methods, we employ BLINK (Wu et al., 2020) to enrich the entity knowledge in the question. BLINK is a two-stage entity linking approach based on BERT (Kenton and Toutanova, 2019). We use BLINK to link the entities mentioned in the question and retrieve their corresponding entity information. BLINK provides a short description for each entity, which we utilize as extensions to enrich the questions. | Method/Dataset | StrategyQA | Sports | | |--------------------|--------------|----------|------| | Baseline (ALBERT) | 62.2 | 57.2 | 53.2 | | BLink (ALBERT) | 58.0 | 81.3 | 77.4 | | CoT-KA (ALBERT) | 75.7 | 89.9 | 89.8 | | Baseline (DeBERTa) | 68.8 | 84.5 | 82.8 | | BLink (DeBERTa) | 67.7 | 92.5 | 87.5 | | CoT-KA (DeBERTa) | 76.9 | 96.9 | 95.6 | Table 5: Knowledge augmentation comparison. As shown in Table 5, the entity knowledge-based augmentation method improves performance on Sports Understanding but has a negative impact on StrategyQA, with both performing worse than our method. Additionally, we observe that approximately 29% of questions in StrategyQA and 3% in Sports Understanding could not have entities extracted. Furthermore, the average number of recognized entities in a Sports Understanding question is 1.095, while in StrategyQA, it is 0.928. Moreover, Table 4 demonstrates that entity information may not always include the specific information required by the questions. In contrast, our method can add more useful information, resulting in a more substantial improvement. ## 5.4 The Effect Of Cot Size To demonstrate the effect of the number of sampled CoTs, we vary the number of sampled CoTs (1, 2, 3, 4, 5) in CoT-KA and evaluate on StrategyQA. The results are shown in Figure 4. The experimental results indicate that as the number of CoTs increases, ![7_image_0.png](7_image_0.png) there is a general upward trend in the performance of CoT-KA . This trend becomes more pronounced when the CoTs are generated by Few-Shot-CoT. More results are reported in Appendix B. ## 5.5 Cot Selection Strategy CoT-KA can only extend a small number of CoTs due to the maximum length limitation of the input sequence that the language model can handle. Therefore, it is natural to consider designing a CoT selection strategy to choose higher-quality CoTs from the generated CoT set for KADL. Each CoT can be expressed as: ti ∈ {t1, t2*, ..., t*K} , where ti is the i-th token. We can get the *log prob* of each generated token when using GPT3 API to generate reasoning chains. The *log prob* refers to the natural logarithm of the probability that the token occurs next given the prompt. To select the 5 reasoning chains with higher confidence from the 10 generated CoTs, we score the generated CoTs using the following formula: $$\begin{array}{c}score(Cot_{j})=\frac{\sum_{i=1}^{K_{j}}\exp(\log p(t_{i}))}{K_{j}}\\ =\frac{\sum_{i=1}^{K_{j}}p(t_{i})}{K_{j}}\end{array}\tag{5}$$ where p(ti) denotes the probability of generating the i-th token, and log denotes the logarithm. and Kj is the total number of tokens in the j-th CoT. 
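As an illustration, the following is a minimal Python sketch of this scoring and selection step, assuming the per-token log probabilities returned by the GPT-3 API are available as one list per generated CoT; the function names are ours, not from the paper's released code.

```python
import math
from typing import List

def cot_score(token_logprobs: List[float]) -> float:
    """Eq. (5): average token probability of one generated CoT.

    `token_logprobs` holds the log probability of each generated token,
    e.g., as returned alongside a GPT-3 completion.
    """
    return sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

def select_top_cots(cots: List[str], logprobs: List[List[float]], k: int = 5) -> List[str]:
    """Keep the k highest-confidence CoTs from the generated set (here, 5 of 10)."""
    ranked = sorted(zip(cots, logprobs), key=lambda pair: cot_score(pair[1]), reverse=True)
    return [cot for cot, _ in ranked[:k]]
```

In this sketch, the five selected CoTs would then be concatenated to the original input exactly as the randomly sampled CoTs are in CoT-KA.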
The results shown in Table 6 demonstrate that selecting CoTs from the generated set based on the probability of token generation in the sentence does not lead to a significant improvement in the performance of CoT-KA . ## 6 Conclusion And Future Work This paper introduces a CoT-based method to retrieve knowledge from LLMs for KnowledgeAugmented deep learning (CoT-KA) that elicits | Method | StrategyQA | |----------------------------------|--------------| | CoT-KA (ALBERT) | 75.7 | | CoT-KA (ALBERT) + CoT Selection | 75.9 | | CoT-KA (DeBERTa) | 76.9 | | CoT-KA (DeBERTa) + CoT Selection | 76.9 | Table 6: CoT selection strategy based on the *log prob* knowledge augmentation on a variety of NLU and NLG benchmarks. Unlike conventional knowledge augmentation approaches, our method does not require a retriever or a reasoner, yet it surpasses the performance of conventional knowledge-based methods and other CoT-based approaches across a range of public NLP tasks. In the future, it is worthwhile to investigate other methods that can provide insights from LLMs. Exploring new approaches for leveraging the capabilities of LLMs to enhance knowledge augmentation represents a promising area for future research. ## 7 Limitations One limitation of CoT-KA is that it performs finetuning based on the PLMs, and the input sequence length limit of the PLMs allows us to add only a limited number of CoTs. Therefore, it is important to explore and develop a CoT selection strategy in future research. A good CoT selection strategy would enable the identification of highly effective CoTs from a set of CoTs, enhancing the efficiency of KADL. ## Acknowledgments This work is supported by National Natural Science Foundation of China 62076245; CCF-Zhipu AI Large Model Fund. ## References Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In *Proceedings of the 2008 ACM SIGMOD international conference on Management of* data, pages 1247–1250. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In *Proceedings of the 55th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. *arXiv preprint arXiv:2110.14168*. Zijun Cui, Tian Gao, Kartik Talamadupula, and Qiang Ji. 2022. Knowledge-augmented deep learning and its applications: A survey. *arXiv preprint* arXiv:2212.00017. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. *Transactions of the* Association for Computational Linguistics, 9:346– 361. Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. 
Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. *arXiv preprint arXiv:2111.09543*. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In *Proceedings of the 2014 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 523–533, Doha, Qatar. Association for Computational Linguistics. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In *Proceedings of the 16th* Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, Online. Association for Computational Linguistics. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781. Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. *arXiv preprint* arXiv:2205.11916. Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. *Transactions of the Association for Computational Linguistics*, 3:585–597. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. In *International Conference on Learning Representations*. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 158–167, Vancouver, Canada. Association for Computational Linguistics. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are nlp models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at trec-3. *Nist Special Publication Sp*, 109:109. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1743–1752. Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for ifthen reasoning. In *Proceedings of the AAAI conference on artificial intelligence*, volume 33, pages 3027–3035. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. 
Conceptnet 5.5: An open multilingual graph of general knowledge. In *Thirty-first AAAI conference on* artificial intelligence. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In *Proceedings of NAACL-HLT*, pages 4149– 4158. Siyuan Wang, Wanjun Zhong, Duyu Tang, Zhongyu Wei, Zhihao Fan, Daxin Jiang, Ming Zhou, and Nan Duan. 2022a. Logic-driven context extension and data augmentation for logical reasoning of text. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1619–1629. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022b. Self-consistency improves chain of thought reasoning in language models. *arXiv preprint arXiv:2203.11171*. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zeroshot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6397–6407. Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng. 2019. Reclor: A reading comprehension dataset requiring logical reasoning. In *International Conference on Learning Representations*. Jing Zhang, Xiaokang Zhang, Daniel Zhang-Li, Jifan Yu, Zijun Yao, Zeyao Ma, Yiqi Xu, Haohua Wang, Xiaohan Zhang, Nianyi Lin, et al. 2023. Glmdialog: Noise-tolerant pre-training for knowledgegrounded dialogue generation. *arXiv preprint* arXiv:2302.14401. Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2022. Automatic chain of thought prompting in large language models. *arXiv preprint* arXiv:2210.03493. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. *arXiv preprint* arXiv:2303.18223. ## A Implementation Detail | Dataset | #Number of samples | We divide the dataset | | | |-------------|----------------------|-------------------------|------|-----| | Train | Dev | Test | | | | CSQA | 9741 | 1221 | 1140 | No | | StrategyQA | 1831 | 458 | 490 | No | | Date | 221 | 74 | 74 | Yes | | Sports | 600 | 200 | 200 | Yes | | AQUA | 5000 | 254 | 254 | Yes | | GSM8K | 5978 | 1495 | 1319 | Yes | | SVAMP | 600 | 200 | 200 | Yes | | MuitiArith | 360 | 120 | 120 | Yes | | Single Eq | 304 | 102 | 102 | Yes | | Add Sub | 237 | 79 | 79 | Yes | | Last Letter | 600 | 200 | 200 | Yes | ## A.1 Datasets Table 7: Summary of the datasets we use in this paper. For datasets that are not pre-divided into train, dev, and test sets, we conduct the division ourselves. For some undivided datasets used in this paper, we divide them into train, dev, and test sets for finetuning, following a ratio of 6:2:2. Table 7 shows the division details of each dataset. In the case of AQUA, the raw training set is too large (97467 samples). 
To mitigate the computational cost of generating multiple CoTs using the public GPT3 API, we select a subset of 5000 samples (the top 5000) from the raw train set as our train set. ## A.2 Hyper-Parameters For Fine-Tuning All experiments are conducted in a Linux environment with a single (24G) NVidia RTX 3090 GPU. The model is optimized using the AdamW optimizer. We do not perform an exhaustive hyperparameter search, but only adjust the learning rate prior to the formal experiment. For most experiments in this paper, a learning rate of 1e-5 is chosen as the final value for fine-tuning ALBERT and DeBERTa, except in the following cases for CSQA and StrategyQA: - CSQA: A learning rate of 2e-5 is used for CoT-KA (1 Zero-Shot-CoT, ALBERT). - StrategyQA: A learning rate of 5e-6 is used for CoT-KA (1 Zero-Shot-CoT, ALBERT), CoT-KA (1 Few-Shot-CoT, DeBERTa) and CoT-KA (5 Few-Shot-CoTs, both ALBERT and DeBERTa). More hyper-parameters are shown in Table 8. The random seed set utilized for experiments is [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]. Table 8: Hyper-parameters for fine-tuning. | ALBERT/DeBERTa | T5 | | |--------------------|-------|-------| | Batch Size | 16 | 16 | | Peak Learning Rate | 1e-5 | 1e-5 | | Training Steps | 2000 | 2000 | | Warmup Proportion | 0.1 | 0 | | Weight Decay | 0 | 0 | | Adam ϵ | 1e-8 | 1e-8 | | Adam β1 | 0.9 | 0.9 | | Adam β2 | 0.999 | 0.999 | These seeds are used for both CoT sampling and fine-tuning. For the case of experimental results averaged over five runs, we use the top five seeds from the seed set. For NLU tasks, most experimental results in Table 2 are averaged over ten runs, except for the following cases: - CoT-KA (5 Zero-Shot-CoTs) on all NLU tasks are averaged over five runs. - CoT-KA (5 Few-Shot-CoTs) on AQUA is averaged over five runs. For NLG tasks, most results in Table 3 are averaged over ten runs, with the exception of CoTKA (5 Zero-Shot-CoTs) and CoT-KA (5 Few-ShotCoTs), which are averaged over five runs. The result for Blink in Table 5 are averaged over five runs. All the new results in Section 5.4 and Appendix B, where the number of sampled CoTs ranges from 1 to 4, are averaged over five runs. ## B More Results About The Effect Of Cot Size In Cot-Ka We vary the number of sampled CoTs (1, 5) in CoTKA and evaluate its performance on ten tasks, excluding StrategyQA. Figures from 5 to 14 indicate that in most of these tasks, increasing the number of CoTs from 0 to 1 significantly improves task performance. However, when using DeBERTa-v3large as the PLM, the performance gain in CoT-KA for CSQA, Date Understanding, and Sports Understanding is slight and even leads to a degradation. Furthermore, increasing the number of CoTs from 1 to 5 has a relatively small performance gain in CoTKA (DeBERTa), except for improved Date Understanding and continued degradation in CSQA. We observe that if the baseline, where the dataset is not augmented by a CoT, starts with a lower performance, the performance gain in CoT-KA becomes more significant as the number of CoTs increases. 
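For concreteness, the following is a minimal sketch of the fine-tuning setup summarized in Table 8, written with the Hugging Face Trainer; the output directory, the number of labels, and the dataset objects are placeholders rather than the exact training script, and the datasets are assumed to be already tokenized with the sampled CoTs appended to each question.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

def build_trainer(train_dataset, eval_dataset,
                  model_name: str = "microsoft/deberta-v3-large",
                  num_labels: int = 2, seed: int = 0) -> Trainer:
    """Assemble a Trainer with the hyper-parameters of Table 8 (illustrative)."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=num_labels)
    args = TrainingArguments(
        output_dir="cot-ka-ckpt",
        per_device_train_batch_size=16,  # Table 8: batch size
        learning_rate=1e-5,              # Table 8: peak learning rate
        max_steps=2000,                  # Table 8: training steps
        warmup_ratio=0.1,                # Table 8: warmup proportion (0 for T5)
        weight_decay=0.0,                # Table 8: weight decay
        adam_epsilon=1e-8,               # Table 8: Adam epsilon
        adam_beta1=0.9,
        adam_beta2=0.999,
        seed=seed,                       # one of the seeds {0, 10, ..., 90}
    )
    return Trainer(model=model, args=args, tokenizer=tokenizer,
                   train_dataset=train_dataset, eval_dataset=eval_dataset)
```

A run would then repeat `build_trainer(...).train()` with different seeds and average the resulting accuracies, mirroring the averaging protocol described above.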
[Figures 5 to 14: accuracy of CoT-KA as the number of sampled CoTs varies on the remaining ten tasks; y-axis: Accuracy.]

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work? Section 5 Experiments, Section 7 Limitations.
✗ A2. Did you discuss any potential risks of your work? Our paper mainly used GPT3 and chain-of-thought prompting for application, the risk of large language models and chain-of-thought prompting were discussed in references we cited.
✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract, Section 1 introduction.
✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did you use or create scientific artifacts?** Section 3 Pilot Study, Section 5 Experiments.
✓ B1. Did you cite the creators of artifacts you used? Section 1 introduction, Section 3 Pilot Study, Section 5 Experiments.
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We used the same publicly available dataset as in the existing work, and we did not discuss this matter specifically.
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We used the same publicly available dataset as in the existing work, and we did not discuss this matter specifically.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We used the same publicly available dataset as in the existing work, and we did not discuss this matter specifically.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We used the same publicly available dataset as in the existing work, and we did not discuss this matter specifically.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix: A.2 Datasets.

The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.

## C ✓ **Did you run computational experiments?** Section 3 Pilot Study, Section 5 Experiments.
✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix: A.1 Implementation.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 Experiments, Appendix: A.1 Implementation.
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 Experiments, Appendix B.
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 1 Introduction, Section 5 Experiments, Appendix: A.1 Implementation.

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
wu-etal-2023-tacr
TACR: A Table Alignment-based Cell Selection Method for HybridQA
https://aclanthology.org/2023.findings-acl.409
Hybrid Question-Answering (HQA), which targets reasoning over tables and passages linked from table cells, has witnessed significant research in recent years. A common challenge in HQA and other passage-table QA datasets is that it is generally unrealistic to iterate over all table rows, columns, and linked passages to retrieve evidence. This challenge has made it difficult for previous studies to show their reasoning ability in retrieving answers. To bridge this gap, we propose a novel Table-alignment-based Cell-selection and Reasoning model (TACR) for hybrid text and table QA, evaluated on the HybridQA and WikiTableQuestions datasets. In evidence retrieval, we design a table-question-alignment enhanced cell-selection method to retrieve fine-grained evidence. In answer reasoning, we incorporate a QA module that treats the row containing selected cells as context. Experimental results over the HybridQA and WikiTableQuestions (WTQ) datasets show that TACR achieves state-of-the-art results on cell selection and outperforms fine-grained evidence retrieval baselines on HybridQA, while achieving competitive performance on WTQ. We also conduct a detailed analysis to demonstrate that aligning questions to tables in the cell-selection stage yields important gains, with over 90% table row and column selection accuracy, while also improving output explainability.
# Tacr: A Table-Alignment-Based Cell-Selection And Reasoning Model For Hybrid Question-Answering Jian Wu1∗, Yicheng Xu2∗, Yan Gao3**, Jian-Guang Lou**3 Börje F. Karlsson4**, Manabu Okumura**1 1Tokyo Institute of Technology 2Nanyang Technological University 3Microsoft Research Asia 4Beijing Academy of Artificial Intelligence wu.j.as@m.titech.ac.jp, yxu040@e.ntu.edu.sg, {yan.gao, jlou}@microsoft.com, borje@baai.ac.cn, oku@pi.titech.ac.jp ## Abstract Hybrid Question-Answering (HQA), which targets reasoning over tables and passages linked from table cells, has witnessed significant research in recent years. A common challenge in HQA and other passage-table QA datasets is that it is generally unrealistic to iterate over all table rows, columns, and linked passages to retrieve evidence. Such a challenge made it difficult for previous studies to show their reasoning ability in retrieving answers. To bridge this gap, we propose a novel Table-alignment-based Cell-selection and Reasoning model (TACR) for hybrid text and table QA, evaluated on the HybridQA and WikiTableQuestions datasets. In evidence retrieval, we design a table-question-alignment enhanced cellselection method to retrieve fine-grained evidence. In answer reasoning, we incorporate a QA module that treats the row containing selected cells as context. Experimental results over the HybridQA and WikiTableQuestions (WTQ) datasets show that TACR achieves stateof-the-art results on cell selection and outperforms fine-grained evidence retrieval baselines on HybridQA, while achieving competitive performance on WTQ. We also conducted a detailed analysis to demonstrate that being able to align questions to tables in the cell-selection stage can result in important gains from experiments of over 90% table row and column selection accuracy, meanwhile also improving output explainability. ## 1 Introduction Text-based question-answering datasets derive answers based on reasoning over given passages (Rajpurkar et al., 2016; Chen et al., 2017; Joshi et al., 2017; Yang et al., 2018), while table-based QA datasets collect tables from sources such as WikiTables (Pasupat and Liang, 2015a; Zhong et al., 2017; Chen et al., 2019). However, datasets combining textual passages and tables, like HybridQA (Chen *indicates equal contribution. et al., 2020b), OTT-QA (Chen et al., 2020a), and TAT-QA (Zhu et al., 2021) are more realistic benchmarks. As the answer to a given question may come from either table cells or linked passages, current hybrid QA models usually consist of two components, a retriever to learn evidence and a reasoner to leverage the evidence to derive answers. Such models retrieve evidence from different granularities, coarse-grained (e.g., row or column) or fine-grained (e.g., cell), and directly use a spanbased reading comprehension model to reason the answer. Kumar et al. (2021), for example, chooses coarse-grained regions as evidence, e.g., a table row. Chen et al. (2020b) and Eisenschlos et al. (2021), however, focus on fine-grained units, table cells and linked passages. To preserve the advantages and eliminate the disadvantages of differentgranularity evidence, Sun et al. (2021a) propose MuGER,2 which performs multi-granularity evidence retrieval and answer reasoning. Wang et al. (2022) conducts extensive experiments to prove that a coarse-grained retriever contributes less than a fine-grained retriever. 
Moreover, fine-grained methods, although giving an exact position of candidate cells, fail to illustrate why the selected cells are chosen, while our method is based on row and column selection probabilities. We thus further extend the fine-grained method by aligning questions with tables, letting our approach know which parts of questions are accounted for by which modalities. Intuitively, multi-hop questions in the text-table QA task usually contain two pieces of information from different modalities, tables and passages. Moreover, tables and passages are connected with evidence contained in tabular data. Our method implicitly decomposes the questions for different modalities to locate evidence and improve cell-selection accuracy. As illustrated in Figure 1, an example from the HybridQA dataset shows how humans work on ![1_image_0.png](1_image_0.png) multi-hop and multi-modal QA tasks. The original question *"What is the middle name of the player* with the second most National Football League career rushing yards ?" can be divided into two parts, *"What is the middle name of"* and "the player with the second most National Football League career rushing yards?" for passages and tables, respectively. Such sub-questions are connected with the evidence of a cell ( *"Walter Payton"*). For humans, we first locate who was the player in the second rank, which requires information from two columns: *"Rank"* and *"Player"*. After locating the cell, we can finally determine Walter Payton's middle name from the passage. Such reasoning process inspired us to develop TACR, a Table-alignmentbased Cell-selection and Reasoning model, which incorporates a fine-grained evidence-retrieval module that utilizes table-question-alignment to learn which parts of the question are used for retrieving evidence from different modalities and reasoning towards answers. To explicitly and correctly show the reasoning process in the text-table QA task, in the evidence retrieval stage, TACR first selects the golden cells and avoids redundant information in multi-granularity evidence that would lower the performance of the answer-reasoning module. The table-cellselection module of TACR is designed to navigate the fine-grained evidence for the reader by fusing well-learned information from the table-questionalignment module. Compared with current finegrained retrievers, the table-question-alignment module of TACR can help our model learn which parts of questions are used for reasoning in which modality, and which parts of tables contain candidate cells. Together with the alignment module, TACR preserves both high golden cell-selection accuracy and shows competitive performance on the HybridQA and WikiTableQuestions (WTQ) datasets, while providing improved explainability. Our contributions are as follows: (1) TACR is the first model able to explicitly show its reasoning process in the passage-table QA task; (2) We jointly train the cell-selection and table-question alignment modules to improve golden cell selection performance and preserve the QA reader's performance; and (3) We conduct extensive experiments on the HybridQA and WTQ datasets to demonstrate the effectiveness of TACR. ## 2 Related Work 2.1 Table Question Answering Table QA has gained much attention, as shown by benchmark datasets such as WikiTableQuestions (Pasupat and Liang, 2015b), WikiSQL (Zhong et al., 2018), SPIDER (Yu et al., 2018), and TABFACT (Chen et al., 2019). 
However, these datasets mainly focus on reasoning on tables and ignore important knowledge stored in the textual corpus. Consequently, QA covering both tabular and textual knowledge has gained increasing interest. Chen et al. (2020b) pioneered a passage-table QA benchmark, HybridQA, with Wikipedia tables linked to relevant free-form text passages (e.g., Wikipedia entity-definition pages). The OTT-QA (Chen et al., 2020a) benchmark extended HybridQA to the open domain setting, where a system needs to retrieve a relevant set of tables and passages first before trying to answer questions. Moreover, the links from the table and passage are not provided explicitly. ## 2.2 Table-Question Alignment There are several table-question-alignment methods. Schema-linking-based methods, such as RATSQL (Wang et al., 2019), introduce a relation-aware transformer encoder to improve the joint encoding of a question and schema. Liu et al. (2022) propose a similarity learning-based question-schemaalignment method to obtain a semantic schemalinking graph and observed how the pre-trained language model (PLM) embeddings for the schema items are affected. Zhao and Yang (2022) use the same words that appear in both the natural language statement and the table as weak supervised key points and design an interaction network to explore the correlation between the representations of the natural language statements and tables. ## 2.3 Hybrid Qa Studies on hybrid QA usually retrieve different granularities of evidence from heterogeneous data to retrieve the final answer. Hybrider, proposed by Chen et al. (2020b), is a two-phase pipeline framework to retrieve gold table cells as evidence and input their values and linked passages into a QA model to extract the final answer. Sun et al. (2021b) proposes Dochopper, an end-to-end multihop retrieval model that directly concatenates rows with related textual evidence as its inputs. Pan et al. (2020) explores an unsupervised multi-hop QA model, called MQA-QG, which can generate human-like multi-hop questions by building a reasoning graph from heterogeneous data resources. Kumar et al. (2021) propose MITQA, which applies multiple-instance training objectives to retrieve coarse-grained evidence. On the contrary, Eisenschlos et al. (2021) introduce a transformerbased model with row- and column-wise attentions for fine-grained evidence retrieval, e.g., table cells. Wang et al. (2022) propose a unified retriever that tries to preserve the advantages and eliminates the disadvantages of different-granularity evidence retrieval methods. TACR differs from the above models mainly in two aspects: (1) TACR focuses on providing an explicit reasoning process by aligning multi-hop questions to tables, so it learns which parts of multi-hop questions are accounted for by retrieving evidence from which modality; and (2) The table-question alignment can enhance the reasoning ability of the table cell selection module with the help of our generated hybrid alignment dataset. TACR shows competitive performance to that of other table QA models on the HybridQA and WTQ datasets on the basis of high row, column, and cell selection accuracy. To the best of our knowledge, no texttable QA system handles the challenge of explicitly showing its reasoning process and multi-hop question table alignment. ## 2.4 Table Cell Retrieval Jauhar et al. 
(2016) construct a multiple-choice table QA benchmark that includes over 9000 question-table pairs via crowd-sourcing and proposed a table-cell search model based on calculating all relevance scores between each cell and question. Such a model is reasonable and intuitive but time-consuming. TACR selects gold cells based on row and column selection. Suppose that a table contains n rows and m columns; the table cell search method must calculate n∗m scores for each cell, while TACR needs to calculates only n + m scores for each row and column, and selects the gold cell in the row and column with the highest score. Sun et al. (2016) focus on extracting entities from questions and building a row graph and then mapping the question to the pair of cells in the same row of a table. However, some entities may not appear in both questions and table cells, e.g., an entity of the question in Figure 1 that should be extracted is *National Football League*, but it cannot be mapped into any cells. ## 3 Framework As described in the previous section, both coarseand fine-grained approaches fail to provide a reasoning process showing which parts of multi-hop questions map to which modality and evidence. Here we describe the details of TACR and its three main components: (1) data augmentation for training the table-question alignment module; (2) a multi-task learning module for table-question alignment and table-cell-selection; and (3) a text-based multi-hop QA module for retrieving answers. Figure 2 shows the overall architecture of TACR. ## 3.1 Task Definition Given a question Q (a sequence of tokens) and N rows of table T together with linked passages P, where each table column has a header h i=M i=1 (M is 6537 ![3_image_0.png](3_image_0.png) the number of table headers), the task is to find a candidate cell ci,j that contains the answer α. ## 3.2 Data Construction We generate multi-hop questions from tables and linked passages, as well as table-question alignment labels from questions and table columns for training the table-question-alignment module. However, such supervision information is not offered in the HybridQA dataset and other text-table QA datasets, which makes the alignment task difficult. We use an unsupervised text-table QAgeneration method to generate questions as well as alignment labels. Alignment Generation. We follow the settings of the MQA-QG method (Pan et al., 2020), using a pre-trained Google T5 (Raffel et al., 2019), fine-tuned on the SQuAD dataset (Rajpurkar et al., 2018), to generate multi-hop questions from tables and passages based on a bridge entity, a table cell that contains the bridge entity, and a linked passage that describes the bridge entity. The bridge entity is critical in reasoning because it connects the tables and passages, which are difficult to locate in the original HybridQA dataset. Such bridge entity provides us with additional information to align table headers with generated questions based on the column containing golden cells and the column containing the bridge entity. We align the columns which contain bridge entities and answers to questions following two schema-linking alignment rules: Name-based Linking. This rule refers to exact or partial occurrences of the column/table names in the question, such as the occurrences of "player" in the question in Figure 1. Textual matches are the most explicit evidence of table-question alignment and, as such, one might expect them to be directly beneficial to the table-question alignment module. 
Value-based Linking. Table-question alignment also occurs when the question mentions any values that occur in the table and consequently participate in the table-cell selection, such as "the second most" in Figure 1. While it is common for examples to make the alignment explicit by mentioning the column name (e.g., "Rank"), many real-world questions do not (like in the example). Consequently, linking a value mentioned in the question to the corresponding column also requires background knowledge. ## 3.3 Passage Filtering In this stage, we aim to filter out linked passages unrelated to a question, namely keeping almost noisefree passages for the following modules. Moreover, the total number of tokens in passages linked to table cells can be large, exceeding the maximum input sequence length of current LMs. Thus, we utilize Sentence-BERT (Reimers and Gurevych, 2019) to obtain question and passage embeddings and rank the top-k sentences based on their text similarities. We expand the cells with the filtered top k-related sentences to both fit in the max input length of language models and to preserve the useful information from passages. More details on this stage are provided in Appendix A. ## 3.4 Table Alignment & Cell Selection In this stage, we jointly train a multi-task model with the objectives of selecting the expanded cell that contains the answer and table-question alignment to different modalities to enhance the previous objective. TACR accepts the full table as inputs and outputs the probabilities of selected cells based on the probabilities of row and column selection. ## 3.4.1 Table-Question Alignment Given a natural language question Q = q1*, ....q*|Q| , a table consisting of several column headers C = c1*....c*|C| , and the corresponding table-question alignment labels L = l1*, ...l*|C| where li ∈ [0, 1] (0 meaning the column header is unrelated to the question Q and 1 meaning the column header is related to Q). The goal of our table-question alignment module is to learn the relevance between table-column headers and questions. Table-question relations aid TACR by aligning column references in the question to the corresponding table columns. We first feed the questions and table columns into the pre-trained model and map them into hidden representations. The question and tablecolumn headers can be denoted as q1*, ....q*|Q| and c1*....c*|C| , respectively. Our goal is to induce a function f(qi, cj ) to capture the relevance of a question word qi has on the representation of column header cj . Figure 3 shows the structure of the alignment module. ![4_image_0.png](4_image_0.png) Specifically, we use ALBERT (Lan et al., 2019) as the encoder to learn the representations of tables and column headers. Here we concatenate column headers as a pseudo sentence. The representations of the question (hq) and the column headers sequence (hc) are first computed independently. The relevance where each column header ciis the target of the question is then given by using softmax. The respective equations are as follows: $$h_{q}=\text{BERT}(\text{Q}),\tag{1}$$ $$h_{c}=\text{BERT}(\text{C}),$$ (2) $$p(C_{i}\in C)=\text{softmax}(W(h_{q}*h_{c})+b).\tag{3}$$ ## 3.4.2 Table-Cell Selection Inspired by the previous idea of modeling the attention on rows and columns (Eisenschlos et al., 2021), we design a cell-selection module based on row and column selection. 
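Before turning to the row- and column-level scores, the following is a minimal PyTorch sketch of the alignment scorer in Eqs. (1)-(3). Encoding each column header separately with the shared encoder and pooling with its [CLS] vector are our assumptions (the paper concatenates the headers into one pseudo-sentence), and the class and checkpoint names are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn
from typing import List
from transformers import AutoModel, AutoTokenizer

class QuestionHeaderAligner(nn.Module):
    """Sketch of the table-question alignment scorer of Section 3.4.1."""

    def __init__(self, encoder_name: str = "albert-base-v2"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.scorer = nn.Linear(hidden, 1)  # the W(.) + b term of Eq. (3)

    def _cls(self, text: str) -> torch.Tensor:
        enc = self.tokenizer(text, return_tensors="pt", truncation=True)
        return self.encoder(**enc).last_hidden_state[:, 0]  # [1, hidden]

    def forward(self, question: str, headers: List[str]) -> torch.Tensor:
        h_q = self._cls(question)                                   # Eq. (1)
        h_c = torch.cat([self._cls(h) for h in headers], dim=0)     # Eq. (2), per header
        logits = self.scorer(h_q * h_c).squeeze(-1)                 # interaction term
        return torch.softmax(logits, dim=-1)                        # Eq. (3): relevance over headers
```

During training, these header relevance scores would be compared against the generated 0/1 alignment labels, which corresponds to the BCE term of the joint loss in Eq. (4) below.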
The probabilities of each row and column are computed, and the cells with the top-k highest scores are returned as candidate answers or used to locate the relevant passage. However, unlike in MATE (Eisenschlos et al., 2021), we can derive the probabilities of candidate cells from the probabilities of rows and columns. We utilize the Row-Column-Intersection (RCI) model, designed for the single-hop table-QA task (Glass et al., 2021) (based on ALBERT (Lan et al., 2019)), as our backbone and decompose the table QA task into two subtasks: projection, corresponding to identifying columns; and selection, identifying rows. Every row and column identification is a binary sequence-pair classification. We concatenate the question as the first sequence and the row or column as the second sequence, and feed the two concatenated sequences, with the standard separator tokens [CLS] and [SEP], as the input to the model. The representation of the final hidden state is sent to a linear layer, followed by a softmax, to classify whether the row or column contains the answer. Each row and column is thus assigned a probability of containing the answer, and this module finally outputs the top-k cells according to the sum of row and column probabilities. Therefore, given a table T with N rows and M columns, we obtain two sets of scores from the RCI model: $P_r = \{p_1, \ldots, p_N\}$ for rows and $P_c = \{p_1, \ldots, p_M\}$ for columns. We then calculate the overall probability score for each cell. The final training loss is the summation of the table-question-alignment loss, table-row-selection loss, and table-column-selection loss:

$$L = L_{\mathrm{row}} + L_{\mathrm{column}} + \sigma \times \mathrm{BCE}(\mathit{pred\_headers}, \mathit{target\_headers}),\tag{4}$$

where σ is a hyper-parameter to balance cell-selection training and table-question-alignment training. The details of choosing the best σ are provided in Appendix C.

## 3.5 Passage Question-Answering

Previous research often simply treats answer reasoning as a span-extraction task, considers the first span matching the answer text to be the gold span, and uses it for training. This is problematic because the answer text may appear in multiple passages, of which only one is correct; using all text matches to train span extraction may therefore introduce a large amount of training noise. Since not all matched instances are gold answer spans related to the question, after obtaining the top-k cells from the cell-selection module, we train the text-based QA module to predict the final answer while also taking the cell-selection scores into account. Specifically, we select clean training instances where the gold answer text appears only once and train an initial QA model. In this stage, we use RoBERTa (Liu et al., 2019) as our backbone model; other BERT variants, e.g., SpanBERT (Joshi et al., 2019) or DeBERTa (He et al., 2020), could also be used in this module. Our goal is to obtain a span s in a given expanded table cell c, with its filtered passage p, for the input question q. We compute a span representation as follows:

$$h_{start}=\text{RoBERTa}_{r}(q,c)[\text{START}(s)],\tag{5}$$
$$h_{end}=\text{RoBERTa}_{r}(q,c)[\text{END}(s)],\tag{6}$$
$$S_{span}(q,p)=\text{MLP}([h_{start},h_{end}]).\tag{7}$$

We also consider the other cells in the same row as the retrieved candidate gold cells as necessary context. We linearize and concatenate the row into a passage with the designed template: "The <column header> is <cell content>". We retrieve the top-k cells and thus have k samples.
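As an illustration of the two steps just described, the following is a minimal sketch of combining row and column probabilities into cell scores (Section 3.4.2) and of linearizing a row with the template above; the function names and example values are ours, not from the released code.

```python
from typing import List, Tuple

def top_k_cells(row_probs: List[float], col_probs: List[float],
                k: int = 5) -> List[Tuple[int, int, float]]:
    """Score every cell (i, j) by p_row[i] + p_col[j] and return the k best.

    row_probs and col_probs are the per-row / per-column answer probabilities
    P_r and P_c produced by the two RCI classifiers.
    """
    scored = [(i, j, pr + pc)
              for i, pr in enumerate(row_probs)
              for j, pc in enumerate(col_probs)]
    return sorted(scored, key=lambda x: x[2], reverse=True)[:k]

def linearize_row(headers: List[str], row: List[str]) -> str:
    """Turn a table row into a pseudo-passage:
    'The <column header> is <cell content>.' for each cell."""
    return " ".join(f"The {h} is {c}." for h, c in zip(headers, row))

# Illustrative usage with made-up probabilities and a row from the Figure 1 example.
print(top_k_cells([0.1, 0.8, 0.1], [0.2, 0.7, 0.1], k=3))
print(linearize_row(["Rank", "Player", "Yards"], ["2", "Walter Payton", "16,726"]))
```

Each of the resulting k linearized rows (plus its filtered passage) then forms one training or inference sample for the passage QA module.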
Since not all selected cells contain the gold answer text, we treat one sample as positive and the others as negative samples. For each data point, we generate k samples and match these with the answer text. Let K = {qi, Ai, P + i , P − i,1 , , P − i,k−1} k i=1 be the training data that consist of k instances, where k is the | Split | Train | Dev. | Test | Total | |------------|---------|--------|--------|---------| | In-Passage | 35215 | 2025 | 2045 | 39285 | | In-Table | 26803 | 1349 | 1346 | 29498 | | Compute | 664 | 92 | 72 | 864 | | Total | 62682 | 3466 | 3463 | 69611 | number of selected candidate cells. Each instance contains one question qi, the gold answer text Ai, and one correct (positive) passage text P + i , along with k − 1 wrong passages P − i,j . For positive samples, the answer is the text span of the passage, while for negative samples, the answers are -1. ## 4 Experiments 4.1 Datasets HybridQA (Chen et al., 2020b) is the first largescale multi-hop QA dataset that requires reasoning over hybrid knowledge, including tables and linked Wikipedia passages. The dataset contains 62,682 instances in the training set, 3,466 instances in the development set, and 3,463 instances in the test set. WikiTableQuestions (Pasupat and Liang, 2015a), WTQ for short, consists of 22033 complex questions and 2108 semi-structured Wikipedia tables. The questions are designed by crowdsourcing to contain a wide range of domains. The answers are derived from several operations such as table lookup, aggregation, superlatives, arithmetic operations, joins, and unions. To verify the performance of TACR, we first conduct experiments on HybridQA (Chen et al., 2020b), a dataset of multi-hop question-answering over tabular and textual data. The basic statistics of HybridQA are listed in Table 1. The dataset contains three partitions: 'In-Table', where the answer derives from table cell values; 'In-Passage', where the answer exists in a linked passage; and 'Compute', where the answer should be computed by executing numerical operations. We mainly focus on the first two types. We also provide results over WTQ to illustrate TACR's capabilities in tablefocused QA. ## 4.2 Baselines MQA-QG, proposed by (Pan et al., 2020), is an unsupervised question-generation framework that generates multi-hop questions from tables and linked passages, and uses the generated questions to train an HQA model. Table-Only (Chen et al., 2020b) only retrieves the tabular information to find an answer by parsing the question into a symbolic form and executing it. Passage-Only (Chen et al., 2020b) only retrieves answers from the table-linked passages. Hybrider (Chen et al., 2020b) addresses HQA using a two-stage pipeline framework to retrieve the gold table cell and extract an answer in its value or linked passages. Dochopper (Sun et al., 2021b) first converts a table with its hyperlinked passages into a long document then concatenates column headers, cell text, and linked passages in each row of tables as a paragraph. MATE (Eisenschlos et al., 2021) applies sparse attention to rows and columns in a table. To apply it to the HybridQA dataset, the authors propose a PointR module, which expands a cell using the description of its entities, selects the golden cells, then retrieves answers from them. MITQA (Kumar et al., 2021) designs a multiinstance training method based on distant supervision to filter the noisy information from multiple answer spans. 
## 4.3 Quantitative Analysis We use exact match (EM) and F1 scores as evaluation metrics on the HybridQA dataset to compare the performance of TACR with that of previous baselines. As shown in Table 2, TACR outperforms most baselines and achieved competitive performance to state-of-the-art (SOTA) models (e.g., MITQA) in both EM and F1 scores over the HybridQA dataset. Table 3 reports the accuracy performance on WTQ. Though TACR is trained on a base model, it presents comparable accuracy to the large SOTA models and outperforms other base models. It is important to note that, besides both using much larger LMs than TACR (GPT-3 and BARTlarge respectively, versus RoBERTa-base), neither Binder nor Omnitab-large provide explainability. With the help of the table-question-alignment module, TACR boosts relative accuracy by +18.5% on the test set compared with RCI (Glass et al., 2021), which is also based on cell selection. This competitive performance is mainly based on the high cell selection along with table-question alignment. We further verified the effectiveness of the tablequestion-alignment module in an ablation study discussed in Section 4.5. ## 4.4 Qualitative Analysis We compare the cell-selection accuracy of TACR and baseline models, as shown in Table 4. The high cell selection accuracy is based on the high row- and column-selection accuracies shown in Table 6. On the HybirdQA dataset, TACR shows SOTA performance and 0.4% higher than that of MATE (Eisenschlos et al., 2021) in the top 3 cellselection accuracies due to its 89.3% row-selection accuracy and 98.3% column-selection accuracy, as shown in Table 6. Moreover, by achieving soft question decomposition (i.e., showing which parts of questions are connected to reasoning in the different modalities), TACR both improves the explainability of its results and provides valuable signals for future improvements. ## 4.5 Ablation Study To evaluate the impact of the table-questionalignment module, we conduct an ablation study, shown in Table 5. We test DeBERTa-base, ALBERT-base, and RoBERTa-base models as TACR backbones for generality. Different top-k results show that the alignment module consistently significantly improves results; with the best model based on ALBERT improving cell-selection accuracy by 2.5, 3.9, and 4.3% in top 1, 3, and 5 cell selection respectively; and mean reciprocal rank (MRR) improving by 3.7%. The results indicate that the table-question-alignment module has an important role in the table-question-reasoning stage to select the most related cells that support the answer to the question. ## 4.6 Case Study To illustrate TACR can successfully learn which parts of tables contain golden cells and which parts of questions are required for reasoning in the different modalities, we choose two examples from the HybridQA development set. Appendix B includes Figures 4 and 5 showing their word relevances heatmap and analysis. The question in Case 1 is *"Who is the athlete in* a city located on the Mississippi River ?". The concatenated table headers string for the corresponding table is *"Year Score Athlete Place"*. The tablequestion-alignment module helps TACR learn that header terms *"Athlete"* and *"Place"* have higher relevance to the question than the headers of other columns, thus guiding cell-selection. Figure 4 shows its relevance heatmap. TACR again learns | Model | Dev. 
| Test | | | | | | | | | | | |-----------------------|------------|--------|----------|------------|-------|------|------|------|------|------|------|------| | In-Table | In-Passage | Total | In-Table | In-Passage | Total | | | | | | | | | EM | F1 | EM | F1 | EM | F1 | EM | F1 | EM | F1 | EM | F1 | | | Table-Only | 14.7 | 19.1 | 2.4 | 4.5 | 8.4 | 12.1 | 14.2 | 18.8 | 2.6 | 4.7 | 8.3 | 11.7 | | Passage-Only | 9.2 | 13.5 | 26.1 | 32.4 | 19.5 | 25.1 | 8.9 | 13.8 | 25.5 | 32.0 | 19.1 | 25.0 | | Hybrider (τ=0.8) | 54.3 | 61.4 | 39.1 | 45.7 | 44.0 | 50.7 | 56.2 | 63.3 | 37.5 | 44.4 | 43.8 | 50.6 | | PointR + SAT | 66.5 | 71.8 | 60.3 | 69.2 | 61.2 | 68.7 | 64.6 | 70.1 | 59.6 | 68.5 | 60.1 | 67.4 | | PointR + TAPAS | 68.1 | 73.9 | 62.9 | 72.0 | 63.3 | 70.8 | 67.8 | 73.2 | 62.0 | 70.9 | 62.7 | 70.0 | | PointR + TABLEETC | 36.0 | 42.4 | 37.8 | 45.3 | 36.1 | 42.9 | 35.8 | 40.7 | 38.8 | 45.7 | 36.6 | 42.6 | | PointR + LINFORMER | 65.5 | 71.1 | 59.4 | 69.0 | 60.8 | 68.4 | 66.1 | 71.7 | 58.9 | 67.8 | 60.2 | 67.6 | | PointR + MATE | 68.6 | 74.2 | 62.8 | 71.9 | 63.4 | 71.0 | 66.9 | 72.3 | 62.8 | 71.9 | 62.8 | 70.2 | | MQA-QG (unsupervised) | - | - | - | - | - | - | 36.2 | 40.6 | 19.8 | 25.0 | 25.7 | 30.5 | | Dochopper | - | - | - | - | 47.7 | 55.0 | - | - | - | - | 46.3 | 53.3 | | MITQA | 68.1 | 73.3 | 66.7 | 75.6 | 65.5 | 72.7 | 68.5 | 74.4 | 64.3 | 73.3 | 64.3 | 71.9 | | MuGER2 | 58.2 | 66.1 | 52.9 | 64.6 | 53.7 | 63.6 | 56.7 | 64.0 | 52.3 | 63.9 | 52.8 | 62.5 | | TACR (ours) | 66.7 | 70.3 | 63.4 | 72.5 | 64.5 | 71.6 | 64.1 | 69.6 | 65.4 | 70.7 | 66.2 | 70.2 | | Human | 88.2 | 93.5 | | | | | | | | | | | Table 2: EM and F1 results of models on the HybridQA dataset. In-Table and In-Passage subsets refer to the location of answers. | Model | Dev | Test | |-------------------------------------------------------|-------|--------| | TAPEX-Large (Liu et al., 2021) | 57.0 | 57.5 | | Binder (Cheng et al., 2022) | 65.0 | 64.6 | | OmniTab-Large (Jiang et al., 2022) | 62.5 | 63.3 | | TAPAS_base (pre-trained on SQA) (Herzig et al., 2020) | - | 48.8 | | UnifiedSKG (Xie et al., 2022) | 50.7 | 49.3 | | TaBERT_base (Yin et al., 2020) | 51.6 | 51.4 | | RCI (Glass et al., 2021) | 45.3 | 41.7 | | TACR_RoBERTa-base (ours) | 58.9 | 60.2 | Table 3: Execution-accuracy results of models on WTQ Table 4: Comparison of cell-retrieval results on HybridQA dataset (dev set) which parts of the question account for retrieving evidence in tables. The question in Case 2 is *"What is the middle* name of the player with the second most National Football League career rushing yards ?". The concatenated table headers string for it is *"Rank* Player Team(s) by season Carries Yards Average". The table-question-alignment module helps TACR learn that the sub-question *"the player with the second most National Football League career rushing* yards" has a higher relevance to the table headers than that of other parts of the original question, thus guiding modality relevance. Figure 5 shows | Model | Hits@1 | Hits@3 | Hits@5 | |---------------------------------|----------|----------|----------| | TABLEETC (Ainslie et al., 2020) | 51.1 | 72.0 | 78.9 | | LINFORMER (Wang et al., 2020) | 77.1 | 86.5 | 90.0 | | MATE (Eisenschlos et al., 2021) | 80.1 | 86.2 | 90.5 | | TACR (ours) | 83.3 | 87.8 | 91.2 | ## Its Relevance Heatmap. 4.7 Error Analysis To further analyze TACR, we also calculate statistics for error cases in the model predictions. The error statistics are based on the development set of HybridQA. 
Through the cell-selection accuracy statistics in Table 4, we find there are 347 tables whose cells are incorrectly selected. To better understand the advantages and disadvantages of table-question alignment-based cell selection, we manually sample and examined 20 such error cases (i.e., where TACR does not provide the correct answer in the correct row, column, and cell position). Out of the 20 samples, we find that five error cases (25%) are due to requiring numerical reasoning operations that cross several cells (which is out of scope for TACR). The majority of errors, 13 of the remaining incorrect cases, are in the same column with a correct answer while in the wrong row. Only one case is from a different row but the same column with the correct answer and only one incorrect case is in a completely different row and column to the correct answer. ## 5 Conclusion This paper presents TACR, a Table question Alignment-based cell selection and Reasoning model for hybrid text and table QA, evaluated on the HybridQA and WikiTableQuestions datasets. When answering questions given retrieved table Model MRR Hits@1 Hits@3 Hits@5 TACR-DeBERT_base w/o alignment 78.9 74.9 79.4 83.7 TACR-Roberta_base w/o alignment 80.7 74.3 82.6 84.4 TACR-ALBERT_base w/o alignment 80.1 77.1 82.8 85.4 TACR-DeBERTa_base w/ alignment 82.4 78.3 83.4 86.2 TACR-RoBERTa_base w/ alignment 82.5 76.5 85.5 88.9 TACR-ALBERT_base w/ alignment **83.8 79.6 86.7 89.7** Table 5: Ablation study of table-question-alignment module impact. Experiment results of cell-retrieval on HybridDQA (dev set) show the effectiveness of this module in the table-cell-selection stage. Table 6: Performance of TACR with different backbone models. Top-k rows and columns selection accuracies on HybridQA and WTQ datasets, where k=1, 3, 5. Results demonstrate the effectiveness of TACR. cells and passages, TACR attempts to align multihop questions to different modalities for correct evidence retrieval. To enhance the QA module with better table cell-selection and table-questionalignment ability, we construct a hybrid alignment dataset generated from the HybridQA dataset. TACR shows state-of-the-art performance in retrieving intermediate gold table cells and competitive performance on the HybridQA and WikiTableQuestions datasets, while improving output explainability. ## 6 Limitations | Model | HybridQA | WTQ | | | |-------------------|------------|-------|------|------| | Row | Col | Row | Col | | | top 1 | | | | | | TACR_DeBERTa_base | 85.1 | 95.3 | 53.2 | 93.9 | | TACR_ALBERT_base | 86.7 | 96.1 | 56.8 | 94.4 | | TACR_RoBERTa_base | 86.0 | 96.2 | 52.3 | 94.7 | | top 3 | | | | | | TACR_DeBERTa_base | 86.2 | 96.2 | 57.6 | 94.2 | | TACR_ALBERT_base | 88.3 | 97.1 | 62.4 | 95.1 | | TACR_RoBERTa_base | 87.9 | 97.3 | 59.3 | 94.9 | | top 5 | | | | | | TACR_DeBERTa_base | 87.5 | 97.8 | 59.1 | 94.8 | | TACR_ALBERT_base | 89.9 | 98.3 | 68.1 | 95.4 | | TACR_RoBERTa_base | 89.3 | 98.4 | 64.5 | 95.2 | In this paper, we focus on the hybrid QA task, where the answers to most questions can be extracted from cell values in tables and linked passages using a reading comprehension model. Although TACR performs well in cell selection, one of its limitations is that it lacks numerical reasoning ability across different cells, such as counting and comparing. To enable TACR to answer numerical questions, we will further develop its numerical reasoning capabilities in future work. 
Another limitation of TACR is that it shows a strong ability in column selection while performing relatively worse in row selection. For future work, we plan to try to improve its row-selection accuracy. ## References Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: Encoding long and structured inputs in transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 268–284, Online. Association for Computational Linguistics. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics. Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Yang Wang, and William W. Cohen. 2020a. Open question answering over tables and text. *ArXiv*, abs/2010.10439. Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2019. Tabfact: A largescale dataset for table-based fact verification. *arXiv* preprint arXiv:1909.02164. Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Wang. 2020b. HybridQA: A dataset of multi-hop question answering over tabular and textual data. *arXiv preprint* arXiv:2004.07347. Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, R.K. Nadkarni, Yushi Hu, Caiming Xiong, Dragomir R. Radev, Marilyn Ostendorf, Luke Zettlemoyer, Noah A. Smith, and Tao Yu. 2022. Binding language models in symbolic languages. *ArXiv*, abs/2210.02875. Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, and William W. Cohen. 2021. MATE: Multiview attention for table transformer efficiency. In Conference on Empirical Methods in Natural Language Processing. Michael R. Glass, Mustafa Canim, A. Gliozzo, Saneem A. Chemmengath, Rishav Chakravarti, Avirup Sil, Feifei Pan, Samarth Bharadwaj, and Nicolas Rodolfo Fauceglia. 2021. Capturing row and column semantics in transformer based question answering over tables. In *North American Chapter of* the Association for Computational Linguistics. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decodingenhanced BERT with disentangled attention. *ArXiv*, abs/2006.03654. Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Martin Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. *ArXiv*, abs/2004.02349. Sujay Kumar Jauhar, Peter Turney, and Eduard Hovy. 2016. Tables as semi-structured knowledge for question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 474–483, Berlin, Germany. Association for Computational Linguistics. Zhengbao Jiang, Yi Mao, Pengcheng He, Graham Neubig, and Weizhu Chen. 2022. Omnitab: Pretraining with natural and synthetic data for few-shot tablebased question answering. In *NAACL*. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2019. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. 
In *Annual Meeting of the Association for Computational Linguistics*. Vishwajeet Kumar, Saneem A. Chemmengath, Yash Gupta, Jaydeep Sen, Samarth Bharadwaj, and Soumen Chakrabarti. 2021. Multi-instance training for question answering across table and linked text. ArXiv, abs/2112.07337. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for selfsupervised learning of language representations. ArXiv, abs/1909.11942. Aiwei Liu, Xuming Hu, Li Lin, and Lijie Wen. 2022. Semantic enhanced text-to-sql parsing via iteratively learning schema linking graph. *Proceedings of the* 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Qian Liu, Bei Chen, Jiaqi Guo, Zeqi Lin, and JianGuang Lou. 2021. TAPEX: Table pre-training via learning a neural SQL executor. *ArXiv*, abs/2107.07653. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *ArXiv*, abs/1907.11692. Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, and William Yang Wang. 2020. Unsupervised multi-hop question answering by question generation. In North American Chapter of the Association for Computational Linguistics. Panupong Pasupat and Percy Liang. 2015a. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470– 1480, Beijing, China. Association for Computational Linguistics. Panupong Pasupat and Percy Liang. 2015b. Compositional semantic parsing on semi-structured tables. In Annual Meeting of the Association for Computational Linguistics. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *ArXiv*, abs/1910.10683. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. In *Annual Meeting of the Association for* Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. *arXiv preprint* arXiv:1606.05250. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. ArXiv, abs/1908.10084. Haitian Sun, William W. Cohen, and Ruslan Salakhutdinov. 2021a. End-to-end multihop retrieval for compositional question answering over long documents. ArXiv, abs/2106.00200. Haitian Sun, William W. Cohen, and Ruslan Salakhutdinov. 2021b. Iterative hierarchical attention for answering complex questions over long documents. Huan Sun, Hao Ma, Xiaodong He, Wen tau Yih, Yu Su, and Xifeng Yan. 2016. Table cell search for question answering. Proceedings of the 25th International Conference on World Wide Web. Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2019. Rat-sql: Relation-aware schema encoding and linking for textto-sql parsers. In *Annual Meeting of the Association* for Computational Linguistics. Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity. *ArXiv*, abs/2006.04768. Yingyao Wang, Junwei Bao, Chaoqun Duan, Youzheng Wu, Xiaodong He, and Tiejun Zhao. 2022. 
MuGER2: Multi-granularity evidence retrieval and reasoning for hybrid question answering. *ArXiv*, abs/2210.10350. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir R. Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, and Tao Yu. 2022. UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. ArXiv, abs/2201.05966. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics. Pengcheng Yin, Graham Neubig, Wen tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data. *ArXiv*, abs/2005.08314. Tao Yu, Rui Zhang, Kai-Chou Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Z Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. In Conference on Empirical Methods in Natural Language Processing. Guangzhen Zhao and Peng Yang. 2022. Table-based fact verification with self-labeled keypoint alignment. In *International Conference on Computational Linguistics*. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103. Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Seq2sql: Generating structured queries from natural language using reinforcement learning. ArXiv, abs/1709.00103. Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and TatSeng Chua. 2021. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance. *arXiv preprint arXiv:2105.07624*. ## A Passage Filtering Passage filtering plays an important role in cell selection as well as answer extraction. Pre-trained language models such as BERT, RoBERTa, and LLMs have the limitation of max input sequence length. Passage filtering ensures that it is unlikely to lose information relevant to the questions, while fitting model input limits. We used the well-trained DistilBert-based model to obtain question and passage embeddings to rank and filter relevant passages.1 ## B Alignment Analysis Here we provide example heatmaps showing the relevance of questions and table headers. The relevance is in the [0,1] range, where the higher relevance between words from questions and column headers is shown in the warmer colors and vice versa. 
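As a purely illustrative companion to these heatmaps, the sketch below renders a question-token-by-column-header relevance matrix with matplotlib; the token lists and scores are placeholders, since the real scores would come from TACR's table-question-alignment module.

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder relevance scores in [0, 1]; in practice these would come from
# the table-question-alignment module for a single example.
question_tokens = ["who", "is", "the", "athlete", "in", "a", "city",
                   "located", "on", "the", "mississippi", "river", "?"]
column_headers = ["Year", "Score", "Athlete", "Place"]
relevance = np.random.rand(len(column_headers), len(question_tokens))

fig, ax = plt.subplots(figsize=(9, 2.5))
image = ax.imshow(relevance, cmap="YlOrRd", vmin=0.0, vmax=1.0)  # warmer = more relevant
ax.set_xticks(range(len(question_tokens)))
ax.set_xticklabels(question_tokens, rotation=45, ha="right")
ax.set_yticks(range(len(column_headers)))
ax.set_yticklabels(column_headers)
fig.colorbar(image, ax=ax, label="relevance")
fig.tight_layout()
plt.show()
```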
Figure 4 shows that the column headers "athlete" and "place" have more relevance to the question, which helps TACR identify which columns contain potential gold cells. In Figure 5, the words "player with second most national football league" from the question have more relevance to the columns, which helps TACR learn which parts of the question are most useful for retrieving gold cells.

1 https://huggingface.co/sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco

## C Implementation Details Of Cell Selection And Alignment

TACR is implemented using PyTorch version 1.13 and the Hugging Face Transformers (Wolf et al., 2020) library. We trained TACR using two NVIDIA A6000 GPUs. The cell-selection and table-question-alignment modules are trained for four epochs, and we selected the best model based on dev-fold performance. AdamW is used as the optimizer with a learning rate of 5e-5 and a batch size of 32. We set the per-GPU train batch size to 16 while training the span-based QA model. Final answers are evaluated using EM and F1 scores. We also automatically iterated through increments of 0.1 in the range [0, 1] to select the best σ to balance the multi-task training.

Hyper-parameter Details: We tune hyperparameters based on the loss on the development set and use the following ranges of values for selecting the best hyper-parameters:

- Batch size: [8, 16, 32, 64]
- Learning rate: [1e-3, 1e-4, 1e-5, 1e-6, 3e-3, 3e-4, 3e-5, 3e-6, 5e-3, 5e-4, 5e-5, 5e-6]
- σ: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]

![12_image_0.png](12_image_0.png)

## ACL 2023 Responsible NLP Checklist

A For Every Submission:

✓ A1. Did you describe the limitations of your work? section 6

✓ A2. Did you discuss any potential risks of your work? Left blank.

✓ A3. Do the abstract and introduction summarize the paper's main claims? section a

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3

✓ B1. Did you cite the creators of artifacts you used? references

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? appendix

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? section 3

✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank.

✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? section 2

✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
section 4 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
shaikh-etal-2023-modeling
Modeling Cross-Cultural Pragmatic Inference with Codenames Duet
https://aclanthology.org/2023.findings-acl.410
Pragmatic reference enables efficient interpersonal communication. Prior work uses simple reference games to test models of pragmatic reasoning, often with unidentified speakers and listeners. In practice, however, speakers' sociocultural background shapes their pragmatic assumptions. For example, readers of this paper assume NLP refers to Natural Language Processing, and not "Neuro-linguistic Programming." This work introduces the Cultural Codes dataset, which operationalizes sociocultural pragmatic inference in a simple word reference game. Cultural Codes is based on the multi-turn collaborative two-player game, Codenames Duet. Our dataset consists of 794 games with 7,703 turns, distributed across 153 unique players. Alongside gameplay, we collect information about players' personalities, values, and demographics. Utilizing theories of communication and pragmatics, we predict each player's actions via joint modeling of their sociocultural priors and the game context. Our experiments show that accounting for background characteristics significantly improves model performance for tasks related to both clue-giving and guessing, indicating that sociocultural priors play a vital role in gameplay decisions.
# Modeling Cross-Cultural Pragmatic Inference With Codenames Duet Omar Shaikh⋆† Caleb Ziems⋆† William Held ‡ **Aryan J. Pariani** ‡ Fred Morstatter ⋄ **Diyi Yang** † †Stanford University, ‡Georgia Institute of Technology, ⋄USC Information Sciences Institute {oshaikh, cziems, diyiy}@stanford.edu {wheld3, apariani3}@gatech.edu fred@isi.edu ## Abstract Pragmatic reference enables efficient interpersonal communication. Prior work uses simple reference games to test models of pragmatic reasoning, often with unidentified speakers and listeners. In practice, however, speakers' sociocultural background shapes their pragmatic assumptions. For example, readers of this paper assume NLP refers to "Natural Language Processing," and not "Neuro-linguistic Programming." This work introduces the CULTURAL CODES dataset, which operationalizes sociocultural pragmatic inference in a simple word reference game. CULTURAL CODES is based on the multi-turn collaborative two-player game, *Codenames* Duet. Our dataset consists of 794 games with 7,703 turns, distributed across 153 unique players. Alongside gameplay, we collect information about players' personalities, values, and demographics. Utilizing theories of communication and pragmatics, we predict each player's actions via joint modeling of their sociocultural priors and the game context. Our experiments show that accounting for background characteristics significantly improves model performance for tasks related to both clue giving and guessing, indicating that sociocultural priors play a vital role in gameplay decisions. ## 1 Introduction "*Most of our misunderstandings of other* people are not due to any inability to... understand their words... [but that] we so often fail to understand a speaker's intention." ## - **George Armitage Miller** (1974) Certain pragmatic inferences can only be interpreted by individuals with shared backgrounds. ⋆Equal contribution. ![0_image_0.png](0_image_0.png) Figure 1: **An example interaction where difference** in sociocultural background results in misinterpretation. Steps 1-5 outline high-level gameplay tasks. THE CLUE GIVER targets the words *fall* and *drop*, giving the hint *slip*. THE GUESSER misinterprets *slip* as a piece of paper, guessing *reciept* and *check*. For example, what researchers call fun may not be fun for kindergartners. Theories from sociolinguistics, pragmatics, and communication aim to explain how sociocultual background affects interpersonal interaction (Schramm, 1954)— especially since variation occurs across several dimensions: class (Bernstein, 2003; Thomas, 1983), age (Labov, 2011), gender (Eckert and McConnellGinet, 2013), race (Green, 2002), and more. Rigorously modeling how culture affects pragmatic inference on all axes is understandably challenging. The board game *Codenames Duet* offers a more restricted setting of turn-based word reference between two players. In each round, THE CLUE GIVER provides a single-word clue; then THE GUESSER must interpret this clue to select the intended word references on the game board. Ideal inferences come from the players' common ground—the set of shared beliefs between them (Clark, 1996). In practice, however, a player's behavior can be idiosyncratic. Each player has knowledge and experience that shape how they interpret clues and make guesses. When players' backgrounds differ, they may be more likely to misinterpret their partner, as seen in Figure 1. 
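To make the turn structure sketched in Figure 1 concrete, here is a rough Python representation of a single Codenames Duet turn (steps 1-5 above); the class and field names are our own illustrative choices, not the released dataset schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Turn:
    """One clue-giver/guesser exchange, loosely following steps (1)-(5)."""
    targets: List[str]                  # (1) goal words the clue giver targets
    clue: str                           # (2) single-word clue shown to the guesser
    target_rationales: List[str]        # (3) hidden rationale linking clue to each target
    guesses: List[str] = field(default_factory=list)            # (4) guesser's selections
    guess_rationales: List[str] = field(default_factory=list)   # (5) rationale per guess

    def is_successful(self) -> bool:
        # A guess only counts as correct if the clue giver listed it as a target.
        return bool(self.guesses) and all(g in self.targets for g in self.guesses)


example = Turn(targets=["fall", "drop"], clue="slip",
               target_rationales=["slip means to fall", "a drop can make you slip"],
               guesses=["receipt", "check"],
               guess_rationales=["a receipt is a slip of paper",
                                 "a check is a slip you sign"])
print(example.is_successful())  # False: the clue was misinterpreted, as in Figure 1
```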
Inspired by the above, we model the role of sociocultural factors in pragmatic inference with a new task and a series of ablation experiments. First, we describe the CULTURAL CODES dataset of cross-cultural *Codenames Duet* gameplay, with relevant background information from the players' demographics, personalities, and political and moral values (§3). Then, we deconstruct each action in a game into a distinct modeling task, taking inspiration from work on cross-cultural pragmatics (§4). Finally, we model each task with/without sociocultural priors, and highlight how player background improves model performance (§6). Our dataset and code is released publicly at https: //github.com/SALT-NLP/codenames ## 2 Related Work Cross-Cultural Pragmatics and NLP Pragmatics describes the nonliteral meaning that comes from context and social inference (Purpura, 2004; Thomas, 1983; Hatch et al., 1992). Although some pragmatic categories are universal (e.g., politeness), they can be expressed differently in sociocultural contexts (Taguchi, 2012; Shoshana et al., 1989; Gudykunst and Kim, 1984). When an intended meaning is misinterpreted, this is known as 'pragmatic failure' (Thomas, 1983)—often the result of misaligned reference frames or differences in common ground (Stadler, 2012; Crawford et al., 2017). Especially relevant to Codenames are communal lexicons, where common ground manifests in shared community vocabulary (Clark, 1998). Another axis of difference is between low/highcontext cultures (Hofstede, 2001); high-context cultures rely more on shared background. Pragmatics also differs by age (Saryazdi et al., 2022), region, ethnicity, politics, and class (Thomas, 1983), as does theory of mind (Fiske and Cox, 1979; Miller, 1984; Shweder, 1984; Lillard, 1998, 1999). Outside of work on politeness (Sperlich et al., 2016; Fu et al., 2020), sarcasm (Joshi et al., 2016), and irony (Karoui et al., 2017), the NLP subfield has not closely considered cross-cultural pragmatics. While there is work on understanding the role of individual culture—for example, learning demographic word vectors (Garimella et al., 2017), identifying deception/depression (Soldner et al., 2019; Loveys et al., 2018), or improving translation (Specia et al., 2016)—modeling **cross**-cultural pragmatic inference in communication remains a challenge (Hershcovich et al., 2022). Still, a culture-free pragmatics has played a central role in various NLP tasks, from instructionfollowing (Fried et al., 2018), image captioning (Andreas and Klein, 2016), persona-consistent dialogue (Kim et al., 2020), and summarization (Shen et al., 2019). Much of this work is grounded in Bayesian models of cognition (Griffiths et al., 2008), with models like *Bayesian Teaching* (Eaves Jr et al., 2016), *Naive Utility Calculus* (Jara-Ettinger et al., 2016; Jern et al., 2017), and the *Rational Speech Acts* (RSA) model (Goodman and Frank, 2016; Franke and Jäger, 2016) that integrate language, world knowledge, and context to explain ideal pragmatic reasoning (Noveck, 2018) and grounded reference (Monroe et al., 2017). Instead of modeling socioculture in isolation, we model pragmatic inference, highlighting the role of culture in general interpersonal interaction. 
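Since the Rational Speech Acts framework is cited above but not spelled out, the following is a textbook-style numpy sketch of its literal-listener/pragmatic-speaker recursion on a toy Codenames-like lexicon; the lexicon values are made up, and this background sketch is not part of this paper's models, which (as noted in the limitations) do not assume recursive reasoning.

```python
import numpy as np

# Toy lexicon: rows are utterances (clues), columns are meanings (board words).
# lexicon[u, m] = 1 if clue u literally applies to word m (values assumed).
utterances = ["slip", "paper"]
meanings = ["fall", "drop", "receipt"]
lexicon = np.array([[1.0, 1.0, 1.0],
                    [0.0, 0.0, 1.0]])
prior = np.full(len(meanings), 1.0 / len(meanings))
alpha = 1.0  # speaker rationality

L0 = lexicon * prior
L0 /= L0.sum(axis=1, keepdims=True)      # literal listener   P_L0(meaning | utterance)
S1 = L0.T ** alpha
S1 /= S1.sum(axis=1, keepdims=True)      # pragmatic speaker  P_S1(utterance | meaning)
L1 = S1.T * prior
L1 /= L1.sum(axis=1, keepdims=True)      # pragmatic listener P_L1(meaning | utterance)

for u, row in zip(utterances, L1):
    print(u, dict(zip(meanings, row.round(3))))
```

On this toy lexicon, the pragmatic listener shifts probability for "slip" toward fall/drop and away from receipt, because a speaker who meant receipt would more likely have said "paper".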
Games as Testbeds for AI A significant body of work focuses on modeling optimal *strategy* across a wide set of games, including Go (Silver et al., 2016), Chess (Schrittwieser et al., 2020), Poker (Brown and Sandholm, 2017), Diplomacy (, FAIR), D&D (Callison-Burch et al., 2022; Zhou et al., 2022), and Mafia (Ibraheem et al., 2022). Reference games are growing in popularity as testbeds for AI. Tests for artificial pragmatic reasoning often rely on sequential language games, where two players leverage private knowledge either to compete Yao et al. (2021) or coordinate towards a common goal (Potts, 2012; Khani et al., 2018; Hawkins et al., 2015). In this vein, recent works have considered *Codenames* (Koyyalagunta et al., 2021; Kim et al., 2019; Jaramillo et al., 2020), *Connector* (Ashok Kumar et al., 2021; Kumar et al., 2021; Kovacs et al., 2022) *InfoJigsaw* (Khani et al., 2018), and image-based games (Bao et al., 2022). Word association games have been used in psychology to study semantic associations in cultural (Korshuk, 2007) and religious (Tikhonova, 2014) contexts. We utilize games to model the effect of cross-cultural interactions on pragmatic inference. ## 3 The Cultural Codes **Dataset** This study has been approved by the Institutional Review Board (IRB) at the authors' institution. The purpose of the CULTURAL CODES dataset is to understand how measurable social factors influence dyadic communication *in English*. By collecting relevant participant background information, we aim to understand how these factors affect linguistic reasoning in a collaborative reference game. ## 3.1 Codenames Duet **Game Overview** Codenames Duet is a collaborative variant of *Codenames* (Vlaada, 2015) designed for 2 players. The players share a 5 × 5 board of 25 common words. Each player has a distinct (but sometimes partially overlapping) map from words on the board to the following objectives: goal, **neutral**, and avoid. One player's map is hidden from the opposing player. The objective of the game is for both players to guess all of their partner's **goal** words without guessing any of their partner's **avoid** words, as doing so results in an immediate loss. CULTURAL CODES uses an adapted version of Codenames Duet. With each turn, players alternate between the THE CLUE GIVER and THE GUESSER roles. To begin the turn, THE CLUE GIVER (1) selects one or more associated **goal** words as targets. Next, THE CLUE GIVER (2) provides a single word clue that relates to the associated target(s). This clue is displayed to THE GUESSER, along with the number of targets she should find. The THE CLUE GIVER also (3) provides a justifying *rationale* for the clue, describing the relationship between the clue and the target(s). This *rationale* is not displayed to the partner. Using the clue and the number of target words THE GUESSER (4) guesses targeted words. For each guess, THE GUESSER (5) provides a justifying *rationale* for the guess. After ending the turn, players alternate roles and continue until all **goal** words are selected for both sides, or players are eliminated for guessing an avoid word. An overview of roles is illustrated in Figure 1. In §4, we formalize actions **(1)-(4)** as distinct modeling tasks. ## 3.2 Selecting Board Game Words All experiments are run on a strategically filtered subset of the 400 words from *Codenames Duet*. We select the 100 most abstract and semantically ambiguous board game words to elicit diverse responses from players. 
Since the *polysemy* (Ravin and Leacock, 2000) of a word—the number of related senses it includes—predicts the expected diversity of player responses, we retain only nouns with two or more senses in WordNet (Miller, 1992). Next, we rank polysemous words with Brysbaert et al. (2014)'s concreteness list, selecting the 100 most abstract words according to the mean of their human concreteness scores (finalized list can be found in Appendix A.) When a player starts a game, we initialize the board with a random subset of 25 words from the filtered 100. For each player, 9 words are randomly mapped to **goal**, 3 are **avoid**, and 13 are **neutral**. ## 3.3 Gameplay Data To collect gameplay data, we modified an opensource implementation of *Codenames Duet*, 1automatically pairing individuals who visited the game website. To source players, we relied on Amazon's Mechanical Turk. We provided MTurkers with an initial instruction video detailing rules and how to play. To be eligible for the task, Turkers had to get ≥ 80% questions right on a qualifying quiz about Codenames rules and gameplay (Appendix D.1). Average game length was around 17.4 minutes, and MTurkers were paid $2.50 for every game. Gameplay Attributes For each completed turn, we collected the following game state information from THE CLUE GIVER. Elements marked in gray were hidden from THE GUESSER. Clue: THE CLUE GIVER's clue c (e.g. c could be "*transport*" for the target "car"). Target Word(s): (Hidden) The target words tn (e.g. "car") that THE CLUE GIVER intended THE GUESSER to guess. Target Word(s) Rationale(s): (Hidden) A free-text phrase rn, that describes the relationship between each target word tn and the clue c (e.g. "*a car is a mode of transport*"). To summarize, each turn from THE CLUE GIVER results in a clue c and at least one target-rationale pair (tn, rn). On the other hand, we collect the following for THE G**UESSER**. Guesses: The guesses gn that THE GUESSER selected for THE CLUE GIVER's clue c. 1https://github.com/jbowens/ codenamesgreen ![3_image_0.png](3_image_0.png) Rationale for Each Guess: A free-text phrase rn that relates the guess gn to the clue c Manual inspection revealed a wide range of rationales. To prevent models from exploiting variance, we instructed GPT-3 to normalize text, removing pronouns and determiners.2 We provided few-shot examples of reformatted rationales and manually inspected normalized outputs. Additional preprocessing information can be found in Appendix B. ## 3.4 **Sociocultural Priors And Worker Diversity** Because we aim to understand the role of sociocultural priors on gameplay, we asked Turkers to complete the standardized surveys below, which cover three broad dimensions: *demography, personality, and morality*. Demographic Data (Figure 2) comes from both the annotation UI and in the task's qualifying questionnaires. In the UI, we asked Turkers for their numeric age, their country of origin, and whether English is their native language. These were required features, so we will denote them as DemoReq. In the qualifier, we included an extended demographic survey with *age range, level of education, marital status*, and *native language* (Appendix D.2.1), which we will denote as DemoAll. We find that our annotator demographics are moderately diverse, mirroring Moss et al. (2020). Reported gender across annotators are evenly split: 53% identify as women, 47% identify as men, and 0% as other. Additional details are in Figure 2 and Appendix D.2.1. 
Personality (Figure 3) surveys also offer insight into interpersonal interactions. We administer the Big 5 Personality Test (John et al., 1991), measuring a range of personality dimensions on a 5 point 2We use the text-davinci-003 variant from OpenAI. Without GPT-3 normalization, we find that model performance is artificially inflated. ![3_image_1.png](3_image_1.png) ![3_image_2.png](3_image_2.png) Likert Scale. Features include Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Definitions are in Appendix D.2.2. Moral and Political Leaning (Figure 4) also influences decision making processes. Therefore, we asked annotators to self-report their political leaning (liberal, conservative, libertarian, etc). While political leaning captures broad elements of annotator values, Haidt and Graham (2007)'s widely adopted Moral Foundations Theory (MFT) deconstructs values into individual foundations (Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation). Differences in each foundation can stem from cultural variation (Haidt, 2012). To record annotator leaning on MFT, we administer an abridged version of the Moral Foundations Questionnaire (Graham et al., 2008), which reports each dimension on a 5 point Likert scale (see Appendix D.2.3). Later, we refer to all recorded features as **Morality**. | Agent | Task Description | Input | Output | N | |------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|---------------------------------------|-----------|-------| | CLUE GIVER | (1) Target Words Generate, from the goal pi words, a subset of targets ti. Targets are used to generate a single clue word. | {goal} | {targets} | 7,961 | | = {BOS, p1, p2, ..., pn, EOS} | = {BOS, t1, t2, ..., tm, EOS} | | | | | (2) Generating a Clue Generate a one word clue c1 that relates selected target words while avoiding avoid ai and neutral ni words. | {avoid, neutral, targets} = {BOS, AVO, a1, a2, . . . , ao, NEU, n1, ..., nn TGT, t1, t2, . . . , tm, EOS} | {clue} | 7,703 | | | = {BOS, ci, EOS} | | | | | | (3) Framing a Clue Generate reasoning r | that | | | | | frames a candidate clue word ci w.r.t. a target ti word from the set of targets. | {targets, clue, target} | {rationale} | 9,519 | | | = {BOS, TGTS, t1, ..., tn, | = {BOS, r, EOS} | | | | | CLUE, ci, TGT, ti, EOS} | | | | | | GUESSER | (4) Selecting Guess Words Generate a series of guesses {g1, ..., gm} from the unselected words given a clue ci. | {unselected, clue} | {guesses} | 7,703 | | = {BOS, UN, u1, ..., un, | = {BOS, g1, g2, ..., gm, EOS} | | | | | CLUE, ci, EOS} | | | | | | (5) Framing Guesses Generate reasoning r | that | | | | | frames a guess gi (from all guesses) w.r.t. clue ci | {guesses, clue, guess} | {rationale} | 9,382 | | | = {BOS, GUESSES, g1, ..., gn, | = {BOS, r, EOS} | | | | | CLUE, ci, GUESS, gi, EOS} | | | | | | BOTH | Predict Correct Guess Classify if CLUE GIVER message (using target, rationale, and clue) is correctly interpreted by the GUESSER | {unselected, target, rationale, clue} | {T, F} | 9,519 | | = {BOS, UN, g1, ..., gn, TR, ti, ri, CLUE, ci, EOS} | | | | | | Table 1: Tasks associated with a turn in Codenames. 
THE CLUE GIVER starts by selecting information to encode | | | | | Table 1: **Tasks associated with a turn in Codenames.** THE CLUE GIVER starts by selecting information to encode (in the form of a clue), and THE GUESSER decodes clues through guesses. In our experiments, we evaluate models with and without sociocultural priors. Task formulation (generation/classification) is underlined. ## 3.5 General Dataset Statistics In total, we collect 794 games, with a total of 199 wins and 595 losses.3 Games lasted an average of 9.7 turns, resulting in 7,703 total turns across all games. THE CLUE GIVER targeted an average of 1.24 words per turn. For all collected games, both players provided DemoReq. For 54% of games, both players completed all background surveys; for the remaining 46% of games, at least one player completed all surveys. There were no games with no background information. ## 4 Tasks And Modeling To investigate the role of sociocultural factors in pragmatic inference, we propose a set of tasks (Table 1) associated with THE CLUE GIVER (§4.1) and THE GUESSER (§4.2) roles. Concretely, we formalize each action into a conditional generation problem instead of classification, since outputs in CULTURAL CODES are unconstrained: actions and outputs depend on a changing board state. ## 4.1 Modeling The Clue G**Iver** 4.1.1 Selecting Target Words To start, THE CLUE GIVER identifies target word(s) (1) on a board, which are later used to construct a target clue for the inference. Clues will target salient words, where salience is at least partially determined by the speaker's cultural background (Wolff and Holmes, 2011). Each set of targets is a subset of the remaining **goal** words for a given turn (targets ⊆ **goal**)—we enforce this restriction in our annotation UI. ## 4.1.2 Giving A Clue After selecting target words, THE CLUE GIVER must generate a common clue word across the targets (2). Here, THE CLUE GIVER must select a prototypical word across the targets. Because cultural background plays a role in inference (Thomas, 1983), a clue should lie in players' common ground. 
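To ground the encodings in Table 1, below is a small sketch of how one clue-giver turn could be serialized into input/output strings for tasks (1) and (2); the AVO/NEU/TGT markers follow Table 1, but the exact plain-text serialization is our assumption.

```python
# Illustrative serialization of the clue giver's two generation tasks (Table 1).
def target_selection_io(goal_words, targets):
    # (1) {goal} -> {targets}
    return " ".join(goal_words), " ".join(targets)


def clue_generation_io(avoid, neutral, targets, clue):
    # (2) {avoid, neutral, targets} -> {clue}, with AVO / NEU / TGT delimiters.
    source = ("AVO " + " ".join(avoid) +
              " NEU " + " ".join(neutral) +
              " TGT " + " ".join(targets))
    return source, clue


print(target_selection_io(["fall", "drop", "well"], ["fall", "drop"]))
src, tgt = clue_generation_io(avoid=["death"], neutral=["check", "receipt"],
                              targets=["fall", "drop"], clue="slip")
print(src)   # AVO death NEU check receipt TGT fall drop
print(tgt)   # slip
```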
Furthermore, the clue word should not lead the | Priors | Model | Target R-1 | Guess R-1 | Priors | Model | Clue R-1 | fastText cos | |-----------------------------|-----------------------------|--------------|-------------|----------|---------|------------|----------------| | Random | 0.60 | 0.65 | | | | | | | k-NN fastText | N/A | 58.04 | | | | | | | T5 | 32.57 | 64.96 | | | | | | | BART | 31.82 | 63.30 | Random | 0.08 | 5.76 | | | | k-1 fastText | 0.00 | 10.33 | | | | | | | No Priors | T5 | 23.86 | 40.38 | | | | | | BART | 23.00 | 40.97 | | | | | | | ↓ With Sociocultural Priors | ↓ With Sociocultural Priors | | | | | | | | DemoReq | T5 | 25.47 | 42.91 | | | | | | BART | 20.64 | 38.91 | | | | | | | DemoAll | T5 | 25.74 | 42.07 | | | | | | BART | 21.45 | 39.45 | | | | | | | Personality | T5 | 24.13 | 41.00 | | | | | | BART | 23.32 | 41.49 | | | | | | | Morality | T5 | 26.54 | 43.31 | | | | | | BART | 23.59 | 41.39 | | | | | | | All | T5 | 26.27 | 44.03 | | | | | | BART | 24.40 | 41.60 | | | | | | | DemoReq | T5 | 32.71 | 67.25 | | | | | | BART | 29.45 | 65.18 | | | | | | | DemoAll | T5 | 33.14 | 65.24 | | | | | | BART | 32.27 | 66.02 | | | | | | | Personality | T5 | 33.61 | 65.56 | | | | | | BART | 28.55 | 63.14 | | | | | | | Morality | T5 | 34.58 | 64.60 | | | | | | BART | 31.32 | 65.09 | | | | | | | All | T5 | 33.38 | 66.31 | | | | | | BART | 30.17 | 64.78 | | | | | | guesser to pick a avoid ni or **neutral** ei word, since these words can end the game or turn (see §3.1). Therefore, we also include **avoid** and remaining neutral words in our input. ## 4.1.3 Framing The Target Rationales The relationship between the target and clue word plays a critical role in communication—how information is *framed* with respect to common ground can influence pragmatic success (Crawford et al., 2017). To this end, we model THE CLUE GIVER's framing of the rationale r for a specific target word t (3), connecting the target t to the clue (c.f., §3.3). Because the framing is constructed in relation to every target word (if multiple are provided), we also encode all targets in the input. ## 4.2 Modeling The G**Uesser** 4.2.1 Selected Guesses With the clue word, the THE GUESSER pragmatically infers THE CLUE GIVER's targets, selecting a sequence of corresponding guesses (4). For this task, we model the sequence of all selected guesses, regardless of correctness. We input all *unselected*4 words at the start of each turn for THE GUESSER, along with the provided clue. Like with Target Word Selection, guesses must be a subset of the unselected words (guesses ⊆ unselected); we enforce this during annotation. ## 4.2.2 Framing Guess Choice Finally, THE GUESSER also provides framing rationale for their respective guesses, framing clues with respect to their guess (5). ## 4.3 Predicting Pragmatic Success So far, our tasks focus on *replicating* elements of a game turn: the Selected Guesses task (§4.2.1), for example, models both incorrect and correct guesses. However, we also wish to understand if an entire turn sequence results in a **successful** inference; differences in cross-cultural inferences can result in pragmatic failures (Thomas, 1983). We formulate this as binary classification. Importantly, we only consider a guess correct if it is *intentional*. A guess is intentional *if and only if* the clue giver listed it as a target. If THE GUESSER selects a **goal** word that is not a target word, we count it as "incorrect." Like with guess generation, we encode unselected words in the input. 
Because we are not predicting the guess itself, we include game continues. See §3.1. | Target Framing | Guess Framing | | | | | | | | | | | |-----------------------------|-----------------|-------|-------|-------|-------|--------|-------|-------|-------|-------|--------| | Priors | Model | R-1 | R-2 | R-L | BLEU | BScore | R-1 | R-2 | R-L | BLEU | BScore | | Random | 14.08 | 3.80 | 13.88 | 3.46 | 86.88 | 8.31 | 1.01 | 8.07 | 0.80 | 85.88 | | | SBERT | 53.14 | 23.10 | 49.13 | 20.04 | 92.24 | 40.49 | 10.82 | 33.57 | 10.53 | 89.31 | | | No Priors | T5 | 69.22 | 36.82 | 64.13 | 34.11 | 94.52 | 54.67 | 19.65 | 47.22 | 17.40 | 91.25 | | BART | 66.20 | 31.85 | 59.84 | 30.09 | 93.72 | 52.36 | 17.27 | 44.49 | 14.72 | 90.85 | | | ↓ With Sociocultural Priors | | | | | | | | | | | | | DemoReq | T5 | 70.15 | 37.86 | 64.81 | 35.05 | 94.61 | 57.26 | 23.19 | 48.32 | 23.31 | 91.63 | | BART | 67.16 | 34.52 | 60.97 | 31.47 | 94.00 | 54.55 | 19.11 | 45.69 | 17.62 | 90.95 | | | DemoAll | T5 | 70.40 | 38.14 | 64.98 | 35.07 | 94.60 | 57.22 | 23.14 | 48.36 | 21.05 | 91.59 | | BART | 66.14 | 32.21 | 59.72 | 31.36 | 93.88 | 52.43 | 16.51 | 43.52 | 13.23 | 90.78 | | | Personality | T5 | 69.68 | 38.31 | 64.74 | 35.27 | 94.47 | 57.41 | 23.08 | 48.72 | 21.37 | 91.61 | | BART | 67.12 | 34.36 | 61.34 | 32.10 | 93.88 | 52.89 | 18.85 | 45.07 | 15.55 | 90.92 | | | Morality | T5 | 69.82 | 37.96 | 64.35 | 34.53 | 94.63 | 58.06 | 23.67 | 48.85 | 22.62 | 91.76 | | BART | 67.78 | 34.47 | 61.49 | 32.25 | 94.25 | 53.46 | 18.49 | 45.73 | 14.95 | 90.93 | | | All | T5 | 70.39 | 38.27 | 65.49 | 34.01 | 94.66 | 57.64 | 23.13 | 48.79 | 22.22 | 91.68 | | BART | 67.66 | 34.45 | 62.28 | 31.59 | 93.95 | 52.12 | 18.13 | 44.51 | 15.96 | 90.92 | | | Priors | Random | BERT | RoBERTa | XLNet | |-----------------------------|----------|--------|-----------|---------| | None | 0.50 | 0.57 | 0.57 | 0.57 | | ↓ With Sociocultural Priors | | | | | | DemoReq | - | 0.52 | 0.55 | 0.52 | | DemoAll | - | 0.59 | 0.63 | 0.62 | | Personality | - | 0.57 | 0.67 | 0.64 | | Morality | - | 0.57 | 0.64 | 0.61 | | All | - | 0.57 | 0.65 | 0.63 | target and rationale from THE CLUE GIVER. ## 4.4 Augmenting With Sociocultural Priors We hypothesize that players' backgrounds influence Codenames gameplay. To this end, we encode background player information for each task. For each dimension described in §3.4, we encode an attribute/answer pair (e.g. age: 22) for each survey question. Then, we prepend all attributes to the encoded strings for each outlined task (§4), using a unique token to delimit attributes for THE CLUE GIVER and THE GUESSER. $\mathbf{in_{socio}=\{BOS,GIVER,Clue\;Giver_{Attr:A},}$ GUESSER, Guesser_Attr:A} + int If a player did not respond to a specific attribute, we replace the attribute/answer pair with None. From our sociocultural priors (§3.4), we have 5 ablations: DemoReq, DemoAll, Personality, Morality, and All (concatenating and modeling all ablations). We additionally use no priors as a baseline, using in instead of in*socio* to test our hypothesis. ## 5 Experiment Setup Baselines and Dataset Splits For generation baselines, we use two Seq2Seq models: T5 (Raffel et al., 2020) and BART (Lewis et al., 2020). We optimize the associated language modeling objective across our tasks. 
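A minimal sketch, under our own assumptions about serialization, of how the sociocultural prefix from §4.4 could be prepended to a task input (here, the clue-generation string sketched earlier) and scored with T5's language-modeling objective; the attribute names/values and the plain GIVER/GUESSER marker words are illustrative (the paper uses unique delimiter tokens), and this is not the authors' training code.

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast


def with_sociocultural_prefix(task_input, giver_attrs, guesser_attrs):
    """Prepend attribute/answer pairs for both players; missing answers become None."""
    def fmt(attrs):
        return " ".join(f"{k}: {'None' if v is None else v}" for k, v in attrs.items())
    return f"GIVER {fmt(giver_attrs)} GUESSER {fmt(guesser_attrs)} {task_input}"


giver = {"age": "22", "country": "US", "native english": "yes"}
guesser = {"age": "47", "country": "IN", "native english": None}
clue_task = "AVO death NEU check receipt TGT fall drop"   # clue-generation input (Table 1)

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

batch = tokenizer(with_sociocultural_prefix(clue_task, giver, guesser),
                  return_tensors="pt")
labels = tokenizer("slip", return_tensors="pt").input_ids   # gold clue word
loss = model(input_ids=batch.input_ids,
             attention_mask=batch.attention_mask,
             labels=labels).loss                            # standard LM objective
print(float(loss))
```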
Additionally, we experiment with two retrieval baselines for all generation tasks: (1) randomly selecting a generation from the train set and (2) selecting the nearest k-N inputs using pretrained SentenceBERT (Reimers and Gurevych, 2020) or fastText (Bojanowski et al., 2017). Retrieval baselines yield insight into how well offthe-shelf pretrained models capture sociocultural diversity. For classification, we experiment with BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019). Models are base variants, and results are averaged over 5 runs. For each task, we split *clue givers* into 80-10-10 train/val/test, since all tasks depend on initial clue giver choices. Importantly, **a single clue giver's** data is not distributed across splits, since clue givers may reuse clues/strategies. Evaluation Metrics We use a range of metrics to generation tasks. Rationale generation tasks (Target §4.1.3 & Guess §4.2.2) output entire sentences; therefore, we report F-1 scores from ROUGE-(1, 2, L) (Lin, 2004), BLEU (Papineni et al., 2002), and BERTScore (Zhang et al., 2020). For tasks that generate a single or set of words where order does not matter, (Guess Selection §4.2.1; Clue Generation §4.1.2), we report only ROUGE-1 and averaged word vector (fastText) cosine similarity. ## 6 Generation Results & Discussion Including cultural priors improves modeling performance across all tasks. For generation problems, T5 generally outperforms BART, and our retrieval baselines lag behind more complex models. Finally, we conduct *a qualitative analysis* of 20 random samples from each task. Picking Targets and Guesses From our results (Table 2), we find that selecting guesses is an easier modeling task than picking target words, likely because the input for selecting a guess contains the clue word. Intuitively, selecting target words is more arbitrary than selecting a guess from a clueespecially since our generation task does not enforce guess correctness. Our models reflect this observation. Guess Selection has R-1 scores that are, on average, twice as good as Target Word Selection (Target 34 vs. Guess 66). Furthermore, Guess Selection only requires demographics (DemoReq) to maximize performance, unlike **Morality** for Target Words. Regardless, both tasks see R-1 increase by ≈ 2 points over no prior baselines. Looking at model outputs between the **None** and **Morality**, we observe that models generate words like Well/*Grace* instead of Death/*Poison* and vice versa, depending on player background. Generating a Clue for Targets Moving to our clue generation models, we again find that including sociocultural priors improves model performance (Table 3). Highest R-1 scores (26.54) occur when using **Morality** as a prior, resulting in a ≈ 2 pt. R-1 and 4 pt. cos-similarity increase when compared to a no prior baseline. We also suspect that selecting target words and generating a hint are interrelated processes: annotators are likely thinking about clues/targets in parallel. Therefore, the same Morality prior results in maximized performance. While there are themes related to Morality in clue differences for a target word (accident → death vs. lucifer; or fair → equal vs. good), we also find that generations are *more specific* given sociocultural priors. 
Consider these generated target → clue pairs ✓ with and ✗ without priors: - match → ✗ game ✓ cricket - bond → ✗ connection ✓ james - undertaker → ✗ funeral ✓ wrestler Each ✓ example generates a clue that relies on shared cultural background: specifically, knowing that cricket is a sport; that James Bond is a popular character; and that the Undertaker is a wrestler. More details can be found in Appendix C, Table 6. Clue Generation Errors Across Sociocultural Subtypes Despite jointly modeling cross-cultural information, our performance is far from perfect. Generating successful clues is a core element of Codenames; however, our exact match accuracy on clue generation is only ≈ 26%. To understand errors, we sample 100 generated clues from the Clue Generation Task, and identify errors and differences between (socioculturally) generated clues and the ground truth label. For 43 samples, we notice that sociocultural priors have *no effect* on clue generation; the output is identical to the *no prior* model for the given target word. In these instances, we suspect that our models fail to exploit common ground between a giver/guesser, yielding the same clue as without sociocultural priors. Upon further analysis, we observe that these errors occur frequently (37 samples) when *both* the clue giver and guesser are white or from North America. Because these demographics are already over-represented in our dataset, we suspect that the model simply ignores over-informative sociocultural priors. Errors also occur because clues are over (20 instances, e.g. "guevera" instead of "overthrow") or underspecified (13 instances, e.g. "supernatural" instead of "monster") compared to the gold clue. In 21/33 of these instances, there is a demographic mismatch between the clue-giver and guesser: the clue-giver and guesser do not share race/country demographics. In contrast to having no effect, we suspect that models mispredict the common ground between guesser/giver. We also judge 18 generation errors to be of similar specificity to the target word—prefixes/suffixes of the gold label—or completely unrelated to the gold clue (6 instances). Rationalizing Targets and Guesses Beyond generating target words and guesses, we ask models to explain how a target or guess is related to a clue word (e.g. James Bond is a movie character). Again, we find that providing contextual priors improves performance (Table 4). For Target Rationale Generation, models see maximized performance when all priors are included, while Guess Rationale generation sees improvements for **Morality.** Like with Clue Generation, we find that improvements in Guess Rationale are from increased specificity (e.g. "actors are cast" → "actors are part of a cast"; "money is center" → "money is the center of everything"). While qualitative differences are clear for Guess Rationale, Target Rationale results are more subtle: improvements stem from minor variations in the type of framing ("a kind of" vs. "a type of") used by the annotator. Additional generations can be found in Appendix C, Table 7. Classifying Pragmatic Failure We find that classification performance across each architecture is maximized when using sociocultural priors during training (Table 5). While BERT sees reduced improvement (an increase of only +0.02 F-1 over a no-prior baseline), XLNet and RoBERTa see maximum increases of +0.07 and +0.10 respectively. Both XLNet and RoBERTa see these improvements across the same **Personality** setting. 
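For the pragmatic-success classifier evaluated in Table 5, here is a hedged sketch of scoring one serialized turn with a RoBERTa sequence classifier; the input layout loosely follows the {unselected, target, rationale, clue} encoding of Table 1 plus the sociocultural prefix, but the exact string format is assumed, and the classification head below is untrained (in practice it would be fine-tuned on CULTURAL CODES).

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# One turn, serialized with illustrative marker words: sociocultural prefix,
# unselected board words (UN), target/rationale pair (TR), and the clue (CLUE).
text = ("GIVER age: 22 country: US GUESSER age: 47 country: IN "
        "UN fall drop check receipt TR fall slip means to fall CLUE slip")
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
# Probability that the guesser interprets the clue as intended (random here,
# since the head is freshly initialized rather than fine-tuned).
prob_correct = torch.softmax(logits, dim=-1)[0, 1].item()
print(round(prob_correct, 3))
```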
Sociocultural priors improve performance across mirroring and evaluating pragmatic inference. A Word on Word Vector Baselines Surprisingly, retrieving nearest words using a word vector approach (fastText) performs poorly for both Clue and Guess Generation (Tables 2 & 3). We suspect that pretrained vectors fail to capture sociocultural inference in word association tasks. ## 7 Conclusion Language is grounded in rich sociocultural context. To underscore this context, we propose a setting that captures the diversity of pragmatic inference across sociocultural backgrounds. With our Codenames Duet dataset (7K turns across 156 players), we operationalize cross-cultural pragmatic inference. Across our experiments, we detail improvements in mirroring/evaluating inferences when using sociocultural priors. Our work highlights how integrating these priors can align models toward more socially relevant behavior. ## 8 Limitations Cross-Cultural Inference Beyond Codenames Our work explores sociocultural pragmatic inference in a very limited setting, using a core vocabulary of just 100 words. Despite this limitation, we find significant diversity in our dataset; furthermore, our models successfully capture these diverse inferences. While a limitation of our work is its focus on a single setting, we expect domains outside of Codenames to see similar variance. Understanding and highlighting miscommunication in dialog—due to culture-dependent misinterpretation—is one such extension. These domains are likely much nosier than Codenames; we urge future work to further investigate them. Spurious Correlations across Sociocultural Factors Across all tasks but one (Target Rationale Generation §4.1.3), jointly modeling all sociocultural priors does not result in the highest performing model. Because our sociocultural factors already correlate with each other (§3.4), we suspect that modeling all features may be redundant, adding spurious correlations and resulting in overfitting. Improved modeling methodology and careful regularization may address these issues; we leave these experiments for future work. Bigger Models and Task Specific Modeling Currently, we evaluate small Seq2Seq models due to computational constraints; however, evaluation of 0-shot and few-shot performance on larger language models (e.g. GPT-3) is necessary. Given the changing state of the Codenames board—along with evidence that LLMs struggle with theory-ofmind-esque perspective taking (Sap et al., 2022)— our dataset can serve as a challenging benchmark for sociocultural understanding. However, successfully encoding game state into prompts for LLMs may require experimentation. Finally, our current task formulation and modeling setup are straightforward: we simply encode all information *in-context* and do not assume recursive reasoning like in RSA (Goodman and Frank, 2016). Future work can explore these directions. Human Evaluations Our evaluation is limited to automatic metrics and qualitative analysis. Evaluating cross cultural generation *depends* on the evaluator's own culture. Each generation depends on the player's sociocultural background; finding evaluators who match the player may be prohibitive. ## 9 Ethics Broadly, our work models user background to determine the choices they make. While we focus on a fairly harmless setting (Codenames), our operationalization can be used in harmful ways (e.g. tracking and modeling user behavior without consent). 
Future work that uses sociocultural information should only be applied to settings where there is no foreseeable harm to end-users. Furthermore, learning sociocultural associations can introduce positive and negative stereotypes; documenting and reducing harmful stereotypes is an important avenue for future work. Finally, we emphasize that our work is not evidence for *linguistic determinism*: sociocultural variation in language can influence but not **determine** thought. ## Acknowledgements We are thankful to the members of SALT Lab for their helpful feedback on the draft. We are also thankful for the helpful feedback from Jing Huang and Rishi Bommasani. Caleb Ziems is supported by the NSF Graduate Research Fellowship under Grant No. DGE-2039655. This research was supported, in part, by MURI-ONR-N00014-20-S-F003 on Persuasion, Identity, and Morality in SocialCyber Environments, as well as a DARPA grant HR00112290103/HR0011260656. ## References Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1173– 1182, Austin, Texas. Association for Computational Linguistics. Abhilasha Ashok Kumar, Ketika Garg, and Robert Hawkins. 2021. Contextual flexibility guides communication in a cooperative language game. In *Proceedings of the Annual Meeting of the Cognitive Science* Society, volume 43. Yuwei Bao, Sayan Ghosh, and Joyce Chai. 2022. Learning to mediate disparities towards pragmatic communication. In *Proceedings of the 60th Annual Meeting* of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2829–2842. Basil Bernstein. 2003. *Class, codes and control: Applied studies towards a sociology of language*, volume 2. Psychology Press. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146. Noam Brown and Tuomas Sandholm. 2017. Libratus: Beating top humans in no-limit poker. In *Neural* Information Processing Systems. Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known english word lemmas. Behavior research methods, 46(3):904–911. Chris Callison-Burch, Gaurav Singh Tomar, Lara J Martin, Daphne Ippolito, Suma Bailis, and David Reitter. 2022. Dungeons and dragons as a dialog challenge for artificial intelligence. *ArXiv preprint*, abs/2210.07109. Herbert H Clark. 1996. *Using language*. Cambridge university press. Herbert H Clark. 1998. 4 communal lexicons. Tonia Crawford, Sally Candlin, and Peter Roger. 2017. New perspectives on understanding cultural diversity in nurse–patient communication. *Collegian*, 24(1):63–69. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Baxter S Eaves Jr, Naomi H Feldman, Thomas L Griffiths, and Patrick Shafto. 2016. Infant-directed speech is consistent with teaching. *Psychological* review, 123(6):758. Penelope Eckert and Sally McConnell-Ginet. 2013. Language and gender. Cambridge University Press. 
Meta Fundamental AI Research Diplomacy Team (FAIR)†, Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al. 2022. Human-level play in the game of diplomacy by combining language models with strategic reasoning. *Science*, 378(6624):1067–1074. Susan T Fiske and Martha G Cox. 1979. Person concepts: The effect of target familiarity and descriptive purpose on the process of describing others 1. *Journal of Personality*, 47(1):136–161. Michael Franke and Gerhard Jäger. 2016. Probabilistic pragmatics, or why bayes' rule is probably important for pragmatics. *Zeitschrift für sprachwissenschaft*, 35(1):3–44. Daniel Fried, Jacob Andreas, and Dan Klein. 2018. Unified pragmatic models for generating and following instructions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1951–1963, New Orleans, Louisiana. Association for Computational Linguistics. Liye Fu, Susan Fussell, and Cristian Danescu-NiculescuMizil. 2020. Facilitating the communication of politeness through fine-grained paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5127–5140, Online. Association for Computational Linguistics. Aparna Garimella, Carmen Banea, and Rada Mihalcea. 2017. Demographic-aware word associations. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 2285–2295, Copenhagen, Denmark. Association for Computational Linguistics. Noah D Goodman and Michael C Frank. 2016. Pragmatic language interpretation as probabilistic inference. *Trends in cognitive sciences*, 20(11):818–829. Jesse Graham, Brian A Nosek, Jonathan Haidt, Ravi Iyer, Koleva Spassena, and Peter H Ditto. 2008. Moral foundations questionnaire. *Journal of Personality and Social Psychology*. Lisa J Green. 2002. *African American English: a linguistic introduction*. Cambridge University Press. Thomas L Griffiths, Charles Kemp, and Joshua B Tenenbaum. 2008. Bayesian models of cognition. William B Gudykunst and Young Yun Kim. 1984. *Communicating with strangers: An approach to intercultural communication*. Addison Wesley Publishing Company. Jonathan Haidt. 2012. *The righteous mind: Why good* people are divided by politics and religion. Vintage. Jonathan Haidt and Jesse Graham. 2007. When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. *Social Justice Research*, 20(1):98–116. Evelyn Hatch et al. 1992. Discourse and language education. Cambridge University Press. Robert XD Hawkins, Andreas Stuhlmüller, Judith Degen, and Noah D Goodman. 2015. Why do you ask? good questions provoke informative answers. In *CogSci*. Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, et al. 2022. Challenges and strategies in cross-cultural nlp. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6997–7013. Geert H Hofstede. 2001. *Culture's consequences: Comparing values, behaviors, institutions and organizations across nations*. sage. Samee Ibraheem, Gaoyue Zhou, and John DeNero. 2022. Putting the con in context: Identifying deceptive actors in the game of mafia. 
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 158–168, Seattle, United States. Association for Computational Linguistics. Julian Jara-Ettinger, Hyowon Gweon, Laura E Schulz, and Joshua B Tenenbaum. 2016. The naïve utility calculus: Computational principles underlying commonsense psychology. *Trends in cognitive sciences*, 20(8):589–604. Catalina Jaramillo, Megan Charity, Rodrigo Canaan, and Julian Togelius. 2020. Word autobots: Using transformers for word association in the game codenames. In *Proceedings of the AAAI Conference on* Artificial Intelligence and Interactive Digital Entertainment, volume 16, pages 231–237. Alan Jern, Christopher G Lucas, and Charles Kemp. 2017. People learn other people's preferences through inverse decision-making. *Cognition*, 168:46– 64. Oliver P John, Eileen M Donahue, and Robert L Kentle. 1991. Big five inventory. *Journal of Personality and* Social Psychology. Aditya Joshi, Pushpak Bhattacharyya, Mark Carman, Jaya Saraswati, and Rajita Shukla. 2016. How do cultural differences impact the quality of sarcasm annotation?: A case study of Indian annotators and American text. In Proceedings of the 10th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 95–99, Berlin, Germany. Association for Computational Linguistics. Jihen Karoui, Farah Benamara, Véronique Moriceau, Viviana Patti, Cristina Bosco, and Nathalie AussenacGilles. 2017. Exploring the impact of pragmatic phenomena on irony detection in tweets: A multilingual corpus study. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 262–272, Valencia, Spain. Association for Computational Linguistics. Fereshte Khani, Noah D. Goodman, and Percy Liang. 2018. Planning, inference and pragmatics in sequential language games. *Transactions of the Association* for Computational Linguistics, 6:543–555. Andrew Kim, Maxim Ruzmaykin, Aaron Truong, and Adam Summerville. 2019. Cooperation and codenames: Understanding natural language processing via codenames. In *Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital* Entertainment, volume 15, pages 160–166. Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. 2020. Will I sound like me? improving persona consistency in dialogues through pragmatic selfconsciousness. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 904–916, Online. Association for Computational Linguistics. Alena Korshuk. 2007. Learning more about cultures through free word association data. Collin J Kovacs, Jasper M Wilson, and Abhilasha A Kumar. 2022. Fast and frugal memory search for communication. In *Proceedings of the Annual Meeting of the Cognitive Science Society*, volume 44. Divya Koyyalagunta, Anna Sun, Rachel Lea Draelos, and Cynthia Rudin. 2021. Playing codenames with language graphs and word embeddings. Journal of Artificial Intelligence Research, 71:319–346. Abhilasha A Kumar, Mark Steyvers, and David A Balota. 2021. Semantic memory search and retrieval in a novel cooperative word game: A comparison of associative and distributional semantic models. *Cognitive Science*, 45(10):e13053. William Labov. 2011. Principles of linguistic change, volume 3: Cognitive and cultural factors, volume 3. John Wiley & Sons. 
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Angeline Lillard. 1998. Ethnopsychologies: cultural variations in theories of mind. *Psychological bulletin*, 123(1):3. Angeline Lillard. 1999. Developing a cultural theory of mind: The ciao approach. Current Directions in Psychological Science, 8(2):57–61. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *ArXiv preprint*, abs/1907.11692. Kate Loveys, Jonathan Torrez, Alex Fine, Glen Moriarty, and Glen Coppersmith. 2018. Cross-cultural differences in language markers of depression online. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 78–87, New Orleans, LA. Association for Computational Linguistics. George A Miller. 1974. Psychology, language, and levels of communication. In *Human communication*. John Wiley. George A. Miller. 1992. WordNet: A lexical database for English. In *Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York,* February 23-26, 1992. Joan G Miller. 1984. Culture and the development of everyday social explanation. Journal of personality and social psychology, 46(5):961. Will Monroe, Robert X.D. Hawkins, Noah D. Goodman, and Christopher Potts. 2017. Colors in context: A pragmatic neural model for grounded language understanding. Transactions of the Association for Computational Linguistics, 5:325–338. Aaron J Moss, Cheskie Rosenzweig, Jonathan Robinson, and Leib Litman. 2020. Demographic stability on mechanical turk despite covid-19. *Trends in cognitive sciences*, 24(9):678–680. Ira Noveck. 2018. *Experimental pragmatics: The making of a cognitive science*. Cambridge University Press. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Christopher Potts. 2012. Goal-driven answers in the cards dialogue corpus. In Proceedings of the 30th West Coast Conference on Formal Linguistics, pages 1–20. Cascadilla Proceedings Project. James E Purpura. 2004. *Assessing grammar*, volume 8. Cambridge University Press. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Yael Ravin and Claudia Leacock. 2000. *Polysemy: Theoretical and computational approaches*. OUP Oxford. Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. 
In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics. Maarten Sap, Ronan LeBras, Daniel Fried, and Yejin Choi. 2022. Neural theory-of-mind? on the limits of social intelligence in large lms. *ArXiv preprint*, abs/2210.13312. Raheleh Saryazdi, Joanne Nuque, and Craig G Chambers. 2022. Pragmatic inferences in aging and humanrobot communication. *Cognition*, 223:105017. Wilbur Schramm. 1954. How communication works. The process and effects of mass communication, 3:26. Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. 2020. Mastering atari, go, chess and shogi by planning with a learned model. *Nature*, 588(7839):604–609. Sheng Shen, Daniel Fried, Jacob Andreas, and Dan Klein. 2019. Pragmatically informative text generation. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4060–4067, Minneapolis, Minnesota. Association for Computational Linguistics. Blum-Kulka Shoshana, Juliane House, and Gabriele Kasper. 1989. Cross-cultural pragmatics: Requests and apologies. *Grazer Linguistische Studien*. Richard A Shweder. 1984. Anthropology's romantic rebellion against the enlightenment, or there's more to thinking than reason and evidence. *Culture theory:* Essays on mind, self, and emotion, pages 27–66. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. *nature*, 529(7587):484–489. Felix Soldner, Verónica Pérez-Rosas, and Rada Mihalcea. 2019. Box of lies: Multimodal deception detection in dialogues. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1768–1777, Minneapolis, Minnesota. Association for Computational Linguistics. Lucia Specia, Stella Frank, Khalil Sima'an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In *Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers*, pages 543–553, Berlin, Germany. Association for Computational Linguistics. Darcy Sperlich, Jaiho Leem, and Eui-Jeen Ahn. 2016. The interaction of politeness systems in Korean learners of French. In Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation: Oral Papers, pages 163–171, Seoul, South Korea. Stefanie Stadler. 2012. Cross-cultural pragmatics. The Encyclopedia of applied linguistics, pages 1–8. Naoko Taguchi. 2012. Context, individual differences and pragmatic competence. In *Context, Individual* Differences and Pragmatic Competence. Multilingual Matters. Jenny Thomas. 1983. Cross-cultural pragmatic failure. Applied linguistics, 4(2):91–112. EV Tikhonova. 2014. Linguistic diagnosing of religious relationships through word association responses. In Conference proceedings of international multidisciplinary scientific conference on social sciences and arts, volume 3, pages 505–516. Chvátil Vlaada. 2015. Codenames - rules - czech games edition | boardgame publisher. 
Phillip Wolff and Kevin J Holmes. 2011. Linguistic relativity. Wiley Interdisciplinary Reviews: Cognitive Science, 2(3):253–265. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 5754–5764. Yuan Yao, Haoxi Zhong, Zhengyan Zhang, Xu Han, Xiaozhi Wang, Chaojun Xiao, Guoyang Zeng, Zhiyuan Liu, and Maosong Sun. 2021. Adversarial language games for advanced natural language intelligence. In Proceedings of AAAI. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In *8th International* Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Pei Zhou, Andrew Zhu, Jennifer Hu, Jay Pujara, Xiang Ren, Chris Callison-Burch, Yejin Choi, and Prithviraj Ammanabrolu. 2022. An ai dungeon master's guide: Learning to converse and guide with intents and theory-of-mind in dungeons and dragons. *ArXiv* preprint, abs/2212.10060. Caleb Ziems, Jane Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2022. The moral integrity corpus: A benchmark for ethical dialogue systems. In *Proceedings of the 60th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers), pages 3755–3773. ## A Finalized Codenames Word List We sample from the following list of 100 words: luck, grace, soul, fair, life, pass, revolution, change, charge, degree, force, code, genius, compound, time, wake, plot, draft, ghost, play, part, spell, well, point, link, mass, disease, sub, state, alien, space, mine, ray, millionaire, agent, bond, unicorn, figure, war, cycle, boom, sound, trip, centaur, death, club, crash, angel, cold, center, spring, round, date, press, cast, day, row, wind, fighter, embassy, beat, leprechaun, comic, pitch, mount, march, fall, undertaker, green, switch, strike, king, superhero, capital, slip, lead, check, lap, mammoth, air, match, spy, roulette, contract, witch, stock, light, drop, spot, novel, vacuum, cover, scientist, tag, conductor, field, racket, poison, ninja, opera. ## B Reformatting Rationales Using Gpt-3 Some annotators wrote verbose rationales (*I think* fall happens after you slip), while other annotators were more succinct (*fall after slip*). To prevent models from learning grammar variation across annotators, we normalize our text using GPT-3. We use the following prompt, using hand-written few-shot examples. Some of the examples are unchangedwe include them in the prompt to demonstrate positive examples to the model. Normalize the text, removing determiners like "the" and "a" at the start of a sentence, along with any pronouns. Correct spelling and grammar mistakes. If possible, the final text should be formatted with the clue first and the target last or the target first and the clue last. 
clue: "sub" target: "sandwich" text: "you can make a sub, which is a type of sanwich" output: "sub is a type of sandwich" clue: "die" target: "cliff" text: "you may die if you fall off a cliff" output: "die if fall off a cliff" clue: "explosion" target: "boom" text: "it makes sound" output: "explosion makes boom" clue: "superman" target: "superhero" text: "most famous superhero" output: "superman is most famous superhero" clue: "night" target: "club" text: "i love night club" output: "night club is a kind of club" clue: "horn" target: "air" text: "an air horn is a type of horn" output: "air horn is a type of horn" clue: "ivy" target: "poison" text: "poison ivy is a well known plant" output: "poison ivy is a well known plant" clue: "month" target: "march" text: "march is a month" output: "march is a month" clue: "{clue}" target: "{target}" text: "{text}" output: " ## C Example Generations Here, we include example generations for a subset of our tasks, illustrating the influence of sociocultural factors on generated Codenames gameplay. ## C.1 Clue Generation Below, we highlight more clues generated with- /without sociocultural priors. Note how some of the without generations are euro-centric: space → 6563 nasa, {revolution, king} → war; adding priors creates more specific clues. However, this isn't always true: target words {pass, check} → leads to poker instead of overtake when conditioned on priors. We suspect that the average player in our pool is not aware of how {pass, check} are associated with poker, resulting in a more generic generation. | Target | Without | With | Gold | |------------------|-----------|---------|-----------| | revolution, king | war | guevara | overthrow | | check | mate | inspect | examine | | space | nasa | galaxy | universe | | compound | wall | house | together | | pass, check | overtake | poker | go | Table 6: Clue generations with/without sociocultural priors, given target words on the board ## C.2 Clue Framing Additional generations can be found in Table 7. Again, we observe that adding sociocultural priors increases relation specificity. ## D Annotation Task Details D.1 Qualification Test To qualify for the HIT, workers were required to complete a consent form detailing dataset collection and release; and were expected to watch an instructional video outlining game rules. Then they had to pass the following qualifying test, answering at least 6 out of 7 questions correctly. 1. **True or False:** "angry dog" is an example of a clue you could give. [*Answer*: **False**] 2. **True or False:** you and your partner have different lists of black (assassin) words. [*Answer*: True] 3. **True or False:** it is possible to skip a turn without guessing. [*Answer*: **False**] 4. **True or False:** the tan "down" arrow indicates that you guessed the word wrong, while the tan "up" arrow indicates that your partner guessed it wrong. [*Answer*: **True**] 5. **Multiple Choice:** Which of the following kinds of phrases does not follow from our list of target rationales types? [*Answer*: (b)] (a) "a computer has a mouse" (b) "a doctor is smart" (c) "a dog is a kind of animal" (d) "a disease causes people to be sick" 6. 
**Multiple Choice:** How many guesses do you get (assuming there are still more words left to guess) [*Answer*: (d)] (a) you get three guesses each turn (b) the number of guesses you get is the same as the number of target words your partner's clue (c) as long as you keep picking green words, you can keep guessing, up to the number of target words in your partner's clue (d) as long as you keep picking green words, you can keep guessing without any limit, even if you guess more than the number of target words in your partner's clue 7. **Multiple Choice:** During the 8th timer token in the video, it looked like my grid froze and I couldn't make any more guesses. Why did this happen? [*Answer*: (b)] (a) I guessed an assassin word (b) I already guessed all my partner's words correctly (c) I clicked the "end game" button (d) My partner left the game ## D.2 Demographic, Personality, And Moral Questionnaires Before starting any HITs, workers also had to complete three standardized surveys about their moral foundations, personality, and demographic information. The survey questions and worker statistics are given as follows. ## D.2.1 Worker Demographics Questionnaire. Please answer these 8 questions about yourself. 1. With what gender do you identify? {Woman, Man, Transgender, Non-binary / nonconforming, Other} 2. What is your age? {0-17 years old, 18-22 years old, 22-30 years old, 30-45 years old, 45+} | Target | Clue | Without | With | Gold | | | |----------|------------|----------------------|----------------------|-------------------------|------|----| | explode | boom | explode causes boom | bomb explodes with a | explosions | make | a | | boom | boom sound | | | | | | | horse | unicorn | a unicorn is a horse | unicorn is a type of | unicorns are similar to | | | | horse | horses | | | | | | | racket | tennis | tennis has racket | a racket is used in tennis | tennis uses a racket | | | | day | month | day is month | month has many days | 30 days in a month | | | Table 7: Example Rationales for Clues, with/without background priors. With priors, we observe that rationales become more specific, mentioning explicit relations between the target and clue. 3. Which best describes your race or ethnicity? {African-American/Black, Asian, Latino or Hispanic, Native American, Native Hawaiian or Pacific Islander, White / Caucasian} 4. In which continent are you located? {North America, Central / South America, Europe, Africa, Asia, Australia} 5. What is your highest level of education? {Some High School / No Diploma, High School Diploma, Associate's Degree / Trade School, Master's Degree, Doctorate Degree} 6. What is your marital status? {Single and never married, Married or in a domestic partnership, Widowed, Divorced, Separated} 7. Which of the following would you consider your native language {English, Arabic, French, Mandarin, Spanish, Other} 8. If applicable, please specify your religion {*Buddhism, Catholicism/Christianity, Hinduism, Islam, Judaism, Other*} Results. Of the 153 unique players, 124 are from the U.S, 12 are from India, 8 are from Brazil, 3 from the U.K, 2 from Canada, and the rest are single players from the following 7 countries: Indonesia, Costa Rica, France, South Africa, Germany, and Portugal. ## D.2.2 Worker Personality Big 5 Personality Questionnaire. Please answer these 10 questions about yourself on the following scale: [-2] Strongly Disagree; [-1] Disagree; [0] Neutral; [1] Agree; [2] Strongly Agree. 1. I see myself as someone who does a thorough job. 2. 
I see myself as someone who is reserved. 3. I see myself as someone who is outgoing, sociable. 4. I see myself as someone who gets nervous easily. 5. I see myself as someone who has few artistic interests. 6. I see myself as someone who is relaxed, handles stress well. 7. I see myself as someone who tends to find fault with others. 8. I see myself as someone who is generally trusting. 9. I see myself as someone who tends to be lazy. 10. I see myself as someone who has an active imagination. ## D.2.3 Moral Foundations And Political Leaning. Moral Foundations Theory. Following Haidt and Graham (2007), we use the five-foundation theory of moral reasoning to understand our players' values and leanings. This theory does not give explicit definitions for the five foundations, but following recent work by Ziems et al. (2022), we can assume the following definition sketches: 1. **Care:** wanting someone or something to be safe, healthy, and happy. Harm: wanting someone or something to suffer physically, emotionally, socially, intellectually, or spiritually. 2. **Fairness:** wanting to see individuals or groups treated equally or equitably Cheating: wanting to see unfairness, injustice, bias, exclusion, or discrimination. 3. **Loyalty:** wanting unity and seeing people keep promises or obligations to an in-group. Betrayal: wanting to see people lie, abandon an in-group, or become isolated and divided. 4. **Authority:** wanting to respect social roles, duties, privacy, peace, and order. Subversion: wanting to see people disrespect, disobey or cause disorder, challenge the statusquo, and do what they do not have permission to do. 5. **Sanctity:** wanting people and things to be clean, pure, innocent, and holy. Degradation: wanting people to follow selfish or crude desires and do things that make them or others dirty, corrupt, sick, repulsive, or perverted. Moral Foundations Questionnaire We use the associated Moral Foundations Questionnaire, which we shortened to 12 questions as follows. Please answer 12 questions about "right" and "wrong." The prompts are the same in each case, but the considerations are different. 1. When you decide whether something is right or wrong, to what extent are the following considerations relevant to your thinking? Use the following scale: [0] Not at all relevant (It has nothing to do with my judgments of right and wrong); [1] Not very relevant; [2] Slightly relevant; [3] Somewhat relevant; [4] Very relevant; [5] Extremely relevant (It is one of the most important factors when I judge right and wrong) (a) Whether or not someone suffered emotionally. (b) Whether or not some people were treated differently than others. (c) Whether or not someone's action showed love for his or her country. (d) Whether or not someone showed a lack of respect for authority. (e) Whether or not someone violated standards of purity and decency. (f) Whether or not someone was good at math. (g) Whether or not someone cared for someone weak or vulnerable. (h) Whether or not someone acted unfairly. (i) Whether or not someone did something to betray his or her group. (j) Whether or not someone conformed to the traditions of society. 2. Which of the following best describes your political views? (a) Liberal (b) Moderate Liberal (c) Moderate Conservative (d) Conservative (e) Libertarian ## D.3 Instructions For Writing Rationales We explain that rationales should use at least 3 words to describe the connection between the clue and the target. 
Annotators were encouraged to be creative while trying to use one of the structures below. We imposed these structures for the sake of regularity. 1. MERONYM x has y (a) a dog has a tail (b) the pacific ocean has water 2. HYPERNYM x is a kind of y (a) bunkbed is a kind of bed (b) whisper is a kind of communication 3. SYNONYM x means the same thing as y (a) car means the same thing as automobile (b) sluggish means the same thing as slow 4. ANTONYM x means the opposite of y (a) civilian means the opposite of soldier (b) fast means the opposite of slow 5. ADJECTIVE x describes y (a) brave describes a firefighter (b) scary describes a clown 6. AGENT x does y (a) a star does twinkle police do make an arrest 7. CAUSE x causes y (a) a bed causes people to sleep (b) an oven causes food to bake (c) a disease causes people to be sick 8. PATIENT x acts on y (a) a wrench acts on a bolt (b) a doctor acts on a patient 9. LOCATION x has an environment y (a) a star has an environment firmament ## E Training And Hyperparameters For our generation tasks, we perform use 5e-5 as our initial learning rate and perform a hyperparameter search over {1...20} epochs. For classification, we use the same splits and perform a hyperparameter sweep over learning rates ({1e-4, 5e-4, 1e-5, 5e-5, 1e-6, 5e-6}) and epochs ({1...15}). All models were trained on an NVIDIA A100 GPU. Across all experiments, GPU compute time was around 4-5 days. ## F Artifact Details We use several models in our paper for their intended retrieval or generation task. Each model has its own license and number of parameters, listed below: 1. T5 (Raffel et al., 2020), 220M parameters, is under the Apache 2.0 License. 2. BART (Lewis et al., 2020), 140M, is under the Apache 2.0 License. 3. fastText (Bojanowski et al., 2017) is under the MIT License. 4. SentenceBERT (Reimers and Gurevych, 2020), 33M variant, is under the Apache 2.0 License. 5. BERT (Devlin et al., 2019) base, 110M, is under the Apache 2.0 License. 6. XLNet (Yang et al., 2019) base, 110M, is under the Apache 2.0 License. 7. RoBERTAa (Liu et al., 2019) base, 123M, is under the Apache License 2.0. We plan on releasing CULTURAL CODES and corresponding code under Creative Commons Attribution Share Alike 4.0 International. While our released dataset has extensive demographic information, we do not collect any identifiers that can uniquely isolate a person (e.g. name, MTurk ID, etc.) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 ✓ A2. Did you discuss any potential risks of your work? Section 9 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract + Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 for our introduced dataset, and we cite all baseline models (Section 5) ✓ B1. Did you cite the creators of artifacts you used? Section 5 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix Section E ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Yes, Section 9 ✓ B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Appendix E ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 and Section 5 ## C ✓ **Did You Run Computational Experiments?** Yes, Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix D and E The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 and Appendix D ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Our results are averaged across 5 runs; Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Yes, Section 3.3 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Yes, Section 3.3 and Appendix C ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 3.3 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix C.1 ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Section 3 ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 3.4
lai-etal-2023-werewolf
Werewolf Among Us: Multimodal Resources for Modeling Persuasion Behaviors in Social Deduction Games
https://aclanthology.org/2023.findings-acl.411
Persuasion modeling is a key building block for conversational agents. Existing works in this direction are limited to analyzing textual dialogue corpora. We argue that visual signals also play an important role in understanding human persuasive behaviors. In this paper, we introduce the first multimodal dataset for modeling persuasion behaviors. Our dataset includes 199 dialogue transcriptions and videos captured in a multi-player social deduction game setting, 26,647 utterance-level annotations of persuasion strategy, and game-level annotations of deduction game outcomes. We provide extensive experiments to show how dialogue context and visual signals benefit persuasion strategy prediction. We also explore the generalization ability of language models for persuasion modeling and the role of persuasion strategies in predicting social deduction game outcomes. Our dataset can be found at https://persuasion-deductiongame.socialai-data.org. The codes and models are available at \url{https://github.com/SALT-NLP/PersuationGames}.
# Werewolf Among Us: Multimodal Resources For Modeling Persuasion Behaviors In Social Deduction Games Bolin Lai1∗ Hongxin Zhang2∗ Miao Liu3∗ Aryan Pariani1∗ **Fiona Ryan**1 Wenqi Jia1 Shirley Anugrah Hayati4 James M. Rehg1 **Diyi Yang**5 1Georgia Institute of Technology 2Shanghai Jiao Tong University 3Meta AI 4University of Minnesota 5Stanford University {bolin.lai, apariani3, fkryan, wenqi.jia, rehg}@gatech.edu icefox@sjtu.edu.cn, miaoliu@meta.com hayat023@umn.edu, diyiy@cs.stanford.edu ## Abstract Persuasion modeling is a key building block for conversational agents. Existing works in this direction are limited to analyzing textual dialogue corpora. We argue that visual signals also play an important role in understanding human persuasive behaviors. In this paper, we introduce the first multimodal dataset for modeling persuasion behaviors. Our dataset includes 199 dialogue transcriptions and videos captured in a multi-player social deduction game setting, 26, 647 utterance level annotations of persuasion strategy, and game level annotations of deduction game outcomes. We provide extensive experiments to show how dialogue context and visual signals benefit persuasion strategy prediction. We also explore the generalization ability of language models for persuasion modeling and the role of persuasion strategies in predicting social deduction game outcomes. Our dataset can be found at https://persuasion-deductiongame. socialai-data.org. The codes and models are available at https://github.com/ SALT-NLP/PersuationGames. ## 1 Introduction As humans, from childhood, we develop the ability to attribute mental belief states to ourselves and others (Premack and Woodruff, 1978). Moreover, we constantly exhibit persuasive behaviors to influence and even reshape the belief states of others during our daily social interactions (Lonigro et al., 2017). An automatic system with the ability to understand human persuasion strategies and deduce human belief states may enable more proactive humancomputer interaction, and facilitate collaborative decision-making processes. * denotes equal contribution. Prior works targeted at understanding the persuasion strategies utilized on online forums like Reddit, crowd-funding platforms (Yang et al., 2019; Chen and Yang, 2021; Atkinson et al., 2019), and in 1-on1 dialogues under simulated scenarios through the Amazon Mechanical Turk platform (Wang et al., 2019; Chawla et al., 2021). However, the persuasive behaviors during naturalistic group discussions with face-to-face conversation remain unexplored. More importantly, daily human social interaction is multimodal by nature. Both verbal communication (*e.g.* language and audio) and non-verbal communication (*e.g.* gesture and gaze behavior) are essential for analyzing persuasive behavior. Moreover, resources for understanding how persuasion strategies affect decision and deduction outcomes during social interactions are missing from the language technologies community. To bridge these gaps, we introduce the first multimodal benchmark dataset for modeling persuasive behaviors during multi-player social deduction games. As shown in Fig. 1, our dataset is captured in a naturalistic setting where groups of participants play social deduction games1. Our dataset contains both video recordings and the corresponding dialogue transcriptions. The video data is sourced from both the Ego4D Social dataset (Grauman et al., 2022) and YouTube videos. 
Our dataset also has annotations for persuasion strategy at the utterance level and the voting outcome of each participant during the social deduction game. We benchmark our dataset by providing comprehensive experimental results and analyzing the role 1We consider two games in our dataset: One Night Ultimate Werewolf and The Resistance: Avalon. Appendix A describes the rules of these two games. ![1_image_0.png](1_image_0.png) of the video modality and contextual cues in designing computational models for persuasion behavior prediction. We also provide results to show how different computational models generalize across different data sources and different games. Our contributions are summarized as follows: - We present the first multimodal dataset for persuasion modeling. Our dataset is collected in naturalistic social game scenarios with intensive face-to-face group conversations. - We conduct comprehensive experiments to show the importance of context and visual signals for persuasion strategy prediction. - We provide additional experimental results to investigate model generalization on the persuasion modeling task and discuss how persuasion strategy influences the game voting outcome. ## 2 Related Work Persuasive Behaviors Understanding. A few previous works introduce datasets for the computational modeling of persuasion (Yang et al., 2019; Chen and Yang, 2021; Chawla et al., 2021; Wang et al., 2019; Luu et al., 2019; Atkinson et al., 2019). As summarized in Table 1, existing works collectpersuasion language data from online platforms | Prior Works | Interaction Modalities | Setting | | |-------------------------|--------------------------|----------------------|-------------| | (Yang et al., 2019) | Online | Text | Loan | | (Chen and Yang, 2021) | Online | Text | Request | | (Chawla et al., 2021) | 1 on 1 | Text | Negotiation | | (Wang et al., 2019) | 1 on 1 | Text | Charity | | (Luu et al., 2019) | 1 on 1 | Text | Debate | | (Atkinson et al., 2019) | Online | Text | Reddit | | Ours | Group | Text+Video Deduction | | | Discussion | +Audio | Game | | where real-time communication is not available. Moreover, these datasets mainly contain 1-on-1 conversations and lack conversations among multiple speakers. In contrast to these prior efforts, our dataset targets capturing persuasive behaviors in a social group setting of 4 to 6 people. Multimodal Social Interaction. A rich set of literature has addressed the problem of multimodal sentiment analysis (Xu et al., 2021b; Li et al., 2020; Lu et al., 2019; Xu et al., 2021a). We refer to a recent survey (Kaur and Kautish, 2022) for a more detailed discussion on this topic. The Ego4D Social benchmark (Grauman et al., 2022) includes the tasks of identifying who is looking at and talking to the camera wearer using video and audio. Bara et al. (2021) adopts a multimodal approach to understanding dialogue behavior in a simulated setting. The most relevant work is Bai et al. (2021), which adopts a multimodal method to predict the debate outcome from a TV show. In contrast, we address the challenging tasks of predicting the utterance-level persuasion strategy and the deduction outcome from a naturalistic conversation, which requires a richer understanding of high-level social behaviors. Computational Modeling of Deduction Games. Prior works have investigated computational models for social deduction games. 
One stream of work seeks to analyze strategies and develop AI agents that play deduction games using a game theory approach (Nakamura et al., 2016; Serrino et al., 2019; Chuchro, 2022; Braverman et al., 2008; Bi and Tanaka, 2016). These works focus on models of the state of the game alone and do not address understanding the dialogue and persuasive behaviors that often occur while playing. More relevantly, Chittaranjan and Hung (Chittaranjan and Hung, 2010) developed a model for predicting Werewolf game outcomes from player speaking and interrupting behaviors. Recently, (FAIR) introduced a game agent–CICERO, that achieves human-level performance in the Diplomacy game by leveraging a language model with planning and reinforcement learning algorithms. In contrast to these prior works, we present the first work for understanding persuasive behaviors in a group setting from a multimodal perspective. ## 3 Dataset 3.1 Data Collection To benchmark the generalization ability of the computational models, we collect our data from different sources, as detailed in this section. This work was approved by an Institutional Review Board. Ego4D Dataset We first leverage a subset of Ego4D Social dataset (Grauman et al., 2022) for our study. This subset captures videos of groups of participants playing social deduction games. This subset contains 7.3 hours of videos with 40 games of One Night Ultimate Werewolf and 8 games of The Resistance: Avalon. Note that the Avalon data has a relatively small scale, and therefore is only used to evaluate the cross-domain game generalization ability of our models. To ensure all participants are visible in the frame, we use third-person videos instead of the first-person videos from Ego4D for visual representation learning and transcription. YouTube Video We retrieve the top search results for YouTube videos using the keywords of "one night ultimate werewolf" and "ultimate werewolf". We manually select from the searched videos to make sure the they adopt a similar game setup as the Ego4D data. Specifically, we filter all results with more than 5 players or fewer than 4 players, and those using game roles from the expansion package. We finally collect a final set of 14.8 hours of videos with 151 clips of completed games that adopt the same game setup as the Ego4D dataset and have fully visible game outcomes. We will release the YouTube URLs for the selected videos. ## 3.2 Data Annotation Video Annotation Most Ego4D and YouTube videos contain multiple games. Therefore, we first annotate the starting time (when the game narration voice begins) and the ending time (right before the voting stage) of each game. We then ask the annotators to look through each game clip and annotate the starting role, ending role, and the voting outcome of each player. Transcription We use an automatic transcription service *rev.com* to generate the transcript of each game clip. We further ask annotators to carefully examine the alignment of the videos and transcripts, and manually correct any errors in the transcripts. Please refer to Appendix B for more details. Persuasion Strategy Annotation Inspired by prior psychology studies and other works on predicting persuasion strategies (Chawla et al., 2021; Carlile et al., 2018; Yang et al., 2019; Chen and Yang, 2021), we propose six persuasion tactics that are frequently adopted in social deduction games. - **Identity Declaration**: *State one's own role or* identity in the game. This is a game-specific persuasion tactic. 
- **Accusation**: *Claim someone has a specific identity or strategic behavior*. Accusation, similar to Undervalue-Partner (Chawla et al., 2021), is a generic proself behavior.
- **Interrogation**: *Questions about someone's identity or behavior*. Interrogation is a proself strategy related to individual preferences.
- **Call for Action**: *Encourage people to take an action during the game*. Call for Action relates to coordination for persuasion (Chawla et al., 2021), which is a generic prosocial behavior.
- **Defense**: *Defend oneself or someone else against an accusation or defend a game-related argument*. An utterance demonstrates Defense when the persuader tries to use credentials to earn others' trust or justify their earlier decisions.
- **Evidence**: *Provide a body of game-related facts or information*. Evidentiality is a general persuasion tactic that has been widely studied in previous works (Carlile et al., 2018).

| Label | Example | Ego4D Count | Ego4D AUL | Ego4D α | YouTube Count | YouTube AUL | YouTube α |
|---|---|---|---|---|---|---|---|
| Identity Declaration | "I'll just come out and say I was a villager, so I have no idea what's going on." | 293 | 9.87 | 0.90 | 1066 | 10.43 | 0.87 |
| Accusation | "So James might be the werewolf." | 669 | 11.28 | 0.74 | 2830 | 11.06 | 0.67 |
| Interrogation | "Who did you rob?" | 695 | 7.56 | 0.80 | 3407 | 7.66 | 0.90 |
| Call for Action | "We shouldn't vote to not kill anyone. And then there could also be no werewolf." | 236 | 9.99 | 0.78 | 1163 | 9.53 | 0.71 |
| Defense | "I think that you accused me of being a Werewolf very quickly." | 570 | 10.04 | 0.62 | 2696 | 9.75 | 0.80 |
| Evidence | "If you swapped these two, he is not the werewolf." | 489 | 11.45 | 0.75 | 1740 | 9.80 | 0.60 |

Table 2: Example utterances and per-class annotation statistics (Count, AUL, and Krippendorff's α) for the Ego4D and YouTube data.

Following previous work (Chawla et al., 2021), we annotate the persuasion strategy at the utterance level. We provide the annotators with a website annotation tool adapted from Hayati et al. (2020) for our annotation task (see Appendix C for details). To properly train the annotators, we first ask all three annotators to annotate the same subset of dialogues and compute inter-annotator agreement using the nominal form of Krippendorff's alpha (Krippendorff, 2018). We then discuss the disagreements with the annotators and come up with a general rule to address them during the annotation process. We repeat this process until the annotators reach a Krippendorff's alpha greater than 0.6 for each category. Despite the subjectivity of persuasion strategies, previous work (Chawla et al., 2021) suggests that annotations from 3 annotators are reliable enough when they reach a Krippendorff's alpha greater than 0.6. In Table 2, we report the per-class Krippendorff's alpha value for the final round of inter-annotator agreement calculation. After the annotator training phase is completed, we ask the three annotators to independently annotate the rest of the Ego4D and YouTube data.

Annotation Statistics Our dataset has 5,815 utterances from the Ego4D data and 20,832 utterances from the YouTube data. More than 49.2% of Ego4D utterances are labeled as no strategy because of the naturalistic social setting, while only 37.9% of YouTube utterances are labeled as no strategy, since players in the YouTube videos are more proficient at the game and focused more on gameplay.
Furthermore, as shown in Table 2, the adopted persuasion strategies have an imbalanced distribution, where "Accusation", "Interrogation", and "Defense" are the most frequent strategies for both the Ego4D and YouTube videos. The annotators are recruited from a Startup Data Platform dedicated to research projects. All annotators are paid hourly at a rate above the federal minimum. ## 4 Strategy Prediction Given an utterance and its corresponding video segment, we seek to predict the persuasion strategies adopted in the utterance. We first leverage a pretrained language model (Devlin et al., 2019; Liu et al., 2019) as the text encoder to obtain the utterance embedding, and a vision transformer (Fan et al., 2021) to obtain the visual embedding. We then concatenate the textual and visual features to predict the persuasion strategy. Additionally, we study the impact of textual context by including prior utterances as input. ![4_image_0.png](4_image_0.png) ## 4.1 Methodology In our dataset, an utterance may be labeled with multiple persuasion strategies. For instance, "I'm a villager and she is the werewolf." is labeled as both identity declaration and *accusation*. Therefore, we formulate this task as a binary classification problem for each strategy and consider an utterance as non-strategic if it gets negative labels in all strategies. The most straightforward approach to solving this task is fine-tuning a pre-trained language model, which is referred to as *Base* model. In addition, we consider the following approaches: ## Modeling With Context Embedding. Since Some persuasion strategies cannot be easily recognized from one single utterance, we further consider a model with additional context (prior utterances) for each utterance. This is denoted as *Base + C*. Modeling with Video Representation. We further leverage the non-verbal signals by combining video features with the text representation for persuasion modeling. We directly use a pre-trained Vision Transformer to extract video representations, and fuse the video and text representations before feeding them into the classification layer as shown in Fig. 2. We refer to this model as *Base + V*. Late fusion of Video and Context. Finally, we adopt a late fusion model (*Base + C + V*) that incorporates both video features and context cues for persuasion strategy prediction. ## 4.2 Model Details We perform our experiments using both **BERT** (Devlin et al., 2018) and **RoBERTa** (Liu et al., 2019) as backbones for the text encoder. We use the bert-base-uncased and roberta-base models from Huggingface (Wolf et al., 2020) in our implementation. We adopt MViT-B-24 (Fan et al., 2021) pretrained on Kinetics-400 (Kay et al., 2017) as the video encoder. Moreover, we also implement a multi-task model (Chawla et al., 2021) as an additional baseline, referred to as **MT-BERT**. Context and video features are incorporated into MT-BERT in the same way as BERT and RoBERTa. Base Model. For base models, we obtain the textual input T from the current utterance only. Then we input T into a text encoder ϕ followed by a classifier to get the strategy prediction. Base + C. We first concatenate the k previous utterances C1, C2, · · · , Ck with an *[EOS]* token to get context C, and then concatenate this with the current utterance U using a *[SEP]* token to get the final input T . 
Formally, we have

$$C=C_{1}\;[\mathrm{EOS}]\;C_{2}\;[\mathrm{EOS}]\cdots C_{k},\tag{1}$$
$$\mathcal{T}=C\;[\mathrm{SEP}]\;U.\tag{2}$$

Base + V. We use a video encoder ψ to extract the visual representation ψ(V) of the corresponding video clip V. During training, video features are concatenated with the text representation ϕ(T) and fed into a fusion layer, which uses a linear mapping function $W_{F}^{T}$ and an activation function $Tanh(\cdot)$. Finally, we apply a linear classifier $W_{P}^{T}$ to obtain the prediction logits, which can be formulated as

$$logits=W_{P}^{T}\cdot Tanh\left(W_{F}^{T}\cdot\left(\phi(\mathcal{T})\oplus\psi(\mathcal{V})\right)\right),\tag{3}$$

where ⊕ denotes the concatenation of two vectors. Note that we fix the parameters of the video encoder during training. Please refer to Appendix D for more details on visual representation extraction.

Base + C + V. We further combine + C and + V via late fusion. Formally, we denote the probability predictions of the two models after softmax as $P_{C}$ and $P_{V}$. The output after linear combination is formulated as

$$P_{C,V}=(1-\lambda)P_{C}+\lambda P_{V},\tag{4}$$

where $\lambda$ is a scalar that balances $P_{C}$ and $P_{V}$.

![5_image_0.png](5_image_0.png)

## 4.3 Training Details

All models are trained using cross-entropy loss. For training hyper-parameters, we do a grid search over learning rates in {1e−5, 3e−5, 5e−5} and batch sizes in {16, 8} for *Base* models. We then fix the optimal hyper-parameters for subsequent models incorporating context or videos. We train all models with the optimal learning rates and batch sizes for 10 epochs using AdamW (Loshchilov and Hutter, 2017) as the optimizer. We run all the experiments with three random seeds and report the average score and the standard deviation.

## 4.4 Experiment Results

Evaluation Metrics. Following Chawla et al. (2021), we report the F1 score for each persuasion strategy category, the average F1 score over all categories, and Joint Accuracy (Joint-A). Note that a prediction is considered correct under Joint Accuracy only when all categories are predicted correctly.

Ablations on Additional Context. We first present a systematic ablation study of how incorporating textual context may improve the performance of persuasion strategy prediction. Specifically, we feed a fixed number of previous utterances together with the current utterance into the backbone language encoders for classification. As shown in Fig. 3, the additional context can boost the performance of all baseline models. However, setting the context length too long may confuse the model, especially for categories that can be reliably predicted from the current utterance alone (*e.g.* Identity Declaration and Interrogation). We present the per-class performance in Appendix E. Our empirical finding is that a context length of 5 consistently improves the performance of all three baseline models. Therefore, we adopt a context length of 5 as the default setting for the rest of our experiments.

Modeling with Video Representation. We further study how incorporating video representation improves the performance of persuasion modeling. The results are summarized in Table 3. Importantly, video features improve the BERT model by 0.8% on both the Ego4D and YouTube datasets. However, RoBERTa + V only beats the RoBERTa model by 0.2% on the YouTube dataset.
This may be because the YouTube dataset has more training data which enables the RoBERTa model to learn a robust representation without video feature embedding. Interestingly, including video features has a larger performance boost on predicting "Accusation", "Interrogation", and "Call for Action", which is likely due to the more frequent non-verbal communication (*e.g.* pointing to someone, raising hands, turning the head) during these persuasive behaviors. Off-the-shelf GPT-3 Inference. Prompting Large Language Models off-the-shelf to solve NLP tasks has received increasing attention (Brown et al., 2020). Here, we experiment with GPT-3-175B on our benchmark under three settings: zero-shot, one-shot and five-shot. Specifically, we use the text-davinci-002 engine from OpenAI's API2 with temperature 0 to produce a deterministic answer. The detailed templates for different settings are shown in Appendix G. The result is shown in Table 4. Using GPT-3 off-the-shelf achieves a nontrivial performance (Joint-A of 52.0 v.s. 38.8 for majority on YouTube data). Adding more examples further boosts performance, though it is still inferior to the fine-tuned models. Data Domain Generalization. We conduct additional experiments to show the generalization ability of language models on persuasion prediction. To begin with, we use the model trained on the | Method | Identity | Accusation | Interrogation | Call for Action | Defense | Evidence | Avg F1 | Joint-A | | |-----------------|------------|--------------|-----------------|-------------------|-----------|------------|----------|-----------|----------| | BERT | 82.6±1.1 | 48.8±4.8 | 82.8±0.2 | 39.4±9.6 | 29.3±5.5 | 54.2±2.5 | 56.2±2.5 | 65.1±1.6 | | | BERT + C | 79.9±1.6 | 52.0±3.3 | 81.0±1.1 | 49.5±3.2 | 33.8±0.5 | 57.1±1.6 | 58.9±0.6 | 65.0±0.2 | | | BERT + V | 81.5±3.5 | 52.1±1.9 | 83.3±1.6 | 42.4±3.8 | 28.4±5.1 | 52.8±1.0 | 56.7±1.2 | 64.5±1.2 | | | BERT + C + V | 84.5±4.6 | 52.8±2.0 | 82.7±0.4 | 47.3±3.4 | 34.5±1.7 | 54.9±1.1 | 59.4±1.6 | 66.5±0.3 | | | RoBERTa | 81.7±2.6 | 51.7±0.9 | 83.4±0.9 | 43.3±8.7 | 33.1±2.2 | 51.7±2.1 | 57.5±1.4 | 63.4±0.5 | | | RoBERTa + C | 81.5±0.7 | 59.4±2.4 | 83.5±1.1 | 43.7±3.7 | 33.0±3.1 | 52.4±2.9 | 58.9±1.2 | 64.6±0.7 | | | RoBERTa + V | 79.8±0.6 | 51.4±1.0 | 82.8±2.1 | 50.1±5.3 | 31.3±3.1 | 54.6±3.2 | 58.3±0.7 | 64.0±0.9 | | | RoBERTa + C + V | 82.7±0.2 | 58.5±2.3 | 83.8±1.2 | 46.1±4.5 | 35.4±3.4 | 53.4±3.3 | 60.0±0.8 | 66.1±0.9 | | | MT-BERT | 80.9±1.3 | 51.5±3.3 | 83.0±1.3 | 56.6±2.3 | 25.9±2.0 | 53.6±1.3 | 58.6±0.3 | 65.5±0.8 | | | MT-BERT + C | 79.8±2.2 | 54.4±0.8 | 83.2±0.7 | 50.8±7.2 | 36.5±2.8 | 61.5±2.2 | 61.0±1.1 | 66.3±1.4 | | | MT-BERT + V | 79.9±1.6 | 51.9±0.8 | 84.8±2.4 | 53.9±4.5 | 35.4±2.2 | 53.3±1.0 | 59.8±0.7 | 62.1±3.4 | | | MT-BERT + C + V | 80.7±1.9 | 55.2±0.9 | 83.6±0.6 | 50.0±0.8 | 36.1±2.7 | 60.5±1.0 | 61.0±0.3 | 66.3±1.0 | | | Ego4D | BERT | 80.2±1.6 | 64.7±1.1 | 89.6±0.4 | 77.2±2.5 | 43.5±1.0 | 58.3±0.7 | 68.9±0.0 | 64.6±0.8 | | BERT + C | 82.6±0.7 | 66.7±1.0 | 89.6±1.5 | 78.1±2.4 | 45.7±1.1 | 59.7±1.1 | 70.4±0.3 | 64.4±1.0 | | | BERT + V | 82.4±0.5 | 65.4±1.4 | 89.7±0.1 | 78.0±0.8 | 45.3±2.8 | 58.4±1.3 | 69.9±0.4 | 66.2±0.5 | | | BERT + C + V | 83.6±0.1 | 67.2±1.2 | 90.2±1.0 | 78.5±1.6 | 46.6±1.1 | 59.9±1.0 | 71.0±0.2 | 66.7±0.5 | | | RoBERTa | 84.3±0.1 | 67.2±0.6 | 89.4±0.1 | 78.2±0.8 | 44.3±0.4 | 59.0±1.7 | 70.4±0.2 | 64.8±0.7 | | | RoBERTa + C | 82.4±0.3 | 67.0±1.1 | 90.2±0.0 | 77.1±1.0 | 46.1±0.7 | 59.9±0.7 | 70.5±0.3 | 64.7±0.6 | | | RoBERTa + V | 83.4±0.4 | 66.4±0.3 | 89.5±0.1 | 
78.7±2.0 | 46.6±0.6 | 59.0±1.0 | 70.6±0.1 | 65.3±1.2 | | | RoBERTa + C + V | 83.7±0.6 | 67.4±0.4 | 89.8±0.3 | 78.5±1.2 | 48.2±0.7 | 60.4±0.8 | 71.3±0.2 | 66.4±0.7 | | | MT-BERT | 80.7±0.4 | 65.1±1.5 | 88.5±0.8 | 76.2±2.2 | 42.3±1.5 | 57.4±1.3 | 68.4±0.3 | 65.6±1.1 | | | MT-BERT + C | 83.1±1.1 | 65.0±1.5 | 90.1±0.3 | 74.6±2.4 | 46.5±0.8 | 59.2±0.3 | 69.7±0.6 | 66.7±0.5 | | | MT-BERT + V | 82.8±0.6 | 68.5±1.0 | 89.3±0.7 | 75.6±2.8 | 47.8±0.3 | 59.6±0.8 | 70.6±0.8 | 66.9±0.4 | | | MT-BERT + C + V | 84.4±0.6 | 68.4±1.0 | 89.5±0.6 | 76.5±2.1 | 47.3±0.5 | 60.6±0.2 | 71.1±0.5 | 68.1±0.2 | | | YouTube | | | | | | | | | | Setting Ego4D YouTube Avg F1 Joint-A Avg F1 Joint-A Majority 0 52.5 0 38.8 Zero-Shot 35.4 58.5 40.3 52.0 One-Shot 40.7 56.3 47.2 53.2 Five-Shot 47.0 59.7 49.6 53.7 Table 4: GPT-3 results on Ego4D and YouTube data. YouTube data to make predictions on the Ego4D Werewolf testing data without any fine-tuning. As shown in Fig. 4, the resulting model achieves better performance than models trained only on Ego4D in most cases, due to the larger amount of available training data from the YouTube dataset. This also suggests that, for the text modality, the domain gap between the Ego4D and the YouTube data is small. We further fine-tune the model trained on the YouTube data with the Ego4D training data, and the resulting model performs even better. These results suggest promise in leveraging the large body of videos available online as a pre-training source for persuasion modeling in naturalistic social interactions. Another finding from our experiments is that the multi-task setting (MT-BERT) may compromise the model's generalization ability. We also find that including video representation cannot improve the model generalization ability (see more details in Appendix D), suggesting that the video modality domain gap between the two data sources is much larger than the text modality. Game Domain Generalization. We also study the model generalization ability on another social deduction game - Avalon. Werewolf and Avalon are vastly different in the game rules and winning conditions, especially because Werewolf has only one voting round per game, while Avalon has multiple rounds per game. Therefore, the persuasion strategies adopted in Avalon have a different distribution from Werewolf (see Appendix H). We run inference on the Avalon data using models trained only on the Ego4D Werewolf data without fine-tuning. Results are shown in Figure 5. Despite the large domain gap between the two games, our models achieve decent performance on the Avalon data. However, we find incorporating additional context has marginal performance improvements, and may even compromise the performance of the RoBERTa ![7_image_1.png](7_image_1.png) ![7_image_0.png](7_image_0.png) ![7_image_2.png](7_image_2.png) model. The detailed results of data and game domain generalization are shown in Appendix F. ## 5 Game Outcome Deduction In addition to predicting persuasion strategies, we further model the human deduction process by predicting the voting outcomes of each pair of players, i.e., whether player A (*voter*) votes for player B (*candidate*). Therefore, in a game of n players, there are C 2n (Combinations) of player pairs, corresponding to P 2 n (Permutations) data points. We merge all data points from the Ego4D Werewolf data and YouTube data to enlarge the dataset size, and split the resulting data into 2741/427/827 samples for train/val/test sets. 
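A minimal sketch of how these directed voter–candidate examples can be enumerated is given below; the field names (`votes`, `strategy_counts`) and data layout are hypothetical, not the released data format.

```python
from itertools import permutations

def build_vote_pairs(players, votes, strategy_counts):
    """Enumerate all P(n, 2) ordered (voter, candidate) examples for one game.

    `votes` maps each voter to the player they voted for; `strategy_counts`
    maps each player to a per-strategy usage vector (both hypothetical formats).
    """
    examples = []
    for voter, candidate in permutations(players, 2):
        examples.append({
            "voter_strategies": strategy_counts[voter],
            "candidate_strategies": strategy_counts[candidate],
            "label": int(votes.get(voter) == candidate),
        })
    return examples

# Toy game with three players; exactly one ordered pair per voter is positive
pairs = build_vote_pairs(
    players=["A", "B", "C"],
    votes={"A": "B", "B": "A", "C": "A"},
    strategy_counts={p: [0] * 7 for p in ["A", "B", "C"]},
)
assert sum(p["label"] for p in pairs) == 3
```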
Since each player is only allowed to vote for one player, the resulting data has an imbalanced distribution, with 20.4% of the samples being positive (positive indicates that the voter votes for the candidate).

## 5.1 Method

For deduction modeling, we encode the input with three embeddings: a 7 × 1 vector representing the persuasion strategy distribution (including non-strategy) adopted by the voter; a 7 × 1 vector representing the persuasion strategy distribution adopted by the candidate; and a 12 × 1 one-hot vector representing the starting role of the voter (One Night Werewolf has 12 roles in total). Therefore, the input for deduction is a 26 × 1 vector. We use a simple logistic regression model for deduction modeling. To address the class imbalance between positive and negative samples, we train the model with a weighted binary classification loss.

## 5.2 Experiment Results

Our model achieves an F1 of 32.7% and an AUC of 54.7%, outperforming random prediction, which obtains F1 and AUC of 28.6% and 50.0%, respectively. These results show the effectiveness of persuasion strategy usage and role as predictors of game-level outcomes. To analyze the contributions of the persuasion strategy embedding and the role embedding, we consider another model that only takes the persuasion strategy embeddings as input. This model achieves an F1 of 32.2% and an AUC of 54.6%. Overall, we find that the persuasion strategy embedding is more informative for predicting game outcomes than the role embedding.

We visualize the weights of the logistic regression in Fig. 6. Interestingly, for the positive prediction (the voter votes for the candidate), the weights of the candidate are higher than those of the voter. This indicates that a player's voting choice depends more on the candidate's behaviors, and confirms our intuition that players make their decisions based on candidates' arguments. As for the negative prediction, we see that evidence is the most important strategy for the candidate to negate suspicion. It confirms that players are inclined to trust those who provide more information and evidence to find the werewolf.

![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png)

(Fig. 6 panel labels: Voter, Candidate, "The voter doesn't vote")

## 6 Conclusion And Future Work

In this work, we introduce the first persuasion modeling dataset with multiple modalities and rich utterance-level persuasion strategy annotations. We design a computational model that leverages both textual and visual representations for understanding persuasion behaviors in social deduction games. Our experiments show that visual cues benefit model performance on persuasion strategy prediction. We encourage future work to explore the role of the audio modality in persuasion modeling and to investigate joint learning of multimodal representations for the social persuasion setting.

## Limitations

We only use pre-trained video transformers off-the-shelf to encode the videos, while more nuanced and specific utilization of other models can be explored to further improve the performance. There are also valuable egocentric videos and demographic statistics along with the Ego4D dataset that we have not yet incorporated in our approach. Due to the difficulty and cost of collecting videos with transcriptions and voting outcome annotations, the total number of games is insufficient to train a deep neural network for voting outcome deduction, though data augmentation techniques can be explored to mitigate this limitation.
## Ethics Statement How humans use persuasion strategies in their communication has been long studied in psychology, communication, and NLP (Hovland et al., 1953; Crano and Prislin, 2006; Petty and Cacioppo, 1986; Yang et al., 2019; Wang et al., 2019; Chen and Yang, 2021). We recognize that persuasion skills could be used for good and bad purposes. In this study, our goal is to study persuasive behaviors by multiple speakers through social deduction games. While we recognize that in both games of One Night Ultimate Werewolf and Avalon games players could use persuasion strategies for behaviors that perhaps are considered morally wrong, such as deception, bias, and emotional manipulation, our study does not encourage such behaviors. Instead, we aim to understand people's behavior in a group setting when persuasion happens. Having these persuasion skills could benefit people to perform well in their workplace, such as pitching their ideas, or advocating for peace-making (Simons, 1976). For our data collection and annotation process, this study has been reviewed and approved by our institution's internal review board. We obtain consent from the players who are recorded and deidentify personally identifiable information (PII), as part of the Ego4D efforts. Moreover, to mitigate potential risks of harmful usage of this dataset in the future, we ask any users to sign an online agreement before using our resources for their research as follows: "*I will not use this dataset for* malicious purposes (but not limited to): deception, impersonation, mockery, discrimination, manipulation, targeted harassment, and hate speech." ## Acknowledgements We are thankful to the members of SALT Lab for their helpful feedback on the draft. This research was supported, in part, by NSF CCRI Research Infrastructure CNS-2308994. ## References David Atkinson, Kumar Bhargav Srinivasan, and Chenhao Tan. 2019. What gets echoed? understanding the "pointers" in explanations of persuasive arguments. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2911–2921, Hong Kong, China. Association for Computational Linguistics. Chongyang Bai, Haipeng Chen, Srijan Kumar, Jure Leskovec, and VS Subrahmanian. 2021. M2p2: Multimodal persuasion prediction using adaptive fusion. IEEE Transactions on Multimedia. Cristian-Paul Bara, Sky CH-Wang, and Joyce Chai. 2021. MindCraft: Theory of mind modeling for situated dialogue in collaborative tasks. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1112–1125, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Xiaoheng Bi and Tetsuro Tanaka. 2016. Human-side strategies in the werewolf game against the stealth werewolf strategy. In International Conference on Computers and Games, pages 93–102. Springer. Mark Braverman, Omid Etesami, and Elchanan Mossel. 2008. Mafia: A theoretical study of players and coalitions in a partial information environment. The Annals of Applied Probability, 18(3):825–846. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. 
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Winston Carlile, Nishant Gurrapadi, Zixuan Ke, and Vincent Ng. 2018. Give me more feedback: Annotating argument persuasiveness and related attributes in student essays. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 621–631, Melbourne, Australia. Association for Computational Linguistics. Kushal Chawla, Jaysa Ramirez, Rene Clever, Gale Lucas, Jonathan May, and Jonathan Gratch. 2021. CaSiNo: A corpus of campsite negotiation dialogues for automatic negotiation systems. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3167–3185, Online. Association for Computational Linguistics. Jiaao Chen and Diyi Yang. 2021. Weakly-supervised hierarchical models for predicting persuasive strategies in good-faith textual requests. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12648–12656. Gokul Chittaranjan and Hayley Hung. 2010. Are you awerewolf? detecting deceptive roles and outcomes in a conversational role-playing game. In *2010 IEEE* International Conference on Acoustics, Speech and Signal Processing, pages 5334–5337. IEEE. Robert Chuchro. 2022. Training an assassin ai for the resistance: Avalon. *arXiv preprint arXiv:2209.09331*. William D Crano and Radmila Prislin. 2006. Attitudes and persuasion. *Annual review of psychology*, 57:345. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Meta Fundamental AI Research Diplomacy Team (FAIR)†, Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al. 2022. Human-level play in the game of diplomacy by combining language models with strategic reasoning. *Science*, 378(6624):1067–1074. Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. 2021. Multiscale vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6824–6835. Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. 2022. Ego4d: Around the world in 3,000 hours of egocentric video. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995–19012. Shirley Anugrah Hayati, Dongyeop Kang, Qingxiaoyang Zhu, Weiyan Shi, and Zhou Yu. 2020. INSPIRED: Toward sociable recommendation dialog systems. 
In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 8142–8152, Online. Association for Computational Linguistics. Carl Iver Hovland, Irving Lester Janis, and Harold H Kelley. 1953. *Communication and persuasion.* Yale University Press. Ramandeep Kaur and Sandeep Kautish. 2022. Multimodal sentiment analysis: A survey and comparison. Research Anthology on Implementing Sentiment Analysis Across Multiple Disciplines, pages 1846–1870. Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. 2017. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Klaus Krippendorff. 2018. *Content analysis: An introduction to its methodology*. Sage publications. Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu. 2020. Hero: Hierarchical encoder for video+language omni-representation pretraining. In Conference on Empirical Methods in Natural Language Processing. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Antonia Lonigro, Roberto Baiocco, Emma Baumgartner, and Fiorenzo Laghi. 2017. Theory of mind, affective empathy, and persuasive strategies in schoolaged children. *Infant and Child Development*, 26(6):e2022. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Chujie Lu, Long Chen, Chilie Tan, Xiaolin Li, and Jun Xiao. 2019. Debug: A dense bottom-up grounding approach for natural language video localization. In Conference on Empirical Methods in Natural Language Processing. Kelvin Luu, Chenhao Tan, and Noah A. Smith. 2019. Measuring online debaters' persuasive skill from text over time. *Transactions of the Association for Computational Linguistics*, 7:537–550. Noritsugu Nakamura, Michimasa Inaba, Kenichi Takahashi, Fujio Toriumi, Hirotaka Osawa, Daisuke Katagami, and Kousuke Shinoda. 2016. Constructing a human-like agent for the werewolf game using a psychological model based multiple perspectives. In 2016 IEEE Symposium Series on Computational Intelligence (SSCI), pages 1–8. IEEE. Richard E Petty and John T Cacioppo. 1986. The elaboration likelihood model of persuasion. In *Communication and persuasion*, pages 1–24. Springer. David Premack and Guy Woodruff. 1978. Does the chimpanzee have a theory of mind? *Behavioral and* brain sciences, 1(4):515–526. Jack Serrino, Max Kleiman-Weiner, David C Parkes, and Josh Tenenbaum. 2019. Finding friend and foe in multi-agent games. Advances in Neural Information Processing Systems, 32. Herbert W Simons. 1976. Persuasion. *Reading:* Addison-Wesley, 21. Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Persuasion for good: Towards a personalized persuasive dialogue system for social good. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 5635–5649, Florence, Italy. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. 
Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, and Luke Zettlemoyer. 2021a. Vlm: Task-agnostic video-language model pretraining for video understanding. *arXiv preprint* arXiv:2105.09996. Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, and Florian Metze Luke Zettlemoyer Christoph Feichtenhofer. 2021b. Videoclip: Contrastive pre-training for zero-shot video-text understanding. In *Conference on Empirical Methods in* Natural Language Processing. Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, and Eduard Hovy. 2019. Let's make your request more persuasive: Modeling persuasive strategies via semisupervised neural nets on crowdfunding platforms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3620–3630, Minneapolis, Minnesota. Association for Computational Linguistics. ## A Game Rules One Night Werewolf. In this game, players are divided into two teams - the team of villagers and the team of werewolves. In each game, players close their eyes in the night phase and take some actions (*e.g.* swapping cards) depending on their roles. Players' roles might be changed during the night, but they don't know their new roles except for a few special roles. Then all players open their eyes. The villager team needs to find the werewolf through communication and negotiation. The werewolf team must mislead the others and try to hide their identities. At the end of the game, everyone has to point out the most suspicious player. If at least one werewolf is voted out, the villager team wins the game. Otherwise, the werewolf team wins. We refer to https://en.wikipedia.org/wiki/ Ultimate_Werewolf\#One_Night_roles for detailed explanations of game rules and roles. The Resistance: Avalon. In this game, players are divided into two teams - the team of Minions and the team of Loyal Servants of Arthur. After shuffling and distributing cards to players, they secretly check their role cards and place them face down on the table. Each player will take turns serving as the Leader. In each round, the Leader proposes a Team to do a Quest, and all players are involved in discussing if the Team assignment is passed or rejected. After the Team Building phase, the approved Team will decide if the Quest is successful or not. In the Quest phase, the Good Team can only use the Quest Success card, and the Evil Team can use either Success or Fail card. The Good Team wins when three successful Quests are made, while the Evil Team wins when three failed Quests are made or the Evil players identify Merlin in the Good Team. We refer to https://en.wikipedia.org/wiki/The_ Resistance_(game)\#Avalon_variant for detailed explanations of game rules and roles. ## B Transcription Interface We provide a screenshot of the transcription tool (*rev.com*) in Fig 7. We upload video clips to this online platform for transcription. We also provide player names and roles involved in each game to make the transcription more accurate. As illustrated in Fig 7, they return the transcription of each utterance, the name of the speaker and the corresponding timestamp. 
Then we ask annotators to watch videos again and examine the alignment of videos and transcripts. Annotators also correct errors in speakers' names and texts.

## C Annotation Interface

We provide a screenshot in Fig. 8 of our interface used by annotators to annotate utterance-level persuasion strategies.

## D Details Of Video Representation

We now introduce video representation extraction. Given an utterance $U_i$, we first localize the corresponding video segment using the utterance timestamp $t_i$, and then approximate the duration of the utterance by $d_i = t_{i+1} - t_i$. We have an average duration of 2 seconds on both the Ego4D and YouTube datasets. To tolerate some misalignment of videos and transcripts, we set a 2-second time window for utterances shorter than 2 seconds, and hence the final duration of an utterance $U_i$ is $d'_i = \max(d_i, 2)$. Then we sample N frames out of the corresponding video segment with equal spacing, i.e., $\mathcal{V} = \{V_1, V_2, \ldots, V_N\}$. All videos in our dataset have an aspect ratio of 16:9. Hence we make three square crops on the left, center, and right of each frame to cover the entire view. Correspondingly, the visual embedding from the vision encoder is composed of three parts, i.e., $V_i = \{V_i^{left}, V_i^{center}, V_i^{right}\}$. We input the left crops, center crops, and right crops into the video encoder separately and obtain the corresponding representations, i.e., $\psi(\mathcal{V}) = \{\psi(\mathcal{V}^{left}), \psi(\mathcal{V}^{center}), \psi(\mathcal{V}^{right})\}$. The three video representations are flattened into a single vector when we concatenate them with the text representation. In our experiments, we adopt the 24-layer multiscale vision transformer (MViT) (Fan et al., 2021) pretrained on Kinetics-400 as the video encoder. The number of sampled frames N is set to 32 in our experiments. Note that we do not finetune the video encoder on our datasets because the performance of some models drops after finetuning due to overfitting.

In the experiments on Ego4D Werewolf data and YouTube data, the video features improve the performance prominently. However, the video features do not necessarily help with model generalization. When we apply the model *Base+V* trained on YouTube data to Ego4D data, the average F1 of BERT and RoBERTa drops by 0.8% and 3.0%, respectively, while the F1 of MT-BERT increases by 1.1% after involving video features. This suggests that a domain gap exists between the videos of the two datasets, caused by differences in camera locations, viewing angles, room brightness, and so on. Video models are sensitive to these visual differences, resulting in limited generalization performance across datasets. In contrast, players communicate in a similar way in different conditions, so the pure text model generalizes better to other data.

## E Per-Class Results For Experiments With Different Context Lengths

![12_image_0.png](12_image_0.png)

We showcase our experimental results including per-strategy scores on incorporating additional conversational context, in full detail in Table 5.

## F Experiments Of Domain Generalization

We demonstrate the detailed experiment results of data domain generalization (training models on YouTube data and testing on the Ego4D Werewolf test set), as well as game domain generalization (training models on Ego4D Werewolf data and testing on Avalon data). Results are reported in Table 6 and Table 7, respectively.
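To complement the description of visual representation extraction in Appendix D, the following is a minimal sketch of the frame sampling and left/center/right square cropping, assuming OpenCV; the actual extraction pipeline may differ.

```python
import cv2
import numpy as np

def sample_square_crops(video_path, start_s, dur_s, n_frames=32):
    """Sample n_frames evenly from [start_s, start_s + max(dur_s, 2)] and return
    left / center / right square crops of each 16:9 frame."""
    cap = cv2.VideoCapture(video_path)
    dur_s = max(dur_s, 2.0)  # 2-second minimum window, as in Appendix D
    left, center, right = [], [], []
    for t in np.linspace(start_s, start_s + dur_s, n_frames):
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000.0)
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]  # w > h for 16:9 video
        left.append(frame[:, :h])
        center.append(frame[:, (w - h) // 2:(w - h) // 2 + h])
        right.append(frame[:, w - h:])
    cap.release()
    return np.stack(left), np.stack(center), np.stack(right)
```

Each of the three crop stacks would then be encoded separately by the (frozen) MViT video encoder, and the resulting representations flattened and concatenated with the text representation.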
## G Detailed Prompt Templates Used For Gpt-3 For the prompt templates, we use the guideline and the persuasion strategy definitions provided to the annotators under the zero-shot setting, and append one/five more examples under one/five-shot setting. The detailed prompt template we used for GPT-3 inference is shown in Table 8. ## H Persuasion Strategy Annotation On Avalon Games Adjacent pie-charts comparing the distributions of annotated utterance-level persuasion strategies for One Night Ultimate Werewolf games and Avalon games in Ego4D are shown in Fig. 9. We can observe a different distribution of adopted persuasion strategies in the two games, suggesting a large game domain gap. | Method | Identity | Accusation Interrogation Call for Action Defense Evidence | Avg F1 | Joint-A | | | |-----------------------------|------------|-------------------------------------------------------------|-----------|-------------------------------------|-------------------------------------|-------------------------------------| | BERT | 82.6±1.1 | 48.8±4.8 | 82.8±0.2 | 39.4±9.6 | 29.3±5.5 54.2±2.5 56.2±2.5 65.1±1.6 | | | BERT + context1 | 80.4±0.8 | 49.0±3.6 | 82.6±0.4 | 46.4±8.3 | 29.1±2.8 53.8±3.6 56.9±0.7 64.0±0.9 | | | BERT + context3 | 81.7±1.3 | 51.8±4.7 | 81.1±2.6 | 45.7±4.6 | 32.5±2.4 52.2±0.9 57.5±1.6 64.8±1.0 | | | BERT + context5 | 79.9±1.6 | 52.0±3.3 | 81.0±1.1 | 49.5±3.2 | 33.8±0.5 57.1±1.6 58.9±0.6 65.0±0.2 | | | BERT + context7 | 80.7±3.1 | 47.4±5.4 | 80.7±1.7 | 38.6±12.0 | 34.7±2.2 55.3±0.7 56.3±2.4 63.5±1.0 | | | BERT + context9 | 77.9±0.1 | 47.5±2.7 | 78.5±1.9 | 43.0±5.3 | 31.8±0.9 54.1±3.5 55.5±0.7 63.7±0.9 | | | RoBERTa | 81.7±2.6 | 51.7±0.9 | 83.4±0.9 | 43.3±8.7 | 33.1±2.2 51.7±2.1 57.5±1.4 63.4±0.5 | | | RoBERTa + context1 79.9±2.8 | 53.1±0.6 | 82.1±0.9 | 41.8±7.9 | 34.1±1.4 55.2±2.9 57.7±1.4 64.1±0.7 | | | | RoBERTa + context3 81.7±1.0 | 53.9±3.9 | 82.3±1.6 | 39.6±9.0 | 35.4±2.9 54.0±3.8 57.8±2.1 64.1±1.5 | | | | RoBERTa + context5 81.5±0.7 | 59.4±2.4 | 83.5±1.1 | 43.7±3.7 | 33.0±3.1 52.4±2.9 58.9±1.2 64.6±0.7 | | | | RoBERTa + context7 78.6±1.7 | 55.5±0.5 | 80.6±0.4 | 38.2±4.0 | 30.1±4.9 51.9±3.2 55.8±1.2 62.4±2.3 | | | | RoBERTa + context9 80.2±1.8 | 56.0±2.2 | 83.0±1.4 | 42.5±10.4 | 32.0±2.4 53.5±1.7 57.9±1.8 63.0±0.6 | | | | MT-BERT | 80.9±1.3 | 51.5±3.3 | 83.0±1.3 | 56.6±2.3 | 25.9±2.0 53.6±1.3 58.6±0.3 65.5±0.8 | | | MT-BERT + context1 79.2±2.2 | 53.3±2.3 | 84.3±0.6 | 52.9±2.9 | 31.1±5.8 55.0±3.2 59.3±0.4 66.0±2.0 | | | | MT-BERT + context3 77.4±2.2 | 52.6±3.8 | 83.2±2.1 | 46.2±2.4 | 35.1±2.3 56.1±2.7 58.4±0.1 65.1±0.7 | | | | MT-BERT + context5 79.8±2.2 | 54.4±0.8 | 83.2±0.7 | 50.8±7.2 | 36.5±2.8 61.5±2.2 61.0±1.1 66.3±1.4 | | | | MT-BERT + context7 78.5±2.5 | 54.7±3.3 | 82.6±1.2 | 47.9±2.5 | 33.5±2.2 53.4±1.4 58.4±1.1 65.0±0.9 | | | | MT-BERT + context9 78.2±2.2 | 54.7±1.6 | 82.1±0.4 | 47.8±3.8 | 30.5±5.8 56.3±1.0 58.3±0.5 64.8±0.8 | | | | Ego4D | BERT | 80.2±1.6 | 64.7±1.1 | 89.6±0.4 | 77.2±2.5 | 43.5±1.0 58.3±0.7 68.9±0.1 64.6±0.8 | | BERT + context1 | 81.2±1.1 | 66.5±0.5 | 90.2±0.3 | 77.7±0.3 | 43.6±2.7 59.5±0.6 69.8±0.3 65.1±0.8 | | | BERT + context3 | 82.6±0.7 | 65.9±0.5 | 90.1±0.7 | 77.4±1.2 | 43.0±1.5 60.4±0.9 69.9±0.4 64.4±0.7 | | | BERT + context5 | 82.6±0.7 | 66.7±1.0 | 89.6±1.5 | 78.1±2.4 | 45.7±1.1 59.7±1.1 70.4±0.3 64.4±1.0 | | | BERT + context7 | 81.8±0.6 | 67.2±1.2 | 90.5±0.2 | 77.7±0.5 | 45.0±0.7 60.2±1.2 70.4±0.5 64.8±1.0 | | | BERT + context9 | 80.6±1.1 | 66.7±0.4 | 90.3±0.2 | 77.0±1.2 | 42.2±2.0 59.6±0.2 69.4±0.4 64.0±1.3 | | | RoBERTa | 84.3±0.1 | 
67.2±0.6 | 89.4±0.1 | 78.2±0.8 | 44.3±0.4 59.0±1.7 70.4±0.2 64.8±0.7 | | | RoBERTa + context1 83.3±0.2 | 67.0±0.3 | 89.9±0.2 | 78.4±0.9 | 43.4±2.7 59.7±0.5 70.3±0.5 65.7±1.0 | | | | RoBERTa + context3 82.7±1.5 | 67.8±0.1 | 90.3±0.4 | 77.4±0.4 | 43.1±1.5 61.0±1.9 70.4±0.2 65.5±0.4 | | | | RoBERTa + context5 82.4±0.3 | 67.0±1.1 | 90.2±0.0 | 77.1±1.0 | 46.1±0.7 59.9±0.7 70.5±0.3 64.7±0.6 | | | | RoBERTa + context7 83.5±0.8 | 66.0±0.6 | 90.2±0.4 | 77.8±0.3 | 46.6±1.5 58.4±1.1 70.4±0.3 65.1±1.3 | | | | RoBERTa + context9 82.9±2.0 | 66.6±0.7 | 90.6±0.2 | 75.5±0.5 | 46.8±0.9 58.9±1.1 70.2±0.3 64.9±0.9 | | | | MT-BERT | 80.7±0.4 | 65.1±1.5 | 88.5±0.8 | 76.2±2.2 | 42.3±1.5 57.4±1.3 68.4±0.3 65.6±1.1 | | | MT-BERT + context1 82.9±0.9 | 67.2±1.2 | 88.7±1.5 | 77.8±1.6 | 43.4±0.6 59.0±0.8 69.8±0.3 67.3±0.9 | | | | MT-BERT + context3 80.5±2.5 | 65.9±1.5 | 89.9±0.2 | 75.2±1.4 | 44.9±1.9 58.3±0.4 69.1±0.6 65.8±0.9 | | | | MT-BERT + context5 83.1±1.1 | 65.0±1.5 | 90.1±0.3 | 74.6±2.4 | 46.5±0.8 59.2±0.3 69.7±0.6 66.7±0.5 | | | | MT-BERT + context7 82.1±0.8 | 67.3±0.1 | 89.4±1.1 | 76.0±0.9 | 43.2±1.3 57.8±0.5 69.3±0.3 67.6±0.7 | | | | MT-BERT + context9 81.0±2.0 | 67.8±0.2 | 89.6±0.6 | 72.8±3.8 | 44.3±2.1 58.8±1.0 69.1±0.8 66.5±0.5 | | | | YouTube | | | | | | | Table 5: Experimental Results on incorporating the conversational context of different lengths for persuasion strategy prediction w.o. Fine Tuning w. Fine Tuning Method Identity Accusation Interrogation Call for Action Defense Evidence Avg F1 Joint-A BERT 82.0±1.2 53.9±1.6 84.1±0.9 53.0±4.1 33.9±0.5 53.5±3.9 60.1±0.7 65.6±1.0 BERT + C 83.6±0.8 55.7±1.1 85.6±1.0 46.5±2.7 34.5±2.5 60.9±3.2 61.1±0.9 65.6±1.3 RoBERTa 86.9±0.9 57.0±1.4 85.0±2.0 53.5±3.6 31.5±1.3 55.0±1.3 61.5±0.8 66.0±0.7 RoBERTa + C 82.5±2.4 56.3±2.7 86.2±0.6 50.7±4.1 37.6±1.8 59.6±1.8 62.2±1.3 67.6±0.6 MT-BERT 80.6±2.9 50.4±3.6 83.5±1.7 45.3±10.6 34.7±0.7 55.2±3.2 58.3±2.7 64.8±2.1 MT-BERT + C 81.8±2.1 53.8±2.7 83.4±1.9 44.1±8.2 35.9±1.7 53.5±2.1 58.7±2.3 66.3±2.0 BERT 82.0±1.4 54.9±0.9 82.8±0.6 53.0±1.9 29.9±1.0 61.7±0.7 60.7±0.2 68.1±0.2 BERT + C 84.1±0.1 55.6±3.5 86.1±0.4 49.9±2.2 32.6±1.5 61.5±2.4 61.6±1.1 69.0±0.8 RoBERTa 86.7±1.3 56.6±1.4 85.3±1.6 54.8±3.9 29.4±2.5 57.3±2.0 61.7±1.4 67.4±0.9 RoBERTa + C 84.0±1.5 58.9±1.6 84.9±0.2 52.4±4.4 38.0±2.3 62.5±2.4 63.4±1.7 69.1±0.6 MT-BERT 81.9±1.4 54.7±1.6 83.0±0.6 60.2±5.3 25.6±1.6 59.2±2.3 60.8±1.0 68.5±0.6 MT-BERT + C 83.7±0.9 54.3±2.4 84.5±1.1 53.8±3.2 33.8±3.4 58.1±2.8 61.4±1.0 70.0±1.5 Table 6: Data domain generalization experiments. We train models on Youtube data and test on Ego4D testing set. Then we fine-tune the models on Ego4D training set and test again. 
Label the Persuation Strategy for the Dialogues during Social Deduction Game ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) Aryan Step 2: Upload the appropriately formatted text file (.txt) of your desired transcript here: Choose File player2_Game9_1.txt Step 3: (Optional) If you saved your annotation progress specific to the uploaded transcript file above in your last session and wish to continue from where you left off, please upload the annotations csv (.csv) file that you had downloaded and saved in your last session here: Choose File No file chosen Step 4: Choose appropriate persuasion strategy(ies) or "No Strategy" for the utterances in the dialogs ![14_image_2.png](14_image_2.png) | Method | Identity | Accusation | Interrogation | Call for Action | Defense | Evidence | Avg F1 | Joint-A | |-------------|------------|--------------|-----------------|-------------------|-----------|------------|----------|-----------| | BERT | 66.1±3.7 | 37.2±4.9 | 73.3±3.7 | 20.9±7.3 | 23.5±6.3 | 26.6±5.3 | 41.3±3.3 | 62.9±0.9 | | BERT + C | 60.4±4.5 | 41.7±2.2 | 73.2±0.3 | 34.4±4.8 | 25.0±4.3 | 18.2±2.9 | 42.1±1.4 | 63.9±0.6 | | RoBERTa | 63.0±3.2 | 45.9±3.0 | 73.1±2.8 | 33.0±12.1 | 33.6±5.7 | 27.5±1.2 | 46.0±1.8 | 64.3±0.5 | | RoBERTa + C | 46.1±9.3 | 40.3±2.3 | 71.8±2.9 | 35.8±2.8 | 28.1±3.0 | 22.5±8.5 | 40.8±1.7 | 63.5±0.6 | | MT-BERT | 56.3±3.6 | 38.2±4.6 | 73.1±1.9 | 39.3±5.7 | 23.5±8.0 | 25.3±1.0 | 42.6±1.7 | 63.4±0.2 | | MT-BERT + C | 61.9±2.9 | 42.2±2.5 | 71.2±2.7 | 34.2±1.0 | 29.3±5.2 | 23.2±3.7 | 43.7±1.2 | 64.2±0.3 | ![15_image_0.png](15_image_0.png) zero-shot Label the Persuasion Strategy for the Utterances in Dialogues during Social Deduction Game. Do not hesitate to select multiple strategies if one category can not summarize the given utterance. Strategy Definition: 1. Identity Declaration: State one's own role or identity in the game 2. Accusation: Claim someone has a specific identity or strategic behavior 3. Interrogation: Questions about someone's identity or behavior 4. Call for Action: Encourage people to take an action during the game 5. Defense: Defending yourself or someone else against an accusation or defending a game-related argument 6. Evidence: Provide a body of game-related facts or information 7. No Strategy: Any sentences that do not fall into other categories are here. Clarification or discussion of game rules should also be considered "No-Strategy" Utterance: "$utterance$" Strategy: one-shot Label the Persuasion Strategy for the Utterances in Dialogues during Social Deduction Game. Do not hesitate to select multiple strategies if one category can not summarize the given utterance. Strategy Definition: [same as above] Utterance: "No, but in order to find it, I had to really tap around to find it." Strategy: Defense, Evidence Utterance: "$utterance$" Strategy: five-shot Label the Persuasion Strategy for the Utterances in Dialogues during Social Deduction Game. Do not hesitate to select multiple strategies if one category can not summarize the given utterance. Strategy Definition: [same as above] Utterance: "I'll just come out and say I was a villager, so I have no idea what's going on." Strategy: Identity Declaration Utterance: "So James might be the werewolf." Strategy: Accusation Utterance: "Did anybody do any swapping? Anybody willing to fess up to anything about swapping?" Strategy: Interrogation, Call for Action Utterance: "No, but in order to find it, I had to really tap around to find it." Strategy: Defense, Evidence Utterance: "Okay. 
Good point." Strategy: No Strategy Utterance: "$utterance$" Strategy: Table 8: Prompt templates used for GPT-3, the variable within dollars is to be replaced with the corresponding value. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. Did you discuss any potential risks of your work? Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3; 4; Appendix C ✓ B1. Did you cite the creators of artifacts you used? 3.1; 4.2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 3.1; 4.2 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 3.1; 4.2 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 3 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3; 5 ## C ✓ **Did You Run Computational Experiments?** 4; 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4.2; 4.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4.4; 5.2; Appendix D E F ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4.4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4.2; 4.3 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 3.2 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix B C ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? 3.2 ✓ D3. 
Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? 3 ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? 3.1 ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 3.2
knowles-larkin-2023-long
Long to reign over us: A Case Study of Machine Translation and a New Monarch
https://aclanthology.org/2023.findings-acl.412
Novel terminology and changes in terminology are often a challenge for machine translation systems. The passing of Queen Elizabeth II and the accession of King Charles III provide a striking example of translation shift in the real world, particularly in translation contexts that have ambiguity. Examining translation between French and English, we present a focused case-study of translations about King Charles III as produced both by publicly-available MT systems and by a neural machine translation system trained specifically on Canadian parliamentary text. We find that even in cases where human translators would have adequate context to disambiguate terms from the source language, machine translation systems do not always produce the expected output. Where we are able to analyze the training data, we note that this may represent artifacts in the data, raising important questions about machine translation updates in light of real world events.
# Long To Reign Over Us: A Case Study Of Machine Translation And A New Monarch Rebecca Knowles and **Samuel Larkin** National Research Council Canada {Rebecca.Knowles, Samuel.Larkin}@nrc-cnrc.gc.ca ## Abstract Novel terminology and changes in terminology are often a challenge for machine translation systems. The passing of Queen Elizabeth II and the accession of King Charles III provide a striking example of translation shift in the real world, particularly in translation contexts that have ambiguity. Examining translation between French and English, we present a focused case-study of translations about King Charles III as produced both by publicly-available MT systems and by a neural machine translation system trained specifically on Canadian parliamentary text. We find that even in cases where human translators would have adequate context to disambiguate terms from the source language, machine translation systems do not always produce the expected output. Where we are able to analyze the training data, we note that this may represent artifacts in the data, raising important questions about machine translation updates in light of real world events. ## 1 Introduction With the passing of Queen Elizabeth II on September 8, 2022, King Charles III became the first King of Canada in over 70 years. Given official bilingualism (English and French) in Canada, this raised a natural question of how machine translation (MT) systems - particularly those trained on data collected from Canadian government sources, which forms a disproportionately large amount of publicly available data for this language pair (Bowker and Blain, 2022) - might perform on terminology related to the new sovereign. We hypothesized that systems trained on relatively recent parliamentary text might produce errors due to both linguistic features of French and English as well as the paucity of references to kings in the training data. We expand on this, showing that not only is this the case for MT systems trained solely on Canadian parliamentary data; these errors also appear (albeit less 6589 frequently) in the output of large publicly available MT systems. In this work we will distinguish between *errors*, where context (and world knowledge of the two sovereigns in question) would be sufficient for a human translator to translate correctly, and other potential artifacts of the data where there is insufficient context at the sentence level to translate unambiguously. This work can be viewed as a narrowly-focused miniature challenge set (Isabelle et al., 2017), aiming to examine a specific intersection of MT challenges through a recent known example: world knowledge (or lack thereof) and changes in the state of the world, dataset imbalances, a subset of the different ways in which grammatical gender and the pronouns and inflections used for the referent affect translation for this language pair, and asymmetries in translation ambiguity.1 By keeping this tight focus, we are able to point out some areas in which MT is not yet "solved," even for this highly-resourced language pair. On the other hand, this tight focus on both the language pair and the specific case of text about these two monarchs limits the scope of what this work addresses; we provide a brief discussion of more general related work in the following section and additional notes in the Limitations section. 
## 2 Related Work How to incorporate (new or updated) terminology into MT has long been an area of interest, from compound noun splitting and subword models (Koehn and Knight, 2003; Sennrich et al., 2016) to rapidly incorporating terminology from external sources like lexicons or dictionaries (Arthur et al., 2016; Kothur et al., 2018). Recently, there has been a focus on handling novel terminology resulting from the COVID-19 pandemic, including a shared task (Alam et al., 2021), the release of 1We release the annotated data as supplementary material. targeted datasets (Anastasopoulos et al., 2020), and evaluations of MT performance on related terminology (Bowker and Blain, 2022). There has been work on bias, imbalance, and gender-inclusivity in coreference resolution (Rudinger et al., 2018; Zhao et al., 2018; Cao and Daumé III, 2020), on linguistic gender in MT (Vanmassenhove et al., 2018), on incorporating coreference into MT to improve pronoun and gender inflection translation (Miculicich Werlen and PopescuBelis, 2017; Saunders et al., 2020), and on benchmarks for and analysis of gender in MT (Currey et al., 2022; Savoldi et al., 2021). There has also been analysis of and attempts to mitigate language pair asymmetries in linguistically conveyed information, such as by incorporating factors (Koehn, 2005; Avramidis and Koehn, 2008; Mager et al., 2018). Here, while some of our examples might benefit from such approaches, many would require additional context beyond the sentence. The topic of additional context in MT and its evaluation remains an open area (Tiedemann and Scherrer, 2017; Junczys-Dowmunt, 2019; Castilho et al., 2020), and within this realm there has been work specifically done on anaphora resolution (Voita et al., 2018). ## 3 Linguistic And Grammatical Notes In French, nouns are grammatically classed as masculine or feminine, and adjectives, articles, and determiners take inflected forms that agree with the nouns in terms of number and grammatical gender. The noun *Majesté* (majesty) is feminine (f). The form of address *Sa Majesté* on its own is ambiguous to translate into English, as the feminine form of the third person singular possessive determiner Sa agrees with the feminine noun *Majesté*, without regard to the specific referent. Depending on the referent's pronouns, Sa could be correctly translated as various singular third person pronouns such as Her, His, or *Their* (singular; for plural Their, the French source would be *Leurs Majestés*). Without additional context, like the sovereign's title and name, we expect current MT systems to almost always produce *Her Majesty* as a translation, due to the preponderance of that translation in the data. The question arises: will MT systems use information about words like King/Roi or the frequency with which the name Charles is associated with masculine pronouns to produce translations like His Majesty King Charles III? We anticipate more translation errors in the French–English translation direction, but examine both translation directions. Table 1 illustrates five cases into which the examples in our data fall. In case A, a pair of words is unambiguously translated in either translation direction within this domain, such as *Reine* and Queen. Sometimes French has two forms of a noun like *souverain* (m)/*souveraine* (f) but English only has one unmarked form, sovereign, making the translation unambiguous in the French to English direction only (case B). 
In case C, the translation from English is unambiguous both because Sa is used for either He or She in our data and because its translation is governed by the grammatical gender of the noun *Majesté*, and does not depend on the referent. As described earlier, the reverse (case D) requires additional context when translating from French into English (due to the agreement between the possessive determiner and the grammatical gender of the noun in French, and the selection of the English pronoun based on the referent). The reverse direction of case B is case E, where additional context is required to translate the English word sovereign into French.2

2This highlights a subtle distinction between the last four cases. Those using the example of sovereign have a noun whose linguistic gender marking is selected based on the referent, whereas in the case of B and E, the English pronoun is selected based on the referent but the French determiner is selected based on the grammatical gender of the noun; e.g., if you wanted to describe "her path", the choice to use the translation *voie* (f) or *chemin* (m) would determine whether to translate her as sa or son, respectively.

A. *Reine*/Queen bidirectionally unambiguous translation (EN↔FR)
B. *souverain(e)* unambiguously translated as sovereign (FR→EN)
C. His Majesty unambiguously translates as *Sa Majesté* (EN→FR)
D. *Sa Majesté* requires context for Sa, e.g., *Sa Majesté la Reine* (Her Majesty the Queen)
E. sovereign requires context to translate as *souverain* (m) or *souveraine* (f)

Table 1: Examples of unambiguous translations and translations that require context for disambiguation.

## 4 MT Systems

## 4.1 Online Systems

We used MT output from two publicly available translation tools, Google Translate (https://translate.google.com/)3 and Bing Translator (https://www.bing.com/translator). For the latter, we specify "French (Canada)". We do not know if they have been updated since September 8, 2022. All translations were re-run on January 13, 2023, to use recent versions.

3Google Translate offers (binary) gender-specific translations in some language pairs for some sentences (Johnson, 2020); while we did not test this for all sentences in our set, most did not appear to offer these options, even when it would be appropriate to do so (likely due to length/complexity).

## 4.2 Internal

We also use French-English (FR-EN) and English-French (EN-FR) MT systems trained on data from the Canadian Hansard (House of Commons), which we refer to as Internal. We trained Transformer models (Vaswani et al., 2017) using Sockeye (Hieber et al., 2018) version 2.3.14 on over 5.6 million lines of text drawn from sessions 39-1 (2006) to 43-2 (2021),4 all predating the accession of the new sovereign. These systems were built for other projects, and were only used for decoding (no additional training was performed). See Appendix A for more details.

## 5 Experiments

We collect a small amount of existing parallel text from several sources: the text of the Prime Minister's statement regarding King Charles III's accession to the throne, text from the Canadian Hansard (proceedings of the House of Commons), and the Royal Anthem (*God Save the Queen/King*).5 From these, we manually extract terms that vary in at least one language based on whether they would refer to Queen Elizabeth II or King Charles III. This includes pronouns/determiners, adjectives and nouns that are grammatically marked for gender, and their names and titles.
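The annotation pass described in the next paragraph begins with a simple automatic check for string matches before any manual review; a minimal, hypothetical sketch of such a check (the term lists are illustrative) is:

```python
def missing_terms(mt_output, expected_terms):
    """Return expected target-language terms that do not appear in the MT output."""
    lowered = mt_output.lower()
    return [term for term in expected_terms if term.lower() not in lowered]

print(missing_terms(
    "Her Majesty King Charles III, the sovereign of Canada.",
    ["His Majesty", "King Charles III"],
))
# -> ['His Majesty']
```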
After translation, an author of this paper annotated each term in context to mark if it had been translated as expected. This was done via first automatically checking for string matches, followed by a manual check of all examples and notes on the cases where the expected translation was not found. Table 2 shows a summary of the Hansard and Prime Minister's Announcement settings in which at least one system produced a translation error.

$$\begin{array}{l|c|c|c}&\text{Bing}&\text{Google}&\text{Internal}\\ \hline\text{EN}{\rightarrow}\text{FR PM Ann.}&24/24&23/24&24/24\\ \text{FR}{\rightarrow}\text{EN Hansard-King}&3/3&3/3&1/3\\ \text{FR}{\rightarrow}\text{EN PM Ann.}&22*/24&23/24&17/24\\ \end{array}$$

Table 2: Fraction of accurate term translations. Anthem and sets where all systems performed perfectly omitted. *In the case of FR→EN PM Announcement, Bing produces one translation that is rephrased such that a pronoun is not needed; we count this as correct.

## 5.1 Prime Minister's Announcement

The text of the prime minister's announcement on the accession to the throne is 7 lines long and contains 24 terms that we examine. Of these, 10 are bidirectionally unambiguous (e.g., "Queen Elizabeth II"). In the English to French direction another 11 are unambiguous, while the other 3 have enough context that a human translator could translate them unambiguously. In the French to English direction, another 3 are unambiguous, and the remaining 11 have sufficient context for a human translator.

In the English to French direction, across all the systems and terms, there is only one case where the correct translation is not produced: an instance of Google producing *souverain* where it ought to produce *souveraine* in a sentence that references both monarchs (see Table 3).6 As expected, it is in the French to English direction that we see the most errors. All systems perform accurately on the 13 unambiguous translations. On the 11 remaining terms that have adequate context for translation, the Bing system correctly translates 8 (also producing two instances of "Her Majesty" rather than "His," and one valid translation that is rephrased such that a pronoun is not needed), the Google system accurately translates 10 (with the same Her/His Majesty substitution), and the Internal system only accurately translates 4 (with 6 Her/His Majesty substitutions and 1 substitution of them for him).

6The Internal system produces *souveraine* twice in a row in the same sentence, but a full discussion of all types of translation errors is beyond the scope of this short paper.

English: While we continue to mourn the loss of Canada's longest-reigning **sovereign**, Her Majesty Queen Elizabeth II, we also look to the future with the proclamation of the accession of His Majesty King Charles III as Sovereign of Canada.

French: Alors que nous continuons de pleurer la perte de la **souveraine** qui a régné le plus longtemps sur le Canada, Sa Majesté la reine Elizabeth II, nous nous tournons vers l'avenir au moment de la proclamation de l'accession au trône de **Sa Majesté le roi Charles** III, **souverain** du Canada.

MT: Alors que nous continuons à pleurer la perte du plus ancien *souverain* du Canada, Sa Majesté la reine Elzabeth II [...] (Google)

MT: [...] the proclination of *Her* Majesty King Charles III, the sovereign of Canada. (Internal)

Table 3: Examples of translation errors. Terms in bold, errors in red and italics.

## 5.2 Hansard

We selected sentences from the Hansard, all of which referenced the Queen. There were 9 from the training data and 2 from held out data. Across these sentences, there are a total of 13 terms that we examine. Two of the terms are bidirectionally unambiguous to translate. In the English to French direction, the remaining 11 are all also unambiguous to translate. In the French to English direction, 10 would require additional context to guarantee translation accuracy, while 1 has sufficient context for a human translator to translate it accurately. For the two bidirectionally unambiguous translations and for the one contextually informed translation in the French to English direction, we also produce alternative versions of the same segments modified to reference King Charles III.

In translating English to French, all terms are translated correctly for both monarchs by all MT systems. In translating French to English, all translations of text about Queen Elizabeth II are correct (modulo capitalization or apostrophe differences) for all systems. All 10 of the sentences that would require additional context to guarantee translation accuracy were examples with *Sa Majesté*, and all were translated as "Her Majesty" by all three MT systems. Note that we would especially expect this to be true of the training data for the Internal MT system, since this training data had already been observed and possibly memorized by the system, but it is also the case for the one sentence with this phrase from the held out data. The one sentence where the context would have been sufficient for a human translator included the phrase *Sa Majesté* le roi Charles III; both publicly available systems handled this correctly, while the Internal system translated it as "Her Majesty King Charles III." The internal system also once left Roi untranslated. Nevertheless, these results are somewhat weakened by the fact that much of the data is from the training data for the Internal system, and may also be incorporated in the public MT systems; possibly implicating memorization.

## 5.3 Anthem

The Royal Anthem has a number of references to the Queen or King (depending on the version) as well as pronouns and (in the case of French) inflected adjectives. As song lyrics, the MT output is often adequate (the Internal system struggles the most) but not poetic. We present only the following high-level comments: when translated line by line, all systems default to masculine inflections of the adjectives, but when lines are merged to provide additional coreferent context, the adjectives are inflected to match the referent.

## 6 Discussion And Conclusions

Perhaps unlike the introduction of COVID-19 terminology (where an entire new topic or domain is rapidly introduced to the translation landscape), the accession of a new monarch may cause a shift in terminology in an existing domain, in this case one with 70 years of history.7 As we expected, ambiguous terms tend to be translated in a way that likely corresponds to the imbalance in the training data (i.e., in the feminine, as referencing Queen Elizabeth II); this also highlights the need for context (whether document-level or external) that is often required for accurate translation when there is an asymmetry in what information is (un)marked across a language pair.
Though they likely contain many Canadian translations (see Bowker and Blain (2022)), we cannot examine the public system training data, only the Internal system data. While there are thousands of mentions of the Queen in the Hansard training data, there are only hundreds of references to kings, and only 36 instances of the term "His Majesty" as compared to 882 instances of "Her Majesty". In our Internal system, an additional consequence of this is subword segmentation of words like roi: the word was fully segmented into its three characters, rather than appearing as a single token in the vocabulary, likely contributing to observed errors. We also found that even in sentences that would have adequate context for a human translator (with knowledge of the forms 7The recent terminology shift in English from Turkey to Türkiye may provide another example for study; as of May 2, 2023, Google and Bing exhibited different results when translating the country's name from French into English. of address for the two monarchs), the MT systems sometimes made errors. Without examining the inner workings of the systems, the fact that this occurred primarily in sentences with references to both monarchs leaves open the question of whether this is a problem of erroneous implicit coreference resolution, imbalance in the training data around these particular terms, or a combination of the two. Nevertheless, while accuracy in term translation is high overall, these striking errors where context ought to be sufficient serve as a warning that even in high-resource language pairs, history and data maintain a strong influence. ## Limitations This work has a narrow focus: small-scale analysis, translation between one language pair (French and English), examining terminology around two realworld public figures (whose forms of address are both highly prescribed and publicly documented),8 in a specific newsworthy event (the accession to the throne of a new king after over 70 years of data and translation about a queen). First, the scale of the analysis is quite small, so it does not examine in detail questions of frequency of errors, distributions of errors, or statistical significance. While this work raises issues that may be relevant for consideration across other language pairs, the relevance of the specific linguistic conventions discussed here will vary across language pairs, and certainly do not cover the full range of asymmetries in linguistically encoded information (see, e.g., Mager et al. (2018)). Due to the prescribed forms of address of the two monarchs in question, this work only examined translations related to a small subset of terms (e.g., "His"/"Her", *Reine*/Roi) and does not examine performance on terms used related to other individuals or to other third person singular pronouns or forms of address that could be used by a monarch. The specific circumstances (a 70 year reign of a sovereign of a country with an official bilingualism policy and this particular set of linguistic features) means that we may not expect these results to generalize to other potentially comparable scenarios. Lastly, we cannot examine the training data used for the public models, so we can only draw conclusions related to training data about the internal system. 8See, e.g., https://www.canada. 
ca/en/canadian-heritage/services/ protocol-guidelines-special-event/ styles-address.html ## Ethics Statement This work included data collection, specifically the selection of test sentences from public-facing Canadian government websites as well as the annotation of machine translation errors. This was performed by one of the authors, who reads both languages and received confirmation on French-related questions from fluent colleagues. While this work does focus on two identifiable individuals, these two individuals are public figures and the data sources that we select are official sources of public information about them (in fact, produced by governments of which they were/are the Heads of State). There is discussion in the NLP and MT literature of the harms of misgendering and of treating gender as a binary or immutable feature (Cao and Daumé III, 2020; Saunders et al., 2020). In this work, we focus on some aspects of grammatical gender that can be unrelated to an individual referent (e.g., Sa Majesté), as well as some aspects of linguistic gender that do have a tie to the referent (e.g., pronouns, inflection of adjectives). By choosing this particular case study of the accession of King Charles III after the passing of Queen Elizabeth II, this paper does focus on only two linguistic genders in French and English, because the current and past official formal forms of address of these two particular individuals are well-documented in this language pair by sources from their governments. We use the most recent available information for this, as linked in the footnote in the previous section. For a broader discussion of gender-inclusive language related to translation and this particular language pair, there are various sources on the topic,9and some of these conventions are changing. From a computational cost perspective, this paper reused existing neural MT systems (publicly available systems and internal systems) rather thank training systems from scratch, and translated a very small amount of text. ## Acknowledgements We thank our colleagues and the anonymous reviewers for their feedback on this paper. ## References Md Mahfuz Ibn Alam, Ivana Kvapilíková, Antonios Anastasopoulos, Laurent Besacier, Georgiana Dinu, Marcello Federico, Matthias Gallé, Kweonwoo Jung, Philipp Koehn, and Vassilina Nikoulina. 2021. Findings of the WMT shared task on machine translation using terminologies. In Proceedings of the Sixth Conference on Machine Translation, pages 652–663, Online. Association for Computational Linguistics. Antonios Anastasopoulos, Alessandro Cattelan, ZiYi Dou, Marcello Federico, Christian Federmann, Dmitriy Genzel, Francisco Guzmán, Junjie Hu, Macduff Hughes, Philipp Koehn, Rosie Lazar, Will Lewis, Graham Neubig, Mengmeng Niu, Alp Öktem, Eric Paquin, Grace Tang, and Sylwia Tur. 2020. TICO-19: the Translation initiative for COvid-19. arXiv:2007.01788. Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1557–1567, Austin, Texas. Association for Computational Linguistics. Eleftherios Avramidis and Philipp Koehn. 2008. Enriching morphologically poor languages for statistical machine translation. In Proceedings of ACL-08: HLT, pages 763–770, Columbus, Ohio. Association for Computational Linguistics. Lynne Bowker and Frédéric Blain. 2022. 
When French becomes Canadian French: The curious case of localizing covid-19 terms with Microsoft Translator. The Journal of Internationalization and Localization, 9(1):1–37. Yang Trista Cao and Hal Daumé III. 2020. Toward gender-inclusive coreference resolution. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 4568–4595, Online. Association for Computational Linguistics. Sheila Castilho, Maja Popovic, and Andy Way. 2020. ´ On context span needed for machine translation evaluation. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 3735– 3742, Marseille, France. European Language Resources Association. Anna Currey, Maria Nadejde, Raghavendra Reddy Pappagari, Mia Mayer, Stanislas Lauly, Xing Niu, Benjamin Hsu, and Georgiana Dinu. 2022. MT-GenEval: A counterfactual and contextual dataset for evaluating gender accuracy in machine translation. In *Proceedings of the 2022 Conference on Empirical Methods* in Natural Language Processing, pages 4287–4299, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. The sockeye neural machine translation toolkit at AMTA 2018. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 200–207, Boston, MA. Association for Machine Translation in the Americas. Pierre Isabelle, Colin Cherry, and George Foster. 2017. A challenge set approach to evaluating machine translation. In *Proceedings of the 2017 Conference on* Empirical Methods in Natural Language Processing, pages 2486–2496, Copenhagen, Denmark. Association for Computational Linguistics. Melvin Johnson. 2020. A scalable approach to reducing gender bias in google translate. https://ai.googleblog.com/2020/04/a-scalableapproach-to-reducing-gender.html. Marcin Junczys-Dowmunt. 2019. Microsoft translator at WMT 2019: Towards large-scale document-level neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 225–233, Florence, Italy. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. *CoRR*, abs/1412.6980. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of Machine Translation Summit X: Papers, pages 79–86, Phuket, Thailand. Philipp Koehn and Kevin Knight. 2003. Empirical methods for compound splitting. In 10th Conference of the European Chapter of the Association for Computational Linguistics, Budapest, Hungary. Association for Computational Linguistics. Sachith Sri Ram Kothur, Rebecca Knowles, and Philipp Koehn. 2018. Document-level adaptation for neural machine translation. In *Proceedings of the 2nd Workshop on Neural Machine Translation and Generation*, pages 64–73, Melbourne, Australia. Association for Computational Linguistics. Manuel Mager, Elisabeth Mager, Alfonso MedinaUrrea, Ivan Vladimir Meza Ruiz, and Katharina Kann. 2018. Lost in translation: Analysis of information loss during machine translation between polysynthetic and fusional languages. In *Proceedings of the Workshop on Computational Modeling* of Polysynthetic Languages, pages 73–83, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Lesly Miculicich Werlen and Andrei Popescu-Belis. 2017. 
Using coreference links to improve Spanishto-English machine translation. In *Proceedings of* the 2nd Workshop on Coreference Resolution Beyond OntoNotes (CORBON 2017), pages 30–40, Valencia, Spain. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics. Danielle Saunders, Rosie Sallis, and Bill Byrne. 2020. Neural machine translation doesn't translate gender coreference right unless you make it. In *Proceedings* of the Second Workshop on Gender Bias in Natural Language Processing, pages 35–43, Barcelona, Spain (Online). Association for Computational Linguistics. Beatrice Savoldi, Marco Gaido, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2021. Gender Bias in Machine Translation. *Transactions of the Association for Computational Linguistics*, 9:845–874. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Jörg Tiedemann and Yves Scherrer. 2017. Neural machine translation with extended context. In *Proceedings of the Third Workshop on Discourse in Machine* Translation, pages 82–92, Copenhagen, Denmark. Association for Computational Linguistics. Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003–3008, Brussels, Belgium. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008. Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264–1274, Melbourne, Australia. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. 
Association for Computational Linguistics. ## A Internal System Details We trained Transformer models (Vaswani et al., 2017) using Sockeye (Hieber et al., 2018) version 2.3.14 and cuda-10.1. We used Sockeye's default value of 6 encoder/ 6 decoder layers, 8 attention heads, a model size of 512 units with a FFN size of 2048, the Adam (Kingma and Ba, 2015) optimizer, label smoothing of 0.1 and a cross-entropywithout-softmax-output loss. The whole validation set (2000 sentences) is used during validation. We optimized for BLEU (Papineni et al., 2002) using Sockeye's default of sacreBLEU-1.4.14 (Post, 2018). Every 1000 updates, we evaluate BLEU on the validation and perform early stopping if there is no improvement after 32 checkpoints. Only sentence pairs with at most 200 tokens on both source and target side are used during training. Gradient clipping was set to absolute, the initial learning rate set to 0.0002, batch size set to 8192 tokens and we used weight tying and vocabulary sharing. Training was performed on 4 Tesla V100s, while inference used 1. During inference, the beam size is set to 5. The training data consisted of over 5.6 million lines of text drawn from sessions 39-1 (2006) to 43-2 (2021), with validation and additional held out data drawn exclusively from 43-2. Hansard text is publicly available at https://www.ourcommons.ca/ documentviewer/en/house/latest/hansard. These systems were built for other projects, and were simply used to decode the selected texts (no additional training was performed for this paper). ## B Test Data Sets The Prime Minister's statement (with link to the French version) is found at: https://pm.gc.ca/en/news/statements/2022 /09/10/statement-prime-minister -proclamation-accession-his-majesty -king-charles The segments from House of Commons were subselected from sentences available at https://www.ourcommons.ca/documentviewer /en/house/latest/hansard The Royal Anthem data is collected from https://www.canada.ca/en/canadian -heritage/services/royal-symbols-titles/ royal-anthem.html ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 1 (Introduction), 6 (Discussions and Conclusion), Limitations (unnumbered section) ✓ A2. Did you discuss any potential risks of your work? Ethics section ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and 1 (Introduction) ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Annotated Data Released. ✓ B1. Did you cite the creators of artifacts you used? 4 & 5, Appendix B ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Annotated data release provides link to terms of use regarding unofficial, non-commercial reproduction of House of Commons text. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We do discuss in the limitations section what conclusions should not be drawn from our annotated data. ✓ B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The data is public data (press release, Parliamentary text, anthem) that uniquely identifies two individual public figures. We do not anonymize it because the study focuses on the translation of those figures' titles and coreferents. We do not use any data that provides private personal information. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 1,3,5 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4,5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** We Performed Machine Translation. ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We present information about the parameters of the models used (Appendix A) but do not include full details of computational budget, as these MT systems were trained for prior unrelated work and only used here to decode an extremely small set of test sentences. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Not applicable. Not applicable; using existing MT systems and analyzing output. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. This is an extremely small-scale study. We report counts of errors. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 5. One Of The Authors Annotated The Data. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? One of the authors performed the data annotation, and did not provide self with a written set of instructions for annotation. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? One of the authors performed the data annotation. We did not provide information about the author's salary or demographic information. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? 
Not applicable. An author performed the annotation in awareness of the use of the data. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Author annotated MT errors. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? There was only one annotator (one of the authors).
shinzato-etal-2023-unified
A Unified Generative Approach to Product Attribute-Value Identification
https://aclanthology.org/2023.findings-acl.413
Product attribute-value identification (PAVI) has been studied to link products on e-commerce sites with their attribute values (e.g., ⟨Material, Cotton⟩) using product text as clues. Technical demands from real-world e-commerce platforms require PAVI methods to handle unseen values, multi-attribute values, and canonicalized values, which are only partly addressed in existing extraction- and classification-based approaches. Motivated by this, we explore a generative approach to the PAVI task. We finetune a pre-trained generative model, T5, to decode a set of attribute-value pairs as a target sequence from the given product text. Since the attribute value pairs are unordered set elements, how to linearize them will matter; we, thus, explore methods of composing an attribute-value pair and ordering the pairs for the task. Experimental results confirm that our generation-based approach outperforms the existing extraction and classification-based methods on large-scale real-world datasets meant for those methods.
## A Unified Generative Approach To Product Attribute-Value Identification Keiji Shinzato Rakuten Institute of Technology, Rakuten Group, Inc. keiji.shinzato@rakuten.com Yandi Xia Rakuten Institute of Technology, Rakuten Group, Inc. yandi.xia@rakuten.com ## Abstract Product attribute-value identification (PAVI) has been studied to link products on e-commerce sites with their attribute values (*e.g.*, ⟨Material, Cotton⟩) using product text as clues. Technical demands from real-world e-commerce platforms require PAVI methods to handle unseen values, multi-attribute values, and canonicalized values, which are only partly addressed in existing extraction- and classification-based approaches. Motivated by this, we explore a generative approach to the PAVI task. We finetune a pre-trained generative model, T5, to decode a set of attribute-value pairs as a target sequence from the given product text. Since the attributevalue pairs are unordered set elements, how to linearize them will matter; we, thus, explore methods of composing an attribute-value pair and ordering the pairs for the task. Experimental results confirm that our generation-based approach outperforms the existing extractionand classification-based methods on large-scale real-world datasets meant for those methods. ## 1 Introduction Since organized product data play a crucial role in serving better product search and recommendation to customers, product attribute value identification (PAVI) has been a core task in the e-commerce industry. For attributes pre-defined by e-commerce sites, the task aims to link values of those attributes to products using product titles and descriptions as clues (Figure 1). For example, from the title "D&G Cotton piqué polo shirt Designed and manufactured in Italy," models are required to return a set of possible attribute-value pairs, namely {⟨Brand, Dolce & Gabbana⟩, ⟨Material, *Cotton*⟩, ⟨Country of origin, *Italy*⟩, ⟨Country of design, *Italy*⟩}. In the literature, PAVI has been addressed basically by extraction from the product text by using named entity recognition (Probst et al., 2007; Naoki Yoshinaga Institute of Industrial Science, The University of Tokyo ynaga@iis.u-tokyo.ac.jp ## Wei-Te Chen Rakuten Institute of Technology, Rakuten Group, Inc. weite.chen@rakuten.com ![0_image_0.png](0_image_0.png) Figure 1: Overview of our generative approach for PAVI; it takes product text to return a set of attribute-value pairs. In this example, the model generates *Dolce &* Gabbana as a brand, which is a canonicalized form of D&G, and two attributes have the entity *Italy* as values. Wong et al., 2008; Putthividhya and Hu, 2011; Bing et al., 2012; Shinzato and Sekine, 2013; More, 2016; Zheng et al., 2018; Rezk et al., 2019; Karamanolakis et al., 2020; Zhang et al., 2020) or question answering (Xu et al., 2019; Wang et al., 2020; Shinzato et al., 2022; Yang et al., 2022). However, since PAVI requires canonicalized values rather than raw value strings in the product text, some researchers have started to solve PAVI as classification (Chen et al., 2022; Fuchs and Acriche, 2022). To adopt PAVI models in real-world e-commerce platforms, there are the following challenges. Unseen values. Since values can be entities such as *brands*, models need to identify values unseen in the training data (Zheng et al., 2018). Since the classification-based approach assumes a predefined set of target classes (attribute-value pairs), it cannot handle such unseen attribute-value pairs. Multi-attribute values. 
When values can be associated with multiple attributes (e.g., *Italy* in Figure 1), models need to identify multiple attributes for a single value string in the text. To address this, the extraction-based approach must solve nested named entity recognition (Wang et al., 2020). 6599 | Approach | Unseen | Multi | Canon | |-------------------|----------|-----------|---------| | Extraction | Support | Partially | Not | | Classification | Not | Support | Support | | Generation (ours) | Support | Support | Support | Canonicalized values. E-commerce vendors need attribute values in the canonical form (*e.g.,* Dolce & Gabbana for D&G) in actual services such as faceted product search (Chen et al., 2022). The extraction-based approach needs a further step to canonicalize extracted raw value strings (Putthividhya and Hu, 2011; Zhang et al., 2021). Motivated by the shortcomings of the existing approaches to PAVI (Table 1), we propose to cast PAVI as sequence-to-set generation, which can handle all the challenges by using canonicalized attributevalue pairs for training (Figure 1). We expect that 1) generation can decode unseen values by considering corresponding values in the input, 2) generation can decode the same string in the input multiple times as values for different attributes, and 3) generation can learn how to canonicalize raw strings in input. We finetune the pre-trained generative model T5 (Raffel et al., 2020) to autoregressively decode a set of attribute-value pairs from the given text. As discussed in (Vinyals et al., 2016; Yang et al., 2018; Madaan et al., 2022), the output order will matter to decode sets as a sequence. We therefore explore methods of composing an attribute-value pair and ordering the pairs for the task. We evaluate our generative framework on two real-world datasets, MAVE (Yang et al., 2022) and our in-house product data. The experimental results demonstrate that our generation-based approach outperforms extraction- and classification-based methods on their target datasets. Our contribution is as follows. - We have solved the product attribute-value identification task as a sequence-to-set generation for the first time. - We revealed the effective order of attributevalue pairs for the T5 model among various ordering schemes (Table 2). - We provided the first comprehensive comparison among extraction-, classification-, and generation-based models on two real-world PAVI datasets, and empirically confirmed that the generation-based models outperformed the others (Table 6) while addressing all challenges in PAVI (Tables 9, 11 and 12). ## 2 Related Work Product Attribute-Value Extraction Traditionally, a myriad of previous studies formulated PAVI as named entity recognition (NER) (Probst et al., 2007; Wong et al., 2008; Putthividhya and Hu, 2011; Bing et al., 2012; Shinzato and Sekine, 2013; More, 2016; Zheng et al., 2018; Rezk et al., 2019; Karamanolakis et al., 2020; Zhang et al., 2020). However, since the number of attributes in real-world e-commerce sites can exceed ten thousand (Xu et al., 2019), the NER-based models suffer from the data sparseness problem, which makes the models perform poorly. While the extraction-based approach can identify unseen values in the training data, it cannot canonicalize values by itself and is difficult to handle overlapping values, although nested NER (surveyed in Wang et al. (2022)) can remedy the latter issue. 
To mitigate the data sparseness problem, some studies leveraged QA models for the PAVI task (Xu et al., 2019; Wang et al., 2020; Yang et al., 2022; Shinzato et al., 2022), by assuming the target attribute for extraction as additional input. These QA-based approaches take an attribute as *query* and product text as *context*, and extract attribute values from the context as *answer* for the query. Similar to the traditional NER-based models, these extractive QA-based models do not work for canonicalized values. To improve the ability to find unseen values, Roy et al. (2021) generated a value for the given product text and attribute. However, we need to apply these QA-based models to the same context with each of thousands of attributes, unless comprehensive attribute taxonomy is designed to narrow down possible attributes; such taxonomy is not always available and is often imperfect, as investigated by Mao et al. (2020) for Amazon.com. Product Attribute-Value Identification as Classification Chen et al. (2022) solved PAVI as multilabel classification (MLC), assuming attribute-value pairs as target labels. One of the problems in this approach is that the distribution between positive and negative labels is heavily skewed because the number of possible attribute values per product is much smaller than the total number of attribute values. To alleviate the imbalanced label prob- | Ordering | Attribute-value pairs placed in the target sequence | |--------------|-----------------------------------------------------------------------------------| | Rare-first | Material [SEPav ] Nylon [SEPpr ] Color [SEPav ] Red [SEPpr ] Color [SEPav ] White | | Common-first | Color [SEPav ] White [SEPpr ] Color [SEPav ] Red [SEPpr ] Material [SEPav ] Nylon | | Random | Color [SEPav ] Red [SEPpr ] Material [SEPav ] Nylon [SEPpr ] Color [SEPav ] White | Table 2: Example of attribute-value pair ordering with the attribute-then-value composition. We assume that the frequency of the pairs is ⟨Color, White⟩ > ⟨Color, Red⟩ > ⟨Material, Nylon⟩. lem, they introduced a method called label masking to reduce the number of negative labels using an attribute taxonomy designed by the e-commerce platform. To mitigate the extreme multi-class classification, Fuchs and Acriche (2022) decomposed the target label, namely attribute-value pair, into two atomic labels, attribute and value, to perform a hierarchical classification. Although these classification-based approaches support canonicalized values and multi-attribute values, they cannot handle unseen values. In this study, we adopt a generative approach to return a set of attribute-value pairs from given product data, and empirically compare it with the above two approaches. Our approach can be applied to the task settings adopted by the QA-based models, by simply feeding one (or more) target attributes as additional input (*e.g.*, title [SEP] description [SEP] attributes) to decode their values in order. ## 3 Proposed Method As mentioned above, previous studies formalize PAVI as either sequence tagging or multi-label classification problems. These approaches do not address all the challenges derived from real-world e-commerce sites at the same time (Table 1). We thus propose a unified generative framework that formalizes PAVI as a sequence-to-set problem. Let us denote x = {x1, x2*, . . . , x*n} as product data (title and description) where n is the number of tokens in x. 
Given product data x, the model is trained to return a set of attribute-value pairs y = {⟨a1, v1⟩,⟨a2, v2⟩*, . . . ,*⟨ak, vk⟩} for x, where k is the number of attribute-value pairs associated with the product; ai = {a1, a2*, . . . , a*mi} and vi = {v1, v2*, . . . , v*li} are corresponding attribute and value.1 mi and li are the numbers of tokens in ai and vi, respectively. As the backbone of our approach, we employ T5 (Raffel et al., 2020), a pre-trained generative model based on Transformer (Vaswani et al., 2017) that maps an input sequence to an output sequence. The key issue in formulating the PAVI task as sequence-to-sequence generation is how to linearize a set of attribute-value pairs into a sequence. Firstly, we should consider how to associate attributes and their corresponding values in the output sequence. Secondly, the autoregressive generation decodes output tokens (here, attributes and values) one by one conditioned on the previous labels. Thus, if specific (or informative) tokens are first decoded, it will make it easy to decode the remaining tokens. However, due to the exposure bias, decoding specific (namely, infrequent) tokens are more likely to fail. To address the challenge, we decompose the issue on linearization into two subproblems on how to compose an attribute-value pair and how to order attribute-value pairs. In what follows, we will describe these subproblems. ## 3.1 Composition Of Attribute-Value Pair We consider the following ways to compose an attribute-value pair.2In both ways, attributes and values are separated by a special token [SEPav ]. Attribute-then-value, ⟨**A, V**⟩ Attribute is placed, and then its value (*e.g.,* Color [SEPav ] White). In general, the vocabulary size of attributes is much smaller than that of values. Thus, models will be easier to decode attributes than values. Value-then-attribute, ⟨**V, A**⟩ Value is placed, and then its attribute (*e.g.,* White [SEPav ] Color). This will be effective when the target values appear as raw strings in the given text and are easier to decode than attributes. ## 3.2 Ordering Of Attribute-Value Pairs In this work, we design three different types of the attribute-value pair ordering (Table 2). We use a special token [SEPpr ] as a separator between pairs. 2We have also attempted to generate all attributes prior to values (namely, a1[SEPpr ] *. . .* ak[SEPav ]v1[SEPpr ] *. . .* vk) or vice versa; this unpaired generation slightly underperformed the paired generation used here. | MAVE | In-House Product Data | | | | | | |--------------------------------------------------------|-------------------------|---------|---------|-----------|---------|---------| | Train | Dev. | Test | Train | Dev. 
| Test | | | The number of examples | 640,000 | 100,000 | 290,773 | 640,000 | 100,000 | 100,000 | | without values | 150,412 | 23,220 | 67,936 | 0 | 0 | 0 | | The number of distinct attributes | 693 | 660 | 685 | 1,320 | 1,119 | 1,123 | | The number of distinct attribute values | 54,200 | 21,734 | 37,092 | 13,328 | 8,402 | 8,445 | | The number of distinct attribute-value pairs | 63,715 | 25,675 | 43,605 | 14,829 | 9,310 | 9,356 | | The number of attribute-value pairs | 1,594,855 | 249,543 | 722,130 | 2,966,227 | 463,463 | 462,507 | | with unseen values | 0 | 4,667 | 13,578 | 0 | 443 | 491 | | with multi-attribute (or nested) values | 134,290 | 20,832 | 60,832 | 103,727 | 16,280 | 15,843 | | whose values appear as raw strings in the product text | 1,594,855 | 249,543 | 722,130 | 1,340,043 | 210,181 | 207,997 | | The average number of subwords per example (input) | 253.73 | 253.73 | 253.56 | 357.87 | 359.89 | 356.93 | | The average number of subwords per example (output) | 10.35 | 10.39 | 10.32 | 46.43 | 46.44 | 46.31 | | The average number of attributes per example | 1.64 | 1.64 | 1.64 | 3.24 | 3.25 | 3.23 | | The average number of values per example | 2.25 | 2.25 | 2.25 | 4.62 | 4.62 | 4.61 | | The average number of subwords per attribute | 2.84 | 2.82 | 2.85 | 4.77 | 4.72 | 4.69 | | The average number of subwords per value | 4.15 | 3.46 | 3.81 | 4.09 | 3.96 | 3.93 | Rare-first Specific attribute values (*e.g.*, brands) can help models decode other attribute values. For example, since *Levi's* has many products made of denim, it is easy to decode the material if *Levi's* is decoded in advance. Meanwhile, since there are many brands that have products made of denim, decoding denim as a material in advance is useless to decode the brands. To capture this inter-value dependency, we assume a correlation between the frequency and specificity of attribute-value pairs, and place attribute-value pairs to the target sequence in rare-first ordering of attribute-value pair frequency calculated from the training data. The attributevalue pairs with the same ranking will be placed randomly for this and following ordering. Common-first When the model autoregressively decodes outputs, intermediate errors affect future decoding. Thus, it is important to decode from confident attribute-value pairs. Since models will be easier to decode attribute-value pairs that have more training examples, we place attribute-value pairs to the target sequence in the common-first ordering of attribute-value pair frequency. This approach is adopted by Yang et al. (2018) in solving multi-label document classification as generation. Random To see whether the orders matter, we randomly sort attribute-value pairs in the target sequence; more precisely, we collect, uniquify, and shuffle attribute-value pairs taken from all training examples, and sort the pairs in each example according to the obtained order of the pairs. If this random ordering shows inferior performance against the above orderings, we can conclude output orders matter in this task. ## 4 Experiments We evaluate our generative approach to PAVI using two real-world datasets. In the literature, different types of approaches are rarely compared due to the proprietary nature of codes and datasets in this task. We thus compare our generation-based model with extraction- and classification-based models, all of which are based on public pre-trained models, using not only in-house but also public datasets. 
## 4.1 Datasets We used MAVE (Yang et al., 2022) 3and our inhouse product data for experiments. The MAVE dataset is designed to evaluate the extraction-based PAVI models, while the in-house dataset is designed to evaluate classification-based models (Table 3). MAVE dataset compiles the product data taken from Amazon Review Data (Ni et al., 2019). The dataset contains various kinds of products such as shoes, clothing, watches, books, and home decor decals. Each example consists of product titles and descriptions, attribute, value, and span of the attribute value. To construct such tuples, Yang et al. (2022) trained five AVEQA models (Wang et al., 2020) using a large amount of silver data where attribute values were annotated using manually tailored extraction rules. Then, they applied the trained models to the Amazon Review Data in order to detect spans of values corresponding to attributes given to the models. To produce attribute value spans with high precision, they chose only attribute values that all five models extracted 3https://github.com/google-research-datasets/ MAVE | Title | Description | (original attribute-value info.) | Attribute-value pairs | |---------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------| | Chicago Blackhawks | Chicago Blackhawks pet jersey - | | | | Pet Dog Hockey Jersey LARGE | size LARGE. This great-looking jersey features screened-on logos on the sleeves and screened-on team name/number on the back. | ⟨ Type, Jersey, 0, 34, 40 ⟩, ⟨ Type, jersey, 1, 23, 29 ⟩, ⟨ Type, jersey, 1, 63, 69 ⟩, ⟨ Clothing Type, Jersey, 0, 34, 40 ⟩, ⟨ Clothing Type, jersey, 1, 23, 29 ⟩, ⟨ Clothing Type, jersey, 1, 63, 69 ⟩, ⟨ Special use, None ⟩ | ⟨ Type, Jersey ⟩, ⟨ Type, jersey ⟩, ⟨ Clothing Type, Jersey ⟩, ⟨ Clothing Type, jersey ⟩, ⟨ Special use, None ⟩ | | Northwave | [north | | | | wave] | Espresso | | | | Original Red Men's / Women's / Sneakers 25 - 27cm | Product description. These sneakers are the perfect accent for your feet and come in a soft red color. The sole is made of lightweight rubber to reduce weight. It is a popular color. | ⟨ Shoe size (cm), 25.0 ⟩, ⟨ Shoe size (cm), 26.0 ⟩, ⟨ Shoe size (cm), 27.0 ⟩, ⟨ Color, Red ⟩ | ⟨ Shoe size (cm), 25.0 ⟩, ⟨ Shoe size (cm), 26.0 ⟩, ⟨ Shoe size (cm), 27.0 ⟩, ⟨ Color, Red ⟩ | (positive). In addition, if no span is extracted from either model, and there is no extracted span from the extraction rules, they consider that there are no values for the attributes (negative); refer to Table 4 for example product data. As a result, MAVE consists of 2,092,898 product data for training and 290,773 product data for testing. Similar to Yang et al. (2022), to make the training faster, we randomly selected 640,000 and 100,000 product data as the training and development sets from the original training data, respectively. We used the test data in MAVE for our evaluation as it is. 
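Before turning to the in-house data, it may help to make the target-sequence construction of Sections 3.1 and 3.2 concrete on an example of this kind. The sketch below linearizes a set of attribute-value pairs with the attribute-then-value composition and rare-first ordering, and parses a decoded sequence back into a set of pairs. The separator tokens stand in for the paper's [SEPav] and [SEPpr]; the function names, token spellings, and toy frequency table are illustrative assumptions rather than the authors' implementation.

```python
from typing import Dict, Set, Tuple

# Stand-ins for the paper's [SEPav] and [SEPpr] special tokens (spelling assumed).
SEP_AV = "[SEP_av]"  # separates an attribute from its value
SEP_PR = "[SEP_pr]"  # separates attribute-value pairs

def linearize(pairs: Set[Tuple[str, str]],
              train_freq: Dict[Tuple[str, str], int]) -> str:
    """Attribute-then-value composition with rare-first ordering by training frequency."""
    ordered = sorted(pairs, key=lambda pair: train_freq.get(pair, 0))  # rare pairs first
    return f" {SEP_PR} ".join(f"{attr} {SEP_AV} {value}" for attr, value in ordered)

def parse(sequence: str) -> Set[Tuple[str, str]]:
    """Recover a set of attribute-value pairs from a decoded target sequence."""
    pairs = set()
    for chunk in sequence.split(SEP_PR):
        if SEP_AV in chunk:
            attr, value = chunk.split(SEP_AV, 1)
            pairs.add((attr.strip(), value.strip()))
    return pairs

# Toy example following Table 2: training frequency White > Red > Nylon.
freq = {("Color", "White"): 300, ("Color", "Red"): 120, ("Material", "Nylon"): 40}
gold = {("Color", "White"), ("Color", "Red"), ("Material", "Nylon")}

target = linearize(gold, freq)
# "Material [SEP_av] Nylon [SEP_pr] Color [SEP_av] Red [SEP_pr] Color [SEP_av] White"
assert parse(target) == gold
```

At inference time, a parse step of this kind would turn the decoded sequence (e.g., the beam-search output of the finetuned T5 model) into the predicted set of attribute-value pairs that the evaluation in Section 4.4 scores.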
In-House Product Data is taken from our ecommerce platform, Rakuten,4 which sells a wide range of products such as smartphones, car supplies, furniture, clothing, and kitchenware. Each example consists of a tuple of title, description, and a set of attribute-value pairs. The sellers assign products attribute-value pairs defined in the attribute taxonomy provided by the e-commerce platform. Since both attributes and values in the taxonomy are canonicalized, there exist spelling gaps between values in the taxonomy and those in the product text (*e.g., Dolce & Gabbana* in the taxonomy and D&G in the title). For experiments, among our in-house product data with one or more attribute-value pairs, we randomly sampled 640,000, 100,000, and 100,000 product data for training, development, and testing, respectively. ## 4.2 Models We compare the following models: BERT-NER: extraction-based model. On the top of BERT, we place a classification layer that uses the outputs from the last layer of BERT as feature representations of each subword. Each subword is classified into one of the labels. We employ BILOU chunking scheme (Sekine et al., 1998; Ratinov and Roth, 2009); the total number of labels is N × 4 + 1, where N is the number of distinct attributes in the training data. We have used BERT as the backbone here because the common extractionbased baseline (Zheng et al., 2018) uses classic BiLSTM-CRF as the backbone (Huang et al., 2015) and BERT-based models outperform in QA-based models (Wang et al., 2020); BERT-NER can be a stronger and easily replicable baseline. To annotate entities in text, we referred the beginning and ending positions in tuples for MAVE, and performed a dictionary matching for our in-house dataset. If annotations are overlapped, we keep the longest token length value, and drop all other overlapping values. For multi-attribute values, we adopt the most frequent attribute-value pair. BERT-MLC: classification-based model. We put a classification layer on the top of BERT, and feed the embeddings of the CLS token to the classification layer as a representation of given text (Chen et al., 2022). The model predicts all possible attribute values from the representation through the classification layer. The total number of labels is the number of attribute values in the training data. BERT-MLC w/ Tax: the current state-of-the-art classification-based model that can be comparable with the other methods. We added to BERT-MLC the | MAVE | In-House Product Data | | | | | | | |--------------------------|-------------------------|------|----------|----------|-----------------|------|---------| | BERT-NER | BERT-MLC | T5 | BERT-NER | BERT-MLC | BERT-MLC w/ TAX | T5 | | | Training (10 epochs) | 22 | 22 | 24 × 10 | 22 | 22 | 22 | 24 × 10 | | Inference (the dev set) | 8 | 8 | 80 × 10 | 8 | 8 | 8 | 80 × 10 | | Inference (the test set) | 1.6 | 1.6 | 16 × 6 | 0.8 | 0.8 | 0.8 | 8 × 6 | | Total | 31.6 | 31.6 | 1,136 | 30.8 | 30.8 | 30.8 | 1,088 | label masking (Chen et al., 2022), which leverages the skewed distributions of attributes in training and testing, using an attribute taxonomy defined for our in-house data. Although this is the state-ofthe-art classification-based method, it **requires the** attribute taxonomy as extra supervision. Since the MAVE dataset does not provide the attribute taxonomy, we train and evaluate this model only on our in-house dataset. T5: generation-based model of ours. 
We finetune T5 on the training data obtained by each element in {Attribute-then-value, Value-then-attribute} × {Random, Rare-first, *Common-first*}. For random ordering, we create three training data with different random seeds, next train a model on each training data, and then chose the model that achieves the best micro F1 on the development set. ## 4.3 Implementations We implemented all models in PyTorch.5 We used t5-base6and sonoisa/t5-base-japanese7in Transformers (Wolf et al., 2020), both of which have 220M parameters, as the pre-trained T5 models for MAVE and our in-house data, respectively. For training and testing, we used the default hyperparameters provided with each model. We ran teacher forcing in training, and performed beam search of size four in testing. For BERT-based models, we used bert-base-cased8for MAVE, and cl-tohoku/bert-base-japanese9for our inhouse dataset, both of which have 110M parameters.10 We set 0.1 of a dropout rate to a classification layer. We use Adam (Kingma and Ba, 2015) optimizer with learning rates shown in Table 14 in Appendix. We trained the models up to 10 epochs with a batch size of 32 and chose the models that perform the best micro F1 on the development set. Computing Infrastructure We used NVIDIA DGX A100 GPU on a Linux (Ubuntu) server with a AMD EPYC 7742 CPU at 2.25 GHz with 2 TB main memory for performing the experiments. Table 5 shows GPU hours taken for the experiments. ## 4.4 Evaluation Measure Following the literature (Xu et al., 2019; Wang et al., 2020; Yang et al., 2022; Shinzato et al., 2022; Chen et al., 2022), we used micro and macro precision (P), recall (R), and F1 as metrics. We compute macro performance in attribute-basis. Since the goal of PAVI is not to detect spans of values in text but to assign attribute-value pairs to products, we pick one attribute-value pair from multiple identical attribute-value pairs in MAVE (*e.g.*, ⟨Type, jersey⟩ in Table 4). Note that we do not need this unification process for our in-house dataset because it provides unique attribute-value pairs. Since attribute values in the MAVE dataset are based on outputs from QA-based models (Wang et al., 2020) and those in our in-house data are assigned voluntarily by sellers on our marketplace, both datasets may contain some missing values. To reduce the impact of those missing attribute-value pairs, we discard predicted attribute-value pairs if there are no ground truth labels for the attributes. In the MAVE dataset, there are attributes whose values do not appear in the text (negative). For the ground truth with such no attribute values, models can predict no values (NN), or incorrect values (FPn) while for the ground truth with concrete attribute values, the model can predict no values (FN), correct values (TP), or incorrect values (FPp). 
Based on those types of predicted values, P and R | MAVE | In-House Product Data | | | | | | | | | | | | | |----------------------------------------------|-------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | Models | Micro | Macro | Micro | Macro | | | | | | | | | | | P (%) | R (%) | F1 | P (%) | R (%) | F1 | P (%) | R (%) | F1 | P (%) | R (%) | F1 | | | | extraction-based BERT-NER | 96.38 | 84.91 | 90.28 | 80.36 | 57.75 | 64.61 | 96.09 | 40.26 | 56.75 | 45.26 | 18.12 | 23.50 | | | classification-based BERT-MLC | 93.52 | 70.37 | 80.31 | 40.53 | 20.72 | 25.40 | 94.53 | 74.43 | 83.29 | 40.81 | 18.05 | 22.82 | | | BERT-MLC w/ TAX | - | - | - | - | - | - | 93.65 | 77.47 | 84.79 | 58.19 | 32.76 | 39.33 | | | generation-based (ours) T5 ⟨A, V⟩ Rare-first | 95.45 | 91.70 | 93.54 | 77.57 | 64.35 | 68.97 | 88.61 | 81.50 | 84.91 | 66.33 | 47.25 | 53.10 | | | Common-first | 95.29 | 92.16 | 93.70 | 78.26 | 66.94 | 70.63 | 85.30 | 82.83 | 84.05 | 62.10 | 41.85 | 47.49 | | | Random | 95.10 | 91.46 | 93.24 | 77.24 | 62.71 | 67.45 | 87.73 | 81.41 | 84.45 | 61.64 | 42.47 | 47.92 | | | ⟨V, A⟩ | Rare-first | 95.24 | 91.97 | 93.57 | 80.59 | 68.02 | 72.51 | 89.82 | 80.73 | 85.03 | 65.73 | 44.61 | 50.93 | | Common-first | 94.62 | 92.85 | 93.73 | 80.50 | 69.72 | 73.47 | 84.25 | 82.97 | 83.60 | 63.61 | 43.61 | 49.13 | | | Random | 95.13 | 92.04 | 93.56 | 80.56 | 67.28 | 71.83 | 88.25 | 81.41 | 84.69 | 63.06 | 42.09 | 48.10 | | are computed as follows: $$\mathbf{P}={\frac{|\mathrm{TP}|}{|\mathrm{TP}|+|\mathrm{FP}_{\mathrm{p}}|+|\mathrm{FP}_{\mathrm{n}}|}}\,,\,\mathbf{R}={\frac{|\mathrm{TP}|}{|\mathrm{TP}|+|\mathrm{FN}|}}\,.$$ The numerator is $\mathbf{P}=\mathbf{P}=\mathbf{P}/(\mathbf{P}+\mathbf{P})$. Note that F1 is computed as 2 × P × R / (P + R). Note that since there are no attributes with no values in our in-house dataset, the value of |FPn| is always 0. ## 4.5 Results Table 6 shows the performance of each model on MAVE and our in-house datasets. Our generationbased models with *-first ordering mostly outperformed the extraction- and classification-based baselines in terms of F1. 11 The differences between the **best** models and the baselines were significant (p < 0.0005) under approximate randomized test (Noreen, 1989). The higher recall of our generation-based models suggests the impact of capturing inter-value dependencies (§ 3.2). The impact of the composition of attribute-value pairs depends on whether the output values are canonicalized. On the MAVE dataset, the models with the value-then-attribute composition outperformed those with attribute-then-value composition in terms of macro F1. This is because all output values appear in the MAVE dataset. Thus, to the models, it is easier to generate values than attributes. Meanwhile, the advantage of value-thenattribute composition is smaller on our in-house 11The gap in performance may be partly attributed to the difference in the number of parameters in the base models. However, as shown in Table 1, the generation model still has the advantage that it can address the challenges in the PAVI task that the other approaches intrinsically cannot solve. dataset since there is no guarantee that the target values appear in the text as raw strings. The impact of the ordering of attribute-value pairs depends on the number of attribute-value pairs per example. On the in-house dataset, the models with rare-first ordering consistently outperformed those with common-first ordering in terms of F1. 
This result implies that decoding specific attribute-value pairs in advance is more helpful to generate general attribute-value pairs on the inhouse dataset. Meanwhile, there is no clear difference between the models with *-first orderings on the MAVE dataset, since the number of attributevalue pairs per example is small. These results confirm that the generative approach learns to flexibly perform canonicalization if it is required in the training data.12 Meanwhile, the performance of extraction- and classificationbased approaches depends on whether the attributevalue pairs are canonicalized or not. ## Quantitative Comparison Of Each Approach To see the detailed behaviors of individual approaches, we categorized the attributes in the MAVE and our in-house datasets according to the number of training examples and the number of distinct values per attribute. We divide the attributes into four accord-12To make a more lenient comparison for BERT-NER on the in-house dataset, we have also evaluated all models on attribute-value pairs in the test data whose attributes are observed in the training data of BERT-NER. On this test data, our generation-based model still outperformed the BERT-NER and BERT-MLC models; T5 (⟨V, A⟩, Rare-first) and BERT-NER show the best micro (macro) F1 of 85.75 (55.48) and 58.92 (30.17), respectively. | Models | # of distinct values (med: 19) (19, ∞) (0, 19] all | | | | | | |------------|------------------------------------------------------|-------------|-------------|-------------|-------------------------------|-----| | # training | hi | NER | 90.5 / 80.1 | 90.2 / 69.3 | 90.5 / 77.3 | | | examples | MLC | 80.7 / 40.2 | 85.5 / 34.9 | 80.8 / 38.9 | | | | (med: 268) | T5 | 93.9 / 86.9 | 94.4 / 78.2 | 93.9 / 84.7 | | | | lo | NER | 77.0 / 71.6 | 72.0 / 41.7 | 74.6 / 50.3 | | | | MLC | 18.7 / | 9.3 | 35.7 / 10.0 | 27.3 / | 9.8 | | | T5 | 81.1 / 76.7 | 79.4 / 54.8 | 80.3 / 61.1 | | | | | all | NER | 90.4 / 78.0 | 87.0 / 49.9 | 90.3 / 64.6 | | | | MLC | 80.4 / 32.6 | 78.4 / 17.4 | 80.3 / 25.4 | | | | | T5 | 93.8 / 84.4 | 91.7 / 61.7 | 93.7 / 73.5 | Models | # of distinct values (med: 3) | | | (3, ∞) | (0, 3] | all | | | | | | # training | hi | NER | 56.4 / 29.6 | 62.9 / 31.6 | 56.8 / 30.2 | | | examples | MLC | 83.5 / 34.1 | 82.1 / 44.6 | 83.4 / 37.2 | | | | (med: 44) | T5 | 85.0 / 63.2 | 87.8 / 71.8 | 85.1 / 65.7 | | | | lo | NER | 25.2 / 14.8 | 27.7 / 14.0 | 26.8 / 14.3 | | | | MLC | 5.0 / | 1.6 | 8.9 / | 3.1 | 7.4 / | 2.7 | | T5 | 44.5 / 31.3 | 47.4 / 29.9 | 46.3 / 30.4 | | | | | all | NER | 56.4 / 26.0 | 62.0 / 20.6 | 56.7 / 23.5 | | | | MLC | 83.5 / 26.3 | 80.6 / 18.8 | 83.3 / 22.8 | | | | | T5 | 84.9 / 55.5 | 86.8 / 45.7 | 85.0 / 50.9 | | | | ing to median frequency and number of values. Tables 7 and 8 list micro and macro F1 values of each approach for each category of attributes on the MAVE and our in-house datasets, respectively. From the table, we can see that T5 shows the best performance in all categories. This suggests that T5 is more robust than BERT-NER and BERT-MLC in the PAVI task. We can also observe that the performance of BERT-MLC drops significantly for attributes with a small number of training examples compared to those with a large number of training examples; the classification-based approach makes an effort to better classify more frequent attributes. Meanwhile, the performance drops of BERT-NER and T5 are more moderate than BERT-MLC, especially on the MAVE dataset. 
Moreover, we can see that T5 shows better micro F1 for attributes that have a smaller number of distinct values on our in-house dataset, whereas it shows better micro F1 for attributes that have a larger number of distinct values on the MAVE dataset. This implies that, although it is easy for the generation-based approaches to extract diverse values from text, it is still difficult to canonicalize those diverse values. ## 4.6 Analysis From the better macro F1 of T5 with *-first ordering than with random ordering, we confirmed that our generation-based models successfully capture inter-value dependencies to decode attribute-value pairs. In what follows, we perform further analysis to see if the generative approach addresses the three challenges; namely, unseen, multi-attribute (or nested), and canonicalized values (Table 1). | Models | MAVE F1 | In-House F1 | | | | | |--------------|------------|---------------|-------|-------|-------|------| | Micro | Macro | Micro | Macro | | | | | BERT-NER | 34.57 | 22.16 | 14.29 | 3.02 | | | | T5 | ⟨A, V⟩ | Rare-first | 38.21 | 27.87 | 19.44 | 5.08 | | Common-first | 37.34 | 29.02 | 17.03 | 6.55 | | | | Random | 36.65 | 27.64 | 15.94 | 5.93 | | | | ⟨V, A⟩ | Rare-first | 37.44 | 29.10 | 18.15 | 5.89 | | | Common-first | 38.19 | 31.22 | 18.61 | 6.15 | | | | Random | 36.59 | 28.98 | 12.64 | 2.34 | | | ## Can Generative Models Identify Unseen Values? To see how effective our generative models are for unseen attribute values, we compare its performance with BERT-NER on attribute-value pairs in the test data that do not appear in the training data (13,578 and 491 unseen values exist in the MAVE and in-house datasets, respectively). Table 9 shows the results. We can see that the T5 models outperform BERT-NER, especially in terms of macro F1. Although the extraction-based approach can extract unseen values, the unified generative approach works better for extracting unseen values than the extraction-based approach. ## Can Generative Models Identify Multi-Attribute values? Next, to see how effective our generative models are for identifying multi-attribute values, we compare its performance to the baselines on attribute-value pairs in the test data that appear only as multi-attribute (or nested) values in input text. The number of such values in the MAVE and our inhouse datasets is 60,832 and 15,843, respectively. Table 11 shows the results. We can see that the T5 models outperform all baselines in terms | Required processing | Attribute-value pair | Text | |-----------------------------------------|-----------------------------------------------------|-----------------------------------| | Understand structured values | ⟨Series, iPhone (Apple)⟩ | iPhone 6S iPhone Softbank... | | ⟨Chest (cm), 104 - 112⟩ | ...Size [L] Chest 110cm Length 66cm... | | | Refer to the world knowledge | ⟨Sleeve length, Long⟩ | Women's Trench Coat Dark Brown... | | ⟨Indication, Rhinitis⟩ | For runny nose, nasal congestion, sore throat,... | | | Recognize paraphrase | ⟨Material, Polyurethane⟩ | Material: PU leather / Plastic | | ⟨Compatible brand, Galaxy S8 plus⟩ | SC-03J Galaxy S8+ Galaxy... | | | Understand text | ⟨Feature, With card holder⟩ | The card slot is on the left. | | ⟨With or without casters, With casters⟩ | Table leg: Pipe, twin-wheel casters with stopper... | | Table 10: Example of canonicalization that T5 models need to perform to generate values that do not appear in text. Substrings in text that can be regarded as a clue to generate the values are in *italic*. 
| Models | MAVE F1 | In-House F1 | Models | Micro F1 | Macro F1 | | |-----------------|------------|---------------|----------|------------|---------------------------------------------------------------------------------------------------------------|-------| | Micro | Macro | Micro | Macro | BERT-MLC | 72.49 | 20.10 | | BERT-MLC w/ TAX | 73.87 | 35.12 | | | | | | T5 | ⟨A, V⟩ | Rare-first | 73.09 | 43.40 | | | | Common-first | 71.91 | 39.07 | | | | | | Random | 72.48 | 39.19 | | | | | | ⟨V, A⟩ | Rare-first | 72.93 | 40.08 | | | | | Common-first | 71.20 | 37.95 | | | | | | Random | 72.30 | 37.20 | | | | | | BERT-NER | 47.85 | 35.60 | 81.35 | 45.54 | | | | BERT-MLC | 68.79 | 24.95 | 76.43 | 30.47 | | | | BERT-MLC w/ TAX | - | - | 77.19 | 41.55 | | | | T5 | ⟨A, V⟩ | Rare-first | 75.14 | 54.30 | 79.90 | 58.44 | | Common-first | 75.31 | 53.89 | 80.16 | 55.09 | | | | Random | 74.73 | 52.48 | 80.45 | 53.50 | | | | ⟨V, A⟩ | Rare-first | 75.40 | 56.11 | 80.13 | 57.81 | | | Common-first | 75.38 | 57.07 | 80.16 | 60.83 | | | | Random | 74.97 | 54.28 | 80.18 | 56.08 | Table 12: Performance on attribute-value pairs whose values do not appear as raw strings in input text in our | | Table 12: Performance on attribute-value pairs whose values do not appear as raw strings in input text in our in-house test data. The score of BERT-NER is 0. Table 11: Performance on attribute-value pairs that can be obtained only by identifying multi-attribute values. of macro F1. Although the classification-based models can identify multi-attribute values, the generative models outperformed those models. ## Can Generative Models Identify Canonicalized values? Lastly, to verify how effective our generative models are for identifying canonicalized values, we compare its performance with BERT-MLC (w/ TAX) on 207,997 attribute-value pairs whose values do not appear as raw strings in the corresponding product text in our in-house dataset. Table 12 shows the results. The T5 models show comparable performance to and outperform the baselines in terms of micro and macro F1, respectively. To see what types of canonicalization the T5 models need to perform when the canonicalized values do not appear in the text, we manually inspect attribute-value pairs whose values do not appear in text on the development set. Table 10 exemplifies canonicalization that T5 models need to perform. From the table, we can see that the canonicalization included understanding structure in values (labels) (*e.g., iPhone* is a product of *Apple*), referring the world knowledge (*the coat* has *long sleeves*), recognizing paraphrases (PU is an abbreviation of *polyurethane*), and understanding product descriptions ("*the card slot is on* the left" entails that the product *has a card holder*). We conclude that our generative model addressed all the challenges in the PAVI task better than the other two approaches. ## 5 Conclusions We have proposed a generative framework for product attribute-value identification (PAVI), which is a task to return a set of attribute-value pairs from product text on e-commerce sites. Our model can address the challenges of the PAVI task; unseen values, multi-attribute values, and canonicalized values. We finetune a pre-trained model T5 to autoregressively decode a set of attribute-value pairs from the given product text. To linearize the set of attribute-value pairs, we explored two types of attribute-value composition and three types of the orderings of the attribute-value pairs. 
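To make the linearization concrete, the sketch below composes each pair as ⟨A, V⟩ or ⟨V, A⟩ and orders the pairs by attribute frequency (rare-first, common-first) or randomly, as described above. The separator tokens are placeholders of ours, not necessarily those used in the experiments.

```python
import random
from collections import Counter

def linearize(pairs, attr_freq, composition="AV", ordering="rare-first", seed=0):
    """pairs: set/list of (attribute, value); attr_freq: Counter of attribute
    frequencies in the training data. Separator tokens are placeholders."""
    pairs = list(pairs)
    if ordering == "rare-first":
        pairs.sort(key=lambda p: (attr_freq[p[0]], p[0]))   # least frequent attribute first
    elif ordering == "common-first":
        pairs.sort(key=lambda p: (-attr_freq[p[0]], p[0]))
    else:                                                    # "random"
        random.Random(seed).shuffle(pairs)
    if composition == "AV":                                  # <A, V>
        units = [f"{a} [SEP_AV] {v}" for a, v in pairs]
    else:                                                    # <V, A>
        units = [f"{v} [SEP_AV] {a}" for a, v in pairs]
    return " [SEP_PAIR] ".join(units)

attr_freq = Counter({"Material": 500, "Sleeve length": 20})
print(linearize({("Material", "Polyurethane"), ("Sleeve length", "Long")},
                attr_freq, composition="VA", ordering="rare-first"))
# Long [SEP_AV] Sleeve length [SEP_PAIR] Polyurethane [SEP_AV] Material
```

At test time, the decoded string would be split on the same separators to recover the predicted set; fixing a deterministic ordering during training is what allows the rare-first and common-first variants to be compared.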
Experimental results on two real-world datasets demonstrated that our generative approach outperformed the extraction- and classification-based baselines. We plan to augment the ability to decode unseen values by using a pluggable copy mechanism (Liu et al., 2021). We will evaluate our model on another PAVI setting where the target attribute(s) are given. ## 6 Limitations Since our generative approach to product attributevalue identification autoregressively decodes a set of attribute-value pairs as a sequence, the inference is slow (Table 5) and how to linearize the set of attribute-value pairs in the training data will affect the performance (Table 6). The best way of composing an attribute-value pair and ordering the pairs will depend on the characteristics of the datasets such as the existence of canonicalized values and the number of attribute-value pairs per example. Those who attempt to apply our method to their own datasets should keep this in mind. ## Acknowledgements This work (second author) was partially supported by JSPS KAKENHI Grant Number 21H03494. We thank the anonymous reviewers for their hard work. ## References Lidong Bing, Tak-Lam Wong, and Wai Lam. 2012. Unsupervised extraction of popular product attributes from web sites. In *Information Retrieval Technology*, pages 437–446, Berlin, Heidelberg. Springer Berlin Heidelberg. Wei-Te Chen, Yandi Xia, and Keiji Shinzato. 2022. Extreme multi-label classification with label masking for product attribute value extraction. In Proceedings of The Fifth Workshop on e-Commerce and NLP (ECNLP 5), pages 134–140, Dublin, Ireland. Association for Computational Linguistics. Gilad Fuchs and Yoni Acriche. 2022. Product titles-toattributes as a text-to-text task. In Proceedings of The Fifth Workshop on e-Commerce and NLP (ECNLP 5), pages 91–98, Dublin, Ireland. Association for Computational Linguistics. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, arXiv:1508.01991. Giannis Karamanolakis, Jun Ma, and Xin Luna Dong. 2020. TXtract: Taxonomy-aware knowledge extraction for thousands of product categories. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8489–8502, Online. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the third International Conference on Learning Representations, San Diego, CA, USA. Yi Liu, Guoan Zhang, Puning Yu, Jianlin Su, and Shengfeng Pan. 2021. BioCopy: A plug-and-play span copy mechanism in Seq2Seq models. In *Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing*, pages 53–57, Virtual. Association for Computational Linguistics. Aman Madaan, Dheeraj Rajagopal, Niket Tandon, Yiming Yang, and Antoine Bosselut. 2022. Conditional set generation using seq2seq models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4874–4896, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Yuning Mao, Tong Zhao, Andrey Kan, Chenwei Zhang, Xin Luna Dong, Christos Faloutsos, and Jiawei Han. 2020. Octet: Online catalog taxonomy enrichment with self-supervision. In *Proceedings of the 26th* ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, page 2247–2257, New York, NY, USA. Association for Computing Machinery. Ajinkya More. 2016. Attribute extraction from product titles in ecommerce. 
In *KDD 2016 Workshop on* Enterprise Intelligence, San Francisco, CA, USA. Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In *Proceedings* of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188–197, Hong Kong, China. Association for Computational Linguistics. Eric W. Noreen. 1989. *Computer-intensive methods for* testing hypotheses. Wiley, New York. Katharina Probst, Rayid Ghani, Marko Krema, Andrew E. Fano, and Yan Liu. 2007. Semi-supervised learning of attribute-value pairs from product descriptions. In *Proceedings of the 20th International* Joint Conference on Artificial Intelligence, IJCAI'07, pages 2838–2843, Hyderabad, India. Morgan Kaufmann Publishers Inc. Duangmanee Putthividhya and Junling Hu. 2011. Bootstrapped named entity recognition for product attribute extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1557–1567, Edinburgh, Scotland, UK. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147–155, Boulder, Colorado. Association for Computational Linguistics. Martin Rezk, Laura Alonso Alemany, Lasguido Nio, and Ted Zhang. 2019. Accurate product attribute extraction on the field. In Proceedings of the 35th IEEE International Conference on Data Engineering, pages 1862–1873, Macau SAR, China. IEEE. Kalyani Roy, Pawan Goyal, and Manish Pandey. 2021. Attribute value generation from product title using language models. In *Proceedings of The 4th Workshop on e-Commerce and NLP*, pages 13–17, Online. Association for Computational Linguistics. Satoshi Sekine, Ralph Grishman, and Hiroyuki Shinnou. 1998. A decision tree method for finding and classifying names in Japanese texts. In *Sixth Workshop* on Very Large Corpora, pages 171–178, Quebec, Canada. Keiji Shinzato and Satoshi Sekine. 2013. Unsupervised extraction of attributes and their values from product description. In *Proceedings of the Sixth International* Joint Conference on Natural Language Processing, pages 1339–1347, Nagoya, Japan. Asian Federation of Natural Language Processing. Keiji Shinzato, Naoki Yoshinaga, Yandi Xia, and WeiTe Chen. 2022. Simple and effective knowledgedriven query expansion for QA-based product attribute extraction. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 227–234, Dublin, Ireland. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30, pages 5998–6008, Red Hook, NY, USA. Curran Associates, Inc. Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. 2016. Order matters: Sequence to sequence for sets. In *The fourth International Conference on Learning* Representations, San Juan, Puerto Rico. 
Qifan Wang, Li Yang, Bhargav Kanagal, Sumit Sanghai, D. Sivakumar, Bin Shu, Zac Yu, and Jon Elsas. 2020. Learning to extract attribute value from product via question answering: A multi-task approach. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '20, pages 47–55, New York, NY, USA. Association for Computing Machinery. Yu Wang, Hanghang Tong, Ziye Zhu, and Yun Li. 2022. Nested named entity recognition: A survey. ACM Trans. Knowl. Discov. Data, 16(6):1–29. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing*, pages 38–45, Online. Association for Computational Linguistics. Tak-Lam Wong, Wai Lam, and Tik-Shun Wong. 2008. An unsupervised framework for extracting and normalizing product attributes from multiple web sites. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '08, page 35–42, New York, NY, USA. Association for Computing Machinery. Huimin Xu, Wenting Wang, Xin Mao, Xinyu Jiang, and Man Lan. 2019. Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5214–5223, Florence, Italy. Association for Computational Linguistics. Li Yang, Qifan Wang, Zac Yu, Anand Kulkarni, Sumit Sanghai, Bin Shu, Jon Elsas, and Bhargav Kanagal. 2022. MAVE: A product dataset for multi-source attribute value extraction. In *Proceedings of the Fifteenth ACM International Conference on Web Search* and Data Mining, WSDM '22, page 1256–1265, New York, NY, USA. Association for Computing Machinery. Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018. SGM: Sequence generation model for multi-label classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3915–3926, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Danqing Zhang, Zheng Li, Tianyu Cao, Chen Luo, Tony Wu, Hanqing Lu, Yiwei Song, Bing Yin, Tuo Zhao, and Qiang Yang. 2021. QUEACO: Borrowing treasures from weakly-labeled behavior data for query attribute value extraction. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management, CIKM '21, page 4362–4372, New York, NY, USA. Association for Computing Machinery. Hanchu Zhang, Leonhard Hennig, Christoph Alt, Changjian Hu, Yao Meng, and Chao Wang. 2020. Bootstrapping named entity recognition in Ecommerce with positive unlabeled learning. In Proceedings of The 3rd Workshop on e-Commerce and NLP, pages 1–6, Seattle, WA, USA. Association for Computational Linguistics. Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. OpenTag: Open attribute value extraction from product profiles. In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, KDD '18, pages 1049–1058, New York, NY, USA. Association for Computing Machinery. 
| MAVE | In-House Product Data | | | | | | | | | | | | | |----------------------------------------------|-------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | Models | Micro | Macro | Micro | Macro | | | | | | | | | | | P (%) | R (%) | F1 | P (%) | R (%) | F1 | P (%) | R (%) | F1 | P (%) | R (%) | F1 | | | | extraction-based (large) BERT-NER | 96.85 | 86.65 | 91.47 | 79.85 | 61.76 | 67.71 | 96.33 | 40.26 | 56.79 | 46.57 | 18.60 | 24.23 | | | classification-based (large) BERT-MLC | 93.19 | 51.63 | 66.45 | 17.31 | 7.40 | 9.49 | NaN | 0 | NaN | NaN | 0 | NaN | | | BERT-MLC w/ TAX | - | - | - | - | - | - | 94.36 | 77.99 | 85.40 | 52.36 | 28.01 | 33.92 | | | generation-based (ours) T5 ⟨A, V⟩ Rare-first | 95.45 | 91.70 | 93.54 | 77.57 | 64.35 | 68.97 | 88.61 | 81.50 | 84.91 | 66.33 | 47.25 | 53.10 | | | Common-first | 95.29 | 92.16 | 93.70 | 78.26 | 66.94 | 70.63 | 85.30 | 82.83 | 84.05 | 62.10 | 41.85 | 47.49 | | | Random | 95.10 | 91.46 | 93.24 | 77.24 | 62.71 | 67.45 | 87.73 | 81.41 | 84.45 | 61.64 | 42.47 | 47.92 | | | ⟨V, A⟩ | Rare-first | 95.24 | 91.97 | 93.57 | 80.59 | 68.02 | 72.51 | 89.82 | 80.73 | 85.03 | 65.73 | 44.61 | 50.93 | | Common-first | 94.62 | 92.85 | 93.73 | 80.50 | 69.72 | 73.47 | 84.25 | 82.97 | 83.60 | 63.61 | 43.61 | 49.13 | | | Random | 95.13 | 92.04 | 93.56 | 80.56 | 67.28 | 71.83 | 88.25 | 81.41 | 84.69 | 63.06 | 42.09 | 48.10 | | | Hyperparameters | BERT-NER BERT-MLC | T5 | | |----------------------------|---------------------|------|------| | Max token length (encoder) | 512 | 512 | 512 | | Max token length (decoder) | n/a | n/a | 256 | | Epoch | 10 | 10 | 10 | | Batch size | 32 | 32 | 32 | | Dropout rate (classifier) | 0.1 | 0.1 | n/a | | Learning rate | 5e-5 | 5e-5 | 3e-4 | | Weight decay | 0 | 0 | 0 | Table 14: Hyperparameters for training models. ## A Final Hyperparameters Used For Each Model Table 14 shows the hyperparameters we used for training models. Other than those, we follow the default hyperparameters of T5 6 7and BERT8 9available from the HuggingFace models. ## B Performance Of Models Using **Bert**Large Table 13 shows the performance of models when we use BERTlarge as the base model for extraction- and classification-based approaches. We adopt bert-large-cased13 for MAVE and cl-tohoku/bert-large-japanese14 for our inhouse data. From the table, we can see that training BERT-MLC did not work well on both datasets. Especially, we cannot compute the performance on our in-house data because the model did not predict any attribute-value pairs for all inputs. Although BERTlarge has a larger number of parameters (330M) than the T5 models (220M), BERT-NER based on BERTlarge still shows lower performance than our generative models on both datasets. This result means that our generative approach is more effective in the PAVI task than the extraction-based approaches based on BERT-NER. Meanwhile, BERT-MLC w/ TAX shows a slightly better micro F1 score than ours. Given that it requires an attribute taxonomy as the extra supervision and exhibits low macro F1, the generative approach is sufficiently comparable to the classification-based approach. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6. ✓ A2. Did you discuss any potential risks of your work? Section 6. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1. ✗ A4. 
Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 4 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The license or terms of the data we used is not described on the official page. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 and Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. 
Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
jo-etal-2023-k
K-{U}ni{M}orph: {K}orean {U}niversal {M}orphology and its Feature Schema
https://aclanthology.org/2023.findings-acl.414
We present in this work a new Universal Morphology dataset for Korean. Previously, the Korean language has been underrepresented in the field of morphological paradigms amongst hundreds of diverse world languages. Hence, we propose Universal Morphological paradigms for the Korean language that preserve its distinct characteristics. For our K-UniMorph dataset, we outline each grammatical criterion in detail for the verbal endings, clarify how to extract inflected forms, and demonstrate how we generate the morphological schemata. This dataset adopts the morphological feature schema from CITATION and CITATION for the Korean language as we extract inflected verb forms from the Sejong morphologically analyzed corpus, one of the largest annotated corpora for Korean. During the data creation, our methodology also includes investigating the correctness of the conversion from the Sejong corpus. Furthermore, we carry out the inflection task using three different Korean word forms: letters, syllables, and morphemes. Finally, we discuss and describe future perspectives on Korean morphological paradigms and the dataset.
# K-Unimorph: Korean Universal Morphology And Its Feature Schema Eunkyul Leah Jo1∗ Kyuwon Kim1,2∗ Xihan Wu1∗ **KyungTae Lim**3 Jungyeul Park1 **Chulwoo Park**4 1The University of British Columbia, Canada 2Seoul National University, South Korea 3SeoulTech & Teddysum, South Korea 4Anyang University, South Korea {eunkyul,wuxihan}@student.ubc.ca guwon0406@snu.ac.kr ktlim@seoultech.ac.kr jungyeul@mail.ubc.ca cwpa@anyang.ac.kr ## Abstract We present in this work a new Universal Morphology dataset for Korean. Previously, the Korean language has been underrepresented in the field of morphological paradigms amongst hundreds of diverse world languages. Hence, we propose this Universal Morphological paradigms for the Korean language that preserve its distinct characteristics. For our K-UniMorph dataset, we outline each grammatical criterion in detail for the verbal endings, clarify how to extract inflected forms, and demonstrate how we generate the morphological schemata. This dataset adopts morphological feature schema from Sylak-Glassman et al. (2015) and Sylak-Glassman (2016) for the Korean language as we extract inflected verb forms from the Sejong morphologically analyzed corpus that is one of the largest annotated corpora for Korean. During the data creation, our methodology also includes investigating the correctness of the conversion from the Sejong corpus. Furthermore, we carry out the inflection task using three different Korean word forms: letters, syllables and morphemes. Finally, we discuss and describe future perspectives on Korean morphological paradigms and the dataset. ## 1 Introduction The Universal Morphology (UniMorph) project is a collaborative effort providing broad-coverage morphological paradigms for diverse world languages (McCarthy et al., 2020; Kirov et al., 2018). UniMorph consists of a lemma and bundle of morphological features related to a particular inflected word form as follows, for example: 나서다naseoda 나섰다*naseossda* V;DECL;PST where 나서다*naseoda* is the lemma form and 나섰 다*naseossda* ('became') is the inflected form with V;DECL;PST (verb, declarative, and past tense) as morphological schema. ∗Equally contributed authors. It started in 2016 as a SIGMORPHON shared task (Cotterell et al., 2016) for the problem of morphological reinflection, and it introduced morphological datasets for 10 languages. The inflection task, using the given lemma with its part-of-speech to generate a target inflected form, has been continued through the years: CoNLL–SIGMORPHON 2017 Shared Task (Cotterell et al., 2017), CoNLL–SIGMORPHON 2018 Shared Task (Cotterell et al., 2018), SIGMORPHON 2019 Shared Task (McCarthy et al., 2019), SIGMORPHON 2020 Shared Task (Gorman et al., 2020) and SIGMORPHON 2021 Shared Task (Pimentel et al., 2021). However, the Korean language has not been a part of the shared task because of the lack of the dataset. Nonetheless, although rarely, morphological paradigms for Korean have been explored in the context of computational linguistics. Yongkyoon (1993) defined the inflectional classes for verbs in Korean using word-and-paradigm (WP) (Hockett, 1954) approaches. His fifteen classes of the verb which can be joined with seven different types of verbal endings, are based on inflected forms of the verb. Seokjoon (1999) systematized the list of final endings and their properties, which are also used as conjunctive endings in Korean. 
Otherwise, properties of verbs such as mood, tense, voice, evidentiality, interrogativity have been extensively studied in Korean linguistics independently: for example, *inter alia*, tense (Byung-sun, 2003), grammatical voice (Chulwoo, 2007), interaction of tense–aspect–mood marking with modality (Jae Mog, 1998), evidentiality (Donghoon, 2008), and interrogativity (Donghoon, 2011). In continuation of the efforts, this paper proposes a new Universal Morphology dataset for Korean. We adopt morphological feature schema from Sylak-Glassman et al. (2015) and Sylak-Glassman (2016) for the Korean language and extract inflected verb forms from the Sejong morphologically analyzed corpus over 0.6M sentences with 9.5M words. We set the criteria in detail by explaining how to extract inflected verbal forms (Section 2), and carry out the inflection task using different Korean word forms such as letter, syllable and morpheme (Section 3). Finally, we discuss future perspectives on a Korean UniMorph dataset (Section 4). ## 2 Unimorph Features Schema Verbal endings in the inflected forms of the predicate has been considered as still being in the part of the word as proposed in several grammar formalisms for Korean such as lexicalized tree adjoining grammars (Park, 2006), head driven phrase structure grammars (Ko, 2010), and combinatory categorial grammars (Kang, 2011) in contrast to government and binding (GB) theory (Chomsky, 1981, 1982) for Korean in which the entire sentence depends on separated verbal endings. This idea goes back to Maurice Gross's lexicon grammars (Gross, 1975), and his students who worked on a descriptive analysis of Korean in which the number of predicates in Korean could be fixed by generating possible inflection forms: *e.g.* Pak (1987); Nho (1992); Nam (1994); Shin (1994); Park (1996); Chung (1998); Han (2000). However, we have separated the postposition from the substantive such as noun phrases instead of keeping themselves together. Therefore, with the current Korean dataset, we decide to annotate morphological data for verbs (V). Table 1 shows the morphological schema for Korean UniMorph where we adopt features from Sylak-Glassman et al. (2015) and Sylak-Glassman (2016) for the Korean language. In addition to the features schema, we consider following these four different types of verbal endings, in which they convey grammatical meanings for the predicate: sentence final ending (ef), non-final ending (ep), conjunctive ending (ec), and modifier ending (etm). Evidentiality It is a grammatical category that reflects the source of information that a speaker conveys in a proposition. It is often expressed through morphological markers such as sentence final endings (ef) 대dae, 내nae, and 래lae bring in hearsay (HRSY), and non-final endings (ep) 겠*gess* introduce inferred (INFER). Since the suffix for the quotative (QUOT) is denoted with a postposition (jkq) in Korean instead of the verbal ending, it is excluded from the current set of schemata. Interrogativity It indicates either to express a statement (DECL) or a question (INT). We consider all sentence final ending (ef) ended with 다da as declarative DECL, and sentence final ending (ef) included 가ga and 까kka as interrogative INT. Mood The grammatical mood of a verb indicates modality on a verb by the morphological marking. Realis (REAL) and irrealis (IRR) are represented by a verbal modifier ending (also known as an adnominal ending) (etm), ㄴn and ㄹl, respectively. 
The usage of adnominal endings consists of (i) collocation such as 인한inhan, 치면*chimyeon*, 대한 daehan, (ii) modifiers and (iii) relative clauses. Realis and irrealis are concerned with regardless of modifiers or relative clauses. General purposive (PURP) is decided by 려고*lyeogo* and 하러*haleo*, and obligative (OBLIG) is introduced by 야ya. It is worthwhile to note that we do not consider indicative (IND) because we specify declarative DECL. Tense It refers to the time frame in which a verb's action or state of being occurs. Non-final endings (ep) such as 았ass and 었*eoss* and final endings (ef) such as ㄴ다nda 는다*neunda* can represent the past (PAST) and the present (PRS) tenses, repectively. Since the future tense (FUT) has been considered as irrealis (IRR) in Korean, we don't annotate it here. Voice We deduce the passive (PASS) from the verb stem instead of the verbal ending such as *jabhi* ('be caught'). Whereas the verb jab ('catch') and the passive suffix hi might be segmented, the current criteria of the Sejong corpus combines them together as a single morpheme. 이히리기i, hi, li, gi are verbal endings known for both the passive and the causative. If the verb has a verbal ending 게 ge such as verb stem+{이i|히hi|리li|기gi}+게ge {하ha|만들*mandeul* ('make')}, then it is causative (CAUS), otherwise passive (PASS). Other schema For politeness, we introduce only polite (POL) using the non-final ending (ep) 시si as the direct encoding of the speaker-addressee relationship (Brown and Levinson, 1987, p.276). Lastly, since we are not able to deduce the valency of the verb from morphemes, we do not include INTR (intransitive), TR (transitive) and DITR (ditransitive). However, we leave them for future work because the valency might still be valid morphological feature schemata for Korean. | Evidentiality | HRSY | hearsay: 일il ('work')/NNB 이i ('COP')/VCP + 래lae ('HRSY')/EF ('happen') | | | | | | | |-------------------------|-------------------------------------------------------------------------------|-------------------------------------------------------------|---------------|----|-------|------------|----|------| | INFER | inferred: 괜찮gwaenchanh ('fine')/VA + 겠gess ('INFER')/EP + 다da ('DECL')/EF | | | | | | | | | Interrogativity | DECL | declarative: 모이moi ('gather')/VV + ㄴ다nda ('DECL')/EF | | | | | | | | INT | interrogative: 배우baeu ('study')/VV + 는가neunga ('INT')/EF | | | | | | | | | Mood | REAL | realis: 얻eod ('get')/VV + 은eun ('REAL')/ETM | | | | | | | | IRR | irrealis: 잊ij ('forget')/VV + 을eul ('IRR')/ETM | | | | | | | | | PURP | general purposive: 달래dallae ('appease')/VV + 려고lyeogo ('PURP')/EC | | | | | | | | | OBLIG | obligative: | 이어지ieoji ('connect')/VV + 어야eoya ('OBLIG')/EC | | | | | | | | ('should be connected') | | | | | | | | | | Tense | PRS | present: 들리deulli + ('hear')/VV + ㄴ다nda ('PRS,DECL')/EF | | | | | | | | PST | past: | 나타나natana | ('appear')/VV | + | 았ass | ('PST')/EP | + | 다da | | ('DECL')/EF | | | | | | | | | | Voice | CAUS | causative: 보이boi ('show')/VV + 게ge ('CAUS')/EC | | | | | | | | PASS | passive: | 잡히jabhi ('be caught')/VV + 었eoss ('PAT')/EP + 다da | | | | | | | | ('DECL')/EF | | | | | | | | | Table 1: Korean UniMorph schema for verbs: vv for verb, va for adjective, vcp for copula, and nnb for bound noun, ## 3 Experimental Results 3.1 Data Creation We prepare the data by extracting inflected verb forms from the Sejong morphologically analyzed corpus (sjmorph) over 676,951 sentences with 7,835,239 eojeols (word units separated by space) which represent 9,537,029 tokens. 
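As a minimal sketch of how such ending-based criteria could be applied when extracting verb entries, the function below maps a Sejong-style morpheme analysis to a partial K-UniMorph feature bundle. It implements only a simplified subset of the rules above and in Table 1 (the passive/causative and collocation checks, for example, need more context than a single morpheme), and it is our illustration rather than the conversion code used to build the dataset.

```python
def unimorph_features(morphemes):
    """morphemes: list of (form, POS) pairs from a Sejong-style analysis,
    e.g. [("나서", "VV"), ("었", "EP"), ("다", "EF")] for 나섰다 ('became')."""
    feats = ["V"]
    for form, tag in morphemes:
        if tag == "EP":
            if form in ("았", "었"):
                feats.append("PST")        # past tense
            elif form == "겠":
                feats.append("INFER")      # inferred evidentiality
            elif form == "시":
                feats.append("POL")        # polite (honorific for the agent)
        elif tag == "EF":
            if form in ("대", "내", "래"):
                feats.append("HRSY")       # hearsay
            if form in ("ㄴ다", "는다"):
                feats.append("PRS")        # present tense
            if form.endswith("다"):
                feats.append("DECL")       # declarative
            elif "가" in form or "까" in form:
                feats.append("INT")        # interrogative
        elif tag == "ETM":
            if form in ("ㄴ", "은"):
                feats.append("REAL")       # realis
            elif form in ("ㄹ", "을"):
                feats.append("IRR")        # irrealis
        elif tag == "EC":
            if form == "려고":
                feats.append("PURP")       # general purposive
            elif form.endswith("야"):
                feats.append("OBLIG")      # obligative
    return ";".join(feats)

print(unimorph_features([("나서", "VV"), ("었", "EP"), ("다", "EF")]))
# -> V;PST;DECL (feature order is not normalized here; the dataset entry reads V;DECL;PST)
```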
We are using the same training/dev/test data split that Park and Tyers (2019) proposed for Korean part of speech (POS) tagging. However, the current sjmorph doesn't contain POS labels for the eojeol (the word). Instead, it contains the sequence of POS labels for morphemes as follows: 나섰다naseossda 나서*naseo*/VV+었*eoss*/EP+다da/EF where it contains only each morpheme's POS label: a verb 나서*naseo* ('become'), a non-final ending 었*eoss* ('PST'), and a final ending 다da ('DECL'), and it does not show whether the word 나섰다 naseossda ('became') is a verb. Previous works (Petrov et al., 2012; Park et al., 2016; Park and Tyers, 2019; Kim and Colineau, 2020) propose a partial mapping table between Sejong POS (and the sequence of Sejong POSs) (XPOS) and Universal POS (UPOS) labels where UPOS represents the grammatical category of the word. However, no study has presented the correctness of their conversion rules. Therefore, we utilize UD_Korean-GSD (McDonald et al., 2013) in Universal Dependencies (Nivre et al., 2016, 2020) that provides Sejong POS(s) and Universal POS labels for each word. Nevertheless, we observed several critical POS annotation errors in UD_Korean-GSD. For this reason, we proceeded to revise GSD's Sejong POS(s) and Universal POS to evaluate our criteria of getting verbs (inflected forms and their lemmas) from sjmorph. This approach involved randomly selecting 300 sentences from the GSD and manually revising their POS labels based on the Sejong POSs. For thorough verification, they were examined by our linguist for over 60 hours over 3 weeks. The main places of error that we noticed were how words for proper nouns were labeled as NOUN even with its XPOS of proper nouns (NNP). They were corrected to the UPOS label of PROPN. Another common place of error was how the dataset recognized and labeled words according to their roles as constituent parts of the sentence they are in, instead of the word's own category. For example, the temporal nouns was usually annotated as ADV instead of NOUN. We changed this mislabeling by acknowledging the word itself, separate from the sentence. Again, the Sejong POS labels were revised based on the criteria of the Sejong corpus. After correcting 738 words for Sejong POS labels and 705 words for Universal POS labels from 300 sentences in the development file, we trained the sequence of Sejong POS labels using semi-supervised learning to predict the Universal POS label for each word. Among 3674 predictions, there were only 332 UPOS prediction errors, and an error scarcely occurs for VERB labels, which we attempted to ex- | train | dev | test | | |-----------|---------|--------|--------| | lemma | 41,631 | 7505 | 7595 | | inflected | 197,774 | 19,251 | 27,846 | Table 2: Statistics of Korean UniMorph | Source | Target | | |--------------|---------------|-----------------| | letter (L) | ㄴㅏㅅㅓㄷㅏ | ㄴㅏㅅㅓㅆㄷㅏ | | syllable (S) | 나서다 | 나섰다 | | morpheme (M) | 나서다 | 나서었다 | | surface form | 나서다naseoda | 나섰다naseossda | tract from sjmorph. Therefore, we consider this current error rate for the verb to be negligible. Finally, we extract 244,871 inflected verbal forms for 43,959 lemma types from sjmorph. Then, we remove all duplicated items from train+dev datasets compared to the test dataset. In Table 2 is the brief statistics of the current dataset. ## 3.2 Morphological Reinflection The goal of the morphological reinflection task creates the generative function of morphological schema to produce the inflected form of the given word. 
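The three representations in Table 3 can be produced mechanically; the sketch below derives the letter and syllable forms, while the morpheme form comes directly from the sjmorph annotation. Using Unicode NFD decomposition and mapping conjoining jamo back to compatibility letters is our choice of implementation, not necessarily the preprocessing used in the experiments.

```python
import unicodedata

def letters(word):
    """Letter (L) representation: decompose Hangul syllables into jamo, 나섰다 -> ㄴㅏㅅㅓㅆㄷㅏ."""
    out = []
    for j in unicodedata.normalize("NFD", word):      # conjoining jamo (U+1100 block)
        name = unicodedata.name(j, "")
        if name.startswith(("HANGUL CHOSEONG", "HANGUL JUNGSEONG", "HANGUL JONGSEONG")):
            try:                                       # map to the familiar compatibility letters
                out.append(unicodedata.lookup("HANGUL LETTER " + name.split()[-1]))
            except KeyError:                           # rare jamo without a compatibility form
                out.append(j)
        else:
            out.append(j)
    return out

def syllables(word):
    """Syllable (S) representation: 나섰다 -> 나, 섰, 다 (one Hangul syllable per character)."""
    return list(word)

print("".join(letters("나섰다")))    # ㄴㅏㅅㅓㅆㄷㅏ
print("".join(syllables("나서다")))  # 나서다
# The morpheme (M) representation, 나서 + 었 + 다, is read directly from the sjmorph annotation.
```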
For Korean, we use 나서다*naseoda* and V;DECL;PST to predict 나섰다*naseossda* by using the composition of alphabet letters (L), syllables (S) and morphemes (M) of the word as shown in Table 3. The word is decomposed into the sequence of consonants and vowels by Letter, the sequence of units constructed with two or three letters by syllable, and the sequence of morphological units by morpheme. The conversion from the target form of each representation to the surface form and vice versa are straightforward in technical terms. For our task, we use the baseline system from The CoNLL–SIGMORPHON 2018 Shared Task (Cotterell et al., 2018).1 The system uses alignment, span merging and rule extraction to predict the set of all inflected forms of a lexical item (Durrett and DeNero, 2013). We also build a basic neural model using fairseq2(Ott et al., 2019) and Transformer (Vaswani et al., 2017). Table 4 shows the experimental results for Korean UniMorph using the three different representation forms. It is notable that the morpheme forms outperform the other surface representation forms such as by letters and syllables of Table 4: Experimental results (accuracy) | L | S | M | | |----------|-------|-------|-------| | baseline | 26.88 | 27.75 | 31.29 | | neural | 51.97 | 49.72 | 54.26 | ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) | UniMorph 4.0 Korean | K-UniMorph | | |-----------------------|------------------------|------------------------| | Evide. | - | HRS, INFER | | Finit. | FIN, NFIN | - | | Inter. | DECL, INT, IMP | DECL, INT | | Mood | COND, PURP | REAL, IRR, PURP, OBLIG | | Tense | PRS, PST, FUT | PRS, PST | | Voice | CAUS | CAUS, PASS | | Polit. | FORM, INFORM, POL ELEV | POL | | Per. | 1, 2 | - | | Num. | PL | - | the word. This is because morpheme forms imply lemma forms for both source and target data. While the average number of inflected forms per lemma is 8.285, there are 22 verb lemmas that have more than 400 different inflected forms. The average number of inflected forms per lemma and morphological feature pair is also 5.634, and this makes Korean difficult to predict the inflected form. ## 3.3 Comparison With Unimorph 4.0 Korean UniMorph 4.0 (Batsuren et al., 2022) includes a Korean dataset, which provides 2686 lemma and 241,323 inflected forms that are automatically extracted from Wiktionary. It is mainly comprised of adjectives and verbs with totals of 52,387 and 188,821, respectively.3 Thoroughly, we inspected the verbs in UniMorph 4.0 Korean to compare with K-UniMorph: Among the 152,454 inflected forms of verbs in UniMorph 4.0 Korean, there are only 16,489 forms that appear in 9.5M words of the Sejong corpus, and 135,965 forms (89.18%) that never occur. UniMorph 4.0 Korean annotated all verbs (V) as FIN and all participles (V.CPTP) as NFIN. We can consider adding FIN for all verbs endings with ef (final verbal endings) and NFIN for all verbs ending with etm (adnominal endings, which are utilized for relative clauses, modifiers, and a part of collocations). To inspect this, UniMorph 4.0 Korean provides the imperative-jussive modality IMP which consists of 1;PL and 2, but it seems that Number (PL) occurs only with 1 (Person). While K-UniMorph considers only 시si (an honorific for the agent) as POL, UniMorph 4.0 Korean uses ELEV 3The counts are short of some numbers because the errors, 92 forms without morphological schema, are excluded. 
| Core case | NOM | nominative which marks the subject of a verb: 병원byeongwon ('hospital')/NNG + 이i ('NOM')/JKS | | | | | | | | |-----------------------------------|-------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------|-------------------------------------------|-----|--------|----|----|-------|-----------| | ACC | accusative | which | marks | the | object | of | a | verb: | 원인wonin | | ('cause')/NNG + 을eul ('ACC')/JKO | | | | | | | | | | | Non-core, non-local case | DAT | dative which marks the indirect object: | 국민gugmin ('peo | | | | | | | | ple')/NNG + 에게ege ('DAT')/JKB | | | | | | | | | | | GEN | genitive which marks the possessor: 사회sahoe ('society')/NNG + 의ui ('GEN')/JKG | | | | | | | | | | INS | instrumental which marks means by which an action occurred: 대리석daeliseog ('marble')/NNG + 으로eulo ('INS')/JKB | | | | | | | | | | COM | comitative which marks the accompaniment: 망치mangchi ('hammer')/NNG + 와wa ('COM')/JC | | | | | | | | | | VOC | vocative which indicate the direct form of address: | 달dal | | | | | | | | | ('moon')/NNG + 아a ('VOC')/JKV | | | | | | | | | | | Local case | ALL | allative which marks a type of locative grammatical case: 길gil ('road')/NNG + 로lo ('ALL')/JKB | | | | | | | | | ABL | ablative which expresses motion away from something: 밑mit ('bottom')/NNG + 에서부터eseobuteo ('ABL')/JKB | | | | | | | | | | Comparison | CMPR | comparative: | 예상yesang ('expectation')/NNG + 보다boda | | | | | | | | ('CMPR')/JKB | | | | | | | | | | | Information structure | TOP | topic which is what is being talked about: | 사람salam ('peo | | | | | | | | ple')/NNG + 은eun ('TOP')/JX | | | | | | | | | | Table 6: Korean UniMorph schema for nouns. for 시si, and POL comes from verbal endings 요yo and 습니다*seubnida* with either FORM or INFM. However, FORM.ELEV is to elevate the referent. Therefore, it should be with IMP;2|3 and instead, FORM.HUMB can be introduced with IMP;1 for 습 니다*seubnida*, and INFM.ELEV|INFN.HUMB for 요 yo. Hence, K-UniMorph provides a richer feature schema based on linguistics analysis. Table 5 summarises the different usage of the feature schema between UniMorph 4.0 Korean K-UniMorph. ## 4 Discussion And Future Perspectives We have dealt with UniMorph schema for verbs, and obtained experimental results for the morphological reinflection task using the different representation forms of the word. Nouns in Korean have been considered by separating postposition from the lemma of the noun instead of keeping themselves together (e.g. 프랑스*peulangseu* ('France') and 의ui ('GEN') instead of 프랑스의*peulangseuui*) in several grammar formalisms for Korean. However, in addition to exogenously given interests such as *inflection in context*, 4recent studies insist the functional morphemes including both verbal endings and postpositions in Korean should be treated as part of a word, with the result that their categories do not require to be assigned individually in a syntactic level (Park and Kim, 2023). Accordingly, it would be more efficient to assign the syntactic categories on the fully inflected lexical word derived by the lexical rule of the morphological processes in the lexicon. Therefore, we will investigate how we adopt features for nouns such as cases including non-core and local cases such as NOM (nominative), ACC (accusative), comparison (CMPR), and information structure TOP (topic) (Table 6). 
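For illustration only, the noun schema in Table 6 could be keyed on (postposition form, Sejong tag) pairs as below. Because the noun extension is left for future work, this mapping is ours, it merely mirrors the Table 6 examples plus standard allomorphs, and it does not resolve the jkb ambiguity discussed next.

```python
# Keyed on (surface form, Sejong tag) pairs taken from the examples in Table 6
# (plus standard allomorphs such as 가/를/는); an illustration of the schema only,
# not the released K-UniMorph conversion rules.
CASE_BY_POSTPOSITION = {
    ("이", "JKS"): "NOM", ("가", "JKS"): "NOM",
    ("을", "JKO"): "ACC", ("를", "JKO"): "ACC",
    ("의", "JKG"): "GEN",
    ("아", "JKV"): "VOC",
    ("와", "JC"): "COM",
    ("은", "JX"): "TOP", ("는", "JX"): "TOP",
    # JKB (adverbial marker) is ambiguous in general; these readings follow Table 6's
    # examples and would need context to disambiguate (e.g. 로/으로 can be INS or ALL).
    ("에게", "JKB"): "DAT",
    ("으로", "JKB"): "INS",
    ("로", "JKB"): "ALL",
    ("에서부터", "JKB"): "ABL",
    ("보다", "JKB"): "CMPR",
}

def noun_case(postposition, tag):
    return CASE_BY_POSTPOSITION.get((postposition, tag), "UNK")

print(noun_case("에게", "JKB"))  # DAT
```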
It will also include a typology of jkb (adverbial marker), which raises ambiguities. An adverbial marker can represent 'dative' which marks the indirect object, 'instrumental' which marks means by which an action occurred, 'allative' which marks a type of locative grammatical case, 'ablative' which expresses motion away from something, or 'comparative' (CMPR, 예상*yesang*. We leave a detailed study on nouns and other grammatical categories for future work. All datasets of K-UniMorph are available at https://github.com/jungyeul/K-UniMorph to reproduce the results. ## Acknowledgement We would like to thank Ekaterina Vylomova and Omer Goldman at the UniMorph project for their help and support. We also wish to thank three anonymous reviewers for providing us with helpful feedback. This research was based upon work partially supported by the Students as Partners Course Design Grants through the Office of the Provost & Vice-President Academic at the University of British Columbia to Eunkyul Leah Jo, and by *Basic Science Research Program* through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2021R1F1A1063474) to KyungTae Lim. This research was also supported in part through computational resources and services provided by Advanced Research Computing at the University of British Columbia. ## References Khuyagbaatar Batsuren, Omer Goldman, Salam Khalifa, Nizar Habash, Witold Kieras, Gábor Bella, ´ Brian Leonard, Garrett Nicolai, Kyle Gorman, Yustinus Ghanggo Ate, Maria Ryskina, Sabrina Mielke, Elena Budianskaya, Charbel El-Khaissi, Tiago Pimentel, Michael Gasser, William Abbott Lane, Mohit Raj, Matt Coler, Jaime Rafael Montoya Samame, Delio Siticonatzi Camaiteri, Esaú Zumaeta Rojas, Didier López Francis, Arturo Oncevay, Juan López Bautista, Gema Celeste Silva Villegas, Lucas Torroba Hennigen, Adam Ek, David Guriel, Peter Dirix, Jean-Philippe Bernardy, Andrey Scherbakov, Aziyana Bayyr-ool, Antonios Anastasopoulos, Roberto Zariquiey, Karina Sheifer, Sofya Ganieva, Hilaria Cruz, Ritván Karahó\vga, Stella Markantonatou, George Pavlidis, Matvey Plugaryov, Elena Klyachko, Ali Salehi, Candy Angulo, Jatayu Baxi, Andrew Krizhanovsky, Natalia Krizhanovskaya, Elizabeth Salesky, Clara Vania, Sardana Ivanova, Jennifer White, Rowan Hall Maudslay, Josef Valvoda, Ran Zmigrod, Paula Czarnowska, Irene Nikkarinen, Aelita Salchak, Brijesh Bhatt, Christopher Straughn, Zoey Liu, Jonathan North Washington, Yuval Pinter, Duygu Ataman, Marcin Wolinski, Totok Suhardijanto, Anna Yablonskaya, Niklas Stoehr, Hossep Dolatian, Zahroh Nuriah, Shyam Ratan, Francis M. Tyers, Edoardo M. Ponti, Grant Aiton, Aryaman Arora, Richard J. Hatcher, Ritesh Kumar, Jeremiah Young, Daria Rodionova, Anastasia Yemelina, Taras Andrushko, Igor Marchenko, Polina Mashkovtseva, Alexandra Serova, Emily Prud'hommeaux, Maria Nepomniashchaya, Fausto Giunchiglia, Eleanor Chodroff, Mans Hulden, Miikka Silfverberg, Arya D. McCarthy, David Yarowsky, Ryan Cotterell, Reut Tsarfaty, and Ekaterina Vylomova. 2022. UniMorph 4.0: Universal Morphology. In Proceedings of the Thirteenth Language Resources and Evaluation Confer- ence, pages 840–855, Marseille, France. European Language Resources Association. Penelope Brown and Stephen C. Levinson. 1987. *Politeness: Some Universals in Language Usage*. Studies in Interactional Sociolinguistics. Cambridge University Press. Hwang Byung-sun. 2003. A Study on Interpretation of the Korean Tense. The Korean Language and Literature, 79(1):309–346. Noam Chomsky. 1981. 
Lectures on Government and Binding. Studies in Generative Grammar. Foris Publications, Dordrecht, The Netherlands. Noam Chomsky. 1982. *Some Concepts and Consequences of the Theory of Government and Binding*. Linguistic Inquiry Monograph 6. The MIT Press, Cambridge, MA. Park Chulwoo. 2007. The Grammatical Voice in Korean: an Interface Phenomenon between Syntax and Semantics. *Korean Linguistics*, 37(1):207–228. Min-Chung Chung. 1998. *Les nominalisations* d'adjectifs en coréen : constructions nominales à support issda (il y avoir). Ph.D. thesis, Université Paris 7 - Denis Diderot, Paris, France. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Arya D McCarthy, Katharina Kann, Sebastian Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. The CoNLL– SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection. In *Proceedings of the* CoNLL–SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, pages 1–27, Brussels. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLLSIGMORPHON 2017 Shared Task: Universal Morphological Reinflection in 52 Languages. In *Proceedings of the CoNLL SIGMORPHON 2017 Shared* Task: Universal Morphological Reinflection, pages 1–30, Vancouver. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 Shared Task— Morphological Reinflection. In *Proceedings of* the 2016 Meeting of SIGMORPHON, pages 10–22, Berlin, Germany. Association for Computational Linguistics. Lim Donghoon. 2008. The Mood and Modal systems in Korean. *Korean Semantics*, 26(2):211–248. Lim Donghoon. 2011. Sentence types in Korean. *Journal of Korean Linguistics*, 60(1):323–359. Greg Durrett and John DeNero. 2013. Supervised Learning of Complete Morphological Paradigms. In *Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 1185–1195, Atlanta, Georgia. Association for Computational Linguistics. Kyle Gorman, Lucas F. E. Ashby, Aaron Goyzueta, Arya McCarthy, Shijie Wu, and Daniel You. 2020. The SIGMORPHON 2020 Shared Task on Multilingual Grapheme-to-Phoneme Conversion. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 40–50, Online. Association for Computational Linguistics. Maurice Gross. 1975. *Méthodes en syntaxe*. Hermann. Sunhae Han. 2000. Les predicats nominaux en coreen : Constructions a verbe support hata. Ph.D. thesis, Université Paris 7 - Denis Diderot, Paris, France. Charles F. Hockett. 1954. Two Models of Grammatical Description. *WORD*, 10(2-3):210–234. Song Jae Mog. 1998. Semantic functions of the non - terminal suffix - te - in Korean : from a typological perspective. *Journal of Korean Linguistics*, 32(1):135–169. Juyeon Kang. 2011. Problèmes morpho-syntaxiques analysés dans un modèle catégoriel étendu : application au coréen et au français avec une réalisation informatique. Ph.D. thesis, Université Paris IV - Paris-Sorbonne, Paris, France. Myung Hee Kim and Nathalie Colineau. 2020. An Enhanced Mapping Scheme of the Universal Part-OfSpeech for Korean. 
In *Proceedings of the 12th Language Resources and Evaluation Conference*, pages 3826–3833, Marseille, France. European Language Resources Association. Christo Kirov, Ryan Cotterell, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sabrina J. Mielke, Arya McCarthy, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. UniMorph 2.0: Universal Morphology. In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)*, pages 1868–1873, Miyazaki, Japan. European Language Resources Association (ELRA). Kil Soo Ko. 2010. *La syntaxe du syntagme nominal* et l'extraction du complément du nom en coréen : description, analyse et comparaison avec le français. Ph.D. thesis, Université Paris 7 - Denis Diderot, Paris, France. Arya D McCarthy, Christo Kirov, Matteo Grella, Amrit Nidhi, Patrick Xia, Kyle Gorman, Ekaterina Vylomova, Sabrina J Mielke, Garrett Nicolai, Miikka Silfverberg, Timofey Arkhangelskiy, Nataly Krizhanovsky, Andrew Krizhanovsky, Elena Klyachko, Alexey Sorokin, John Mansfield, Valts Ernštreits, Yuval Pinter, Cassandra L Jacobs, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2020. UniMorph 3.0: Universal Morphology. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 3922–3931, Marseille, France. European Language Resources Association. Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sabrina J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and CrossLingual Transfer for Inflection. In *Proceedings of* the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229– 244, Florence, Italy. Association for Computational Linguistics. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, and Jungmee Lee. 2013. Universal Dependency Annotation for Multilingual Parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92–97, Sofia, Bulgaria. Association for Computational Linguistics. Jee-Sun Nam. 1994. *Classification syntaxique des constructions adjectivales en coréen*. Ph.D. thesis, Université Paris 7 - Denis Diderot, Paris, France. Yun-Chae Nho. 1992. *Les constructions converses du* coréen : études des prédicats nominaux. Ph.D. thesis, Université Paris 7 - Denis Diderot, Paris, France. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), page 1659–1666, Portorož, Slovenia. European Language Resources Association (ELRA). Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajic, Christopher D. Manning, Sampo ˇ Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034–4043, Marseille, France. European Language Resources Association. 
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Hyong-Ik Pak. 1987. *Lexique-grammaire du coréen :* construction à verbes datifs. Ph.D. thesis, Université Paris 7 - Denis Diderot, Paris, France. Jungyeul Park. 2006. *Extraction automatique d'une* grammaire d'arbres adjoints à partir d'un corpus arboré pour le coréen. Ph.D. thesis, Université Paris 7 - Denis Diderot, Paris, France. Jungyeul Park, Jeen-Pyo Hong, and Jeong-Won Cha. 2016. Korean Language Resources for Everyone. In Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation: Oral Papers (PACLIC 30), pages 49–58, Seoul, Korea. Pacific Asia Conference on Language, Information and Computation. Jungyeul Park and Mija Kim. 2023. A role of functional morphemes in Korean categorial grammars. Korean Linguistics, 19(1):1–30. Jungyeul Park and Francis Tyers. 2019. A New Annotation Scheme for the Sejong Part-of-speech Tagged Corpus. In *Proceedings of the 13th Linguistic Annotation Workshop*, pages 195–202, Florence, Italy. Association for Computational Linguistics. Sounnam Park. 1996. *La construction des verbes neutres en coreen*. Ph.D. thesis, Université Paris 7 - Denis Diderot, Paris, France. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A Universal Part-of-Speech Tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2089– 2096, Istanbul, Turkey. European Language Resources Association (ELRA). Tiago Pimentel, Maria Ryskina, Sabrina J. Mielke, Shijie Wu, Eleanor Chodroff, Brian Leonard, Garrett Nicolai, Yustinus Ghanggo Ate, Salam Khalifa, Nizar Habash, Charbel El-Khaissi, Omer Goldman, Michael Gasser, William Lane, Matt Coler, Arturo Oncevay, Jaime Rafael Montoya Samame, Gema Celeste Silva Villegas, Adam Ek, Jean-Philippe Bernardy, Andrey Shcherbakov, Aziyana Bayyr-ool, Karina Sheifer, Sofya Ganieva, Matvey Plugaryov, Elena Klyachko, Ali Salehi, Andrew Krizhanovsky, Natalia Krizhanovsky, Clara Vania, Sardana Ivanova, Aelita Salchak, Christopher Straughn, Zoey Liu, Jonathan North Washington, Duygu Ataman, Witold Kieras, Marcin Woli ´ nski, Totok Suhardijanto, Niklas ´ Stoehr, Zahroh Nuriah, Shyam Ratan, Francis M. Tyers, Edoardo M. Ponti, Grant Aiton, Richard J. Hatcher, Emily Prud'hommeaux, Ritesh Kumar, Mans Hulden, Botond Barta, Dorina Lakatos, Gábor Szolnok, Judit Ács, Mohit Raj, David Yarowsky, Ryan Cotterell, Ben Ambridge, and Ekaterina Vylomova. 2021. SIGMORPHON 2021 Shared Task on Morphological Reinflection: Generalization Across Languages. In *Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology*, pages 229–259, Online. Association for Computational Linguistics. Park Seokjoon. 1999. A few notes on the systematization of final endings in modern Korean: Focusing on '-geodeun' and '-lyeogo'. Journal of Educational Research, 7(1):225–244. Kwang-Soon Shin. 1994. *Le verbe support hata en* coréen contemporain : morpho-syntaxe et comparaison. Ph.D. thesis, Université Paris 7 - Denis Diderot, Paris, France. John Sylak-Glassman. 2016. The Composition and Use of the Universal Morphological Feature Schema (UniMorph Schema). 
## A Neural Experiment Description

We use the default settings of fairseq for the neural experiment reported in Table 4 of §3.2, with the hyperparameters described in Table 7.

**fairseq** We use fairseq-preprocess, fairseq-train and fairseq-interactive.

**GPU** Around 1 hour of GPU time has been consumed for the training step of each experiment.

**Total runtime** It takes about 2 to 3 hours to complete one experiment, including all steps (preprocessing, training and evaluation).

**Results** A single run with a seed number

| Hyperparameter | Value |
|-------------------------|---------------|
| task | translation |
| arch | transformer |
| dropout | 0.3 |
| learning rate | 0.0001 |
| lr-scheduler | inverse_sqrt |
| attention-dropout | 0.3 |
| activation-dropout | 0.3 |
| activation-fn | relu |
| encoder-embed-dim | 256 |
| encoder-ffn-embed-dim | 1024 |
| encoder-layers | 4 |
| encoder-attention-heads | 4 |
| decoder-embed-dim | 256 |
| decoder-ffn-embed-dim | 1024 |
| decoder-layers | 4 |
| decoder-attention-heads | 4 |
| optimizer | adam |
| adam-betas | (0.9, 0.98) |
| clip-norm | 1.0 |
| warmup-updates | 4000 |
| label-smoothing | 0.1 |
| batch-size | 400 |
| max-update | 20000 |

Table 7: Hyperparameters

## ACL 2023 Responsible NLP Checklist

A For Every Submission: ✓ A1. Did you describe the limitations of your work? 5 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 A4. Have you used AI writing assistants when working on this paper? Not applicable. Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 ✓ B1. Did you cite the creators of artifacts you used? 3.2 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We will follow UniMorph's policy for data distribution ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 1 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 3 ✓ B5.
Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3, table 2 ## C ✓ **Did You Run Computational Experiments?** 3, Appendix A ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix A ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 3 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 3 ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Annotator is one of authors ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Because it is CC BY-NC-SA D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
oota-etal-2023-brain
How does the brain process syntactic structure while listening?
https://aclanthology.org/2023.findings-acl.415
Syntactic parsing is the task of assigning a syntactic structure to a sentence. There are two popular syntactic parsing methods: constituency and dependency parsing. Recent works have used syntactic embeddings based on constituency trees, incremental top-down parsing, and other word syntactic features for brain activity prediction given the text stimuli to study how the syntax structure is represented in the brain{'}s language network. However, the effectiveness of dependency parse trees or the relative predictive power of the various syntax parsers across brain areas, especially for the listening task, is yet unexplored. In this study, we investigate the predictive power of the brain encoding models in three settings: (i) individual performance of the constituency and dependency syntactic parsing based embedding methods, (ii) efficacy of these syntactic parsing based embedding methods when controlling for basic syntactic signals, (iii) relative effectiveness of each of the syntactic embedding methods when controlling for the other. Further, we explore the relative importance of syntactic information (from these syntactic embedding methods) versus semantic information using BERT embeddings. We find that constituency parsers help explain activations in the temporal lobe and middle-frontal gyrus, while dependency parsers better encode syntactic structure in the angular gyrus and posterior cingulate cortex. Although semantic signals from BERT are more effective compared to any of the syntactic features or embedding methods, syntactic embedding methods explain additional variance for a few brain regions.
# How Does The Brain Process Syntactic Structure While Listening? Subba Reddy Oota1, Mounika Marreddy2, Manish Gupta2,3 **and Bapi Raju Surampudi**2 1INRIA, Bordeaux, France; 2IIIT Hyderabad, India; 3Microsoft, India subba-reddy.oota@inria.fr, mounika.marreddy@research.iiit.ac.in gmanish@microsoft.com, raju.bapi@iiit.ac.in ## Abstract Syntactic parsing is the task of assigning a syntactic structure to a sentence. There are two popular syntactic parsing methods: constituency and dependency parsing. Recent works have used syntactic embeddings based on constituency trees, incremental top-down parsing, and other word syntactic features for brain activity prediction given the text stimuli to study how the syntax structure is represented in the brain's language network. However, the effectiveness of dependency parse trees or the relative predictive power of the various syntax parsers across brain areas, especially for the listening task, is yet unexplored. In this study, we investigate the predictive power of the brain encoding models in three settings: (i) individual performance of the constituency and dependency syntactic parsing based embedding methods, (ii) efficacy of these syntactic parsing based embedding methods when controlling for basic syntactic signals, (iii) relative effectiveness of each of the syntactic embedding methods when controlling for the other. Further, we explore the relative importance of syntactic information (from these syntactic embedding methods) versus semantic information using BERT embeddings. We find that constituency parsers help explain activations in the temporal lobe and middle-frontal gyrus, while dependency parsers better encode syntactic structure in the angular gyrus and posterior cingulate cortex. Although semantic signals from BERT are more effective compared to any of the syntactic features or embedding methods, syntactic embedding methods explain additional variance for a few brain regions. We make our code publicly available1. ## 1 Introduction A key assumption in psycholinguistics is that sentence processing involves two operations: (i) the construction of a syntactic structure that represents 1https://tinyurl.com/BrainSyntax the relation between its components and (ii) the retrieval of the meaning of single linguistic units from semantic memory. When presented with a sentence in a task, humans can understand word meaning effectively while reading and listening. Listeners and readers appear to extract a similar semantic meaning from narrative stories (Rubin et al., 2000; Diakidoy et al., 2005), hence suggesting that the brain represents semantic information in an amodal form, i.e., independent of input modality. Further, earlier language-fMRI encoding studies have observed that sentence semantics alone cannot explain all the variance in brain activity; *syntactic* information can also be used to explain some of the variance (Binder et al., 2016; Fedorenko and Thompson-Schill, 2014). Prior to different aspects of semantic interpretation, the brain performs syntactic structure analysis inherently (Hirst, 1984). The syntactic information helps to identify the structural constituents that have to be interpreted as nominal, ordinal, or noun phrases, e.g., we identify "Brazil", "four", "world cups", and "2002" in a sentence: "Brazil won four world cups till 2002" before interpreting the semantics. Hence, investigating how the brain encodes syntactic word features is crucial for understanding language comprehension in the brain. 
Figure 1: Four steps of our proposed approach: (1) fMRI acquisition, (2) Syntactic parsing, (3) Regression model training, and (4) Predictive power analysis of the three embedding methods.

Two paradigms of syntactic parsing: Constituency and dependency are two different syntactic formalisms using different structural primitives (dependency relations and phrases). There has been some discussion in the field of theoretical linguistics with regard to whether they capture the same information or to what degree the structures they sanction are equivalent (Hays, 1964; Jung, 1995). Discussing the linguistic information the two parsers capture, Rambow (2010) states from a theoretical linguistic point of view that they describe distinct syntactic entities; thus, they are not strictly equivalent. Dependencies capture direct relations between words, identical to thematic functions such as subject, object, modifier, etc. Constituent syntactic structure, on the other hand, is not so much about functional relations between words but about the recursive grouping of sentence constituents (words and phrases), such that at each level, each grouping acts as a syntactic unit (Schneider, 1998). Moreover, according to Jung (1995), only dependencies can express the syntactic word-to-word relations of a sentence, whereas constituency expresses the linear order of a sentence.
On the other hand, an incremental top-down constituency parser processes input words from left to right, producing the possible parses in a top-down manner as future words are read. Therefore, Jung (1995) sees the two grammars as complementary but not equivalent. Following these last observations, we consider dependency and constituent structures as distinct and the type of information they capture as nonequivalent. The question we address in this study is whether different brain regions are associated with building different kinds of syntactic structures. We compare the predictive power of syntactic structural measures derived from these parsers with regard to modeling the brain activity in language processing areas recorded during naturalistic story listening.

Stimulus types for studying syntactic processing: Earlier psycholinguistic studies explored syntactic processing while subjects were involved in activities that required less versus more syntactic comprehension effort (Friederici, 2011) using carefully designed sentence/phrase stimuli. In the past decade, the study of syntactic processing has been extended to naturalistic settings that use narratives, such as reading (Reddy and Wehbe, 2021) or listening to stories (Bhattasali et al., 2018; Zhang et al., 2022), generally in a task-free setting. Due to the complexity of extracting syntactic word embeddings from sentence parsers, investigation of the predictive power of sentence parsers for brain encoding, especially for the neuroimaging data from naturalistic listening paradigms, is still under-explored.
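To make the contrast between the two parsing formalisms discussed above concrete, the following minimal sketch prints both analyses for one of the example sentences used earlier. It assumes spaCy's en_core_web_sm model and the benepar plugin with its benepar_en3 model are installed; the snippet is purely illustrative and is not part of the paper's released code.

```python
# Illustrative only: dependency relations (spaCy) vs. constituency bracketing
# (Berkeley Neural Parser via the benepar spaCy plugin) for one sentence.
import benepar  # registers the "benepar" spaCy pipeline component
import spacy

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("benepar", config={"model": "benepar_en3"})

doc = nlp("Brazil won four world cups till 2002")
sent = list(doc.sents)[0]

# Dependency view: direct word-to-word relations (subject, object, modifier, ...).
for token in sent:
    print(f"{token.text:>6} --{token.dep_}--> {token.head.text}")

# Constituency view: recursive grouping of words into labeled phrases.
print(sent._.parse_string)
```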
Brain Regions of Interest (ROIs) for syntactic processing: Several classical studies report the involvement of a language network of mostly leftlateralised cortical regions, including the left inferior frontal gyrus (IFG) with sub-regions (BA 44 and BA 45), the left posterior superior temporal gyrus (pSTG), and the left anterior temporal pole (ATP) (Caramazza and Zurif, 1976; Friederici et al., 2006; Friederici, 2011; Pallier et al., 2011; Zaccarella and Friederici, 2015). However, several other studies did not report activity in left IFG and left pSTG (Humphries et al., 2006; Rogalsky and Hickok, 2009; Bemis and Pylkkänen, 2011), despite using paradigms similar to the studies mentioned above. A series of recent studies have used functional magnetic resonance imaging (fMRI) brain activity to find that those brain regions spanning both the left and right hemispheres are involved in language processing (Fedorenko and Thompson-Schill, 2014; Caucheteux et al., 2021a; Reddy and Wehbe, 2021; Zhang et al., 2022; Oota et al., 2022b,a; Toneva et al., 2022; Aw and Toneva, 2023; Oota et al., 2022c; Merlin and Toneva, 2022). Further, these works conclude that syntax is distributed throughout the language system (Blank et al., 2016; Fedorenko et al., 2012, 2020; Caucheteux et al., 2021a; Wang et al., 2020; Reddy and Wehbe, 2021; Zhang et al., 2022; Oota et al.). However, whether different brain regions are sensitive to distinct sentence-parsing strategies remains unclear. Moreover, in a listening task, it is unclear how syntactic features are represented in the brain and whether the neural correlates of different syntactic parsing signals overlap or dissociate from one another. Word stimulus representations for brain encoding: Several studies have used basic syntactic features such as part-of-speech, dependency relations, complexity metrics (Caucheteux et al., 2021a; Reddy and Wehbe, 2021), and semantic word embeddings (Oota et al., 2018; Jain and Huth, 2018; Hollenstein et al., 2019) to represent words for brain encoding with text stimulus. In this paper, to understand how the brain processes linguistic structure in sentences, we leverage three different text representations using syntax parsers, as shown in Fig. 1. We aim to understand the relative importance of these syntax parser embeddings and also their additional importance when compared with basic syntactic features or semantic embeddings like BERT. Limitations of previous work: (i) Existing work has focused on either constituency parsing mainly including incremental top-down parsing (Reddy and Wehbe, 2021). No previous work has explored syntactic structure present in dependency trees. Reddy and Wehbe (2021) have only used one-hot vector for dependency tags as part of their complexity metrics. But we leverage dependency information more systematically by learning the dependency representations using graph convolutional networks. (ii) Existing work has mostly focused on reading tasks only, and that too on small number of subjects (e.g., 7 subjects in (Reddy and Wehbe, 2021)). There is evidence that several cortical regions are activated during listening (Handjaras et al., 2016). But which brain areas and subregions of the language network are involved in syntactic processing is yet unexplored. (iii) Lastly, existing work does not perform pairwise predictive power comparison for different syntactic parse methods. Overall, our main contributions are as follows. 
(1) We explore (a) basic syntactic features such as complexity metrics, part-of-speech (POS) tags, and dependency role (DT) tags, (b) embeddings obtained from three parse tree representations, and (c) semantic BERT embeddings for brain encoding. (2) Constituency and dependency tree-based embeddings are effective across different language regions for brain activity prediction, even after controlling for basic syntactic signals. (3) We find that prediction of the activation in regions such as the bilateral temporal areas (ATL, PTL) and middlefrontal gyrus (MFG) is significantly related to constituency parse representations. At the same time, brain activity in other language regions, such as the angular gyrus (AG) and posterior cingulate cortex (PCC) is significantly associated with dependency parse embeddings. (4) Lastly, in the inferior frontal gyrus (IFG), we identify that dependency parse embeddings encode syntactic information better in the sub-regions such as 44, 45, IFJa, and IFSp of the left hemisphere, whereas constituency parse tree and incremental top-down parse tree based embeddings are better aligned in the right hemisphere. ## 2 Feature Representations We used four different features computed per word to simultaneously test different syntactic and semantic representations. (1) Constituency Tree-based Embeddings: Similar to Reddy and Wehbe (2021), we build three types of constituency tree-based graph embeddings (ConTreGE): (i) ConTreGE Complete vectors (CC), (ii) ConTreGE Incomplete vectors (CI) and (iii) Incremental Top-Down Parser Embeddings (INC). A CC vector is generated for every word using the largest subtree completed by that word. A subtree is considered complete when all of its leaves are terminals. The largest subtree completed by a given word refers to the subtree with the largest height. A CI vector is generated for every word using the incomplete subtree that contains all of the Phrase Structure Grammar productions needed to derive the words seen till then, starting from the root of the sentence's tree. Some examples for CC and CI are added in the Appendix (Figs. 6 and 7). Like (Reddy and Wehbe, 2021), we use Berkeley Neural Parser2for constituency parsing (i.e., for both CI and CC). In ConTreGE Complete tree (CC), the largest subtree completed by a given word refers to the subtree with the largest height that also satisfies the following conditions - the given word must be one of its leaves and all of its leaves must only contain words that have been seen till then. In ConTreGE Incomplete tree (CI), the embeddings are constructed using incomplete subtrees that are constructed by retaining all the phrase structure grammar productions that are required to derive the words seen till then, starting from the root of the sentence's tree. If incomplete subtrees are more representative of the brain's processes, it would mean that the brain correctly predicts certain phrase structures even before the entire phrase or sentence is read. The incremental top-down parser is a statistical syntactic parser that processes input strings from left to right, producing partial derivations in a top-down manner, using beam search as detailed in (Roark, 2001). Specifically, we use the implementation as described here3. The INC embeddings are obtained using exactly the same methods as described in Section 3 of (Reddy and Wehbe, 2021). 
The brain could be computing several possible top-down partial parses that can derive the words seen so far and modifying the list of possible parses as future words are read. The INC feature space is constructed to encode the different possible parse trees that can derive the words seen so far. When considering parse tree based representations, the embeddings may contain information about what is yet to be seen by the subject. However, this is not a problem since it mimics the human capability of guessing what is to come next. With this embedding space, we attempt to measure the ability of the brain to predict future constituents correctly. (2) Dependency Tree-based Embeddings (DEP): Graph Convolutional Networks (GCNs) have been widely used to encode syntactic information from dependency parse trees (Vashishth et al., 2019). Rather than using pretrained syntactic GCN word embeddings generated from Wikipedia (Vashishth et al., 2019), we create DEP embeddings using GCNs on the "Narrative stories" dataset as follows. To generate syntactic word embeddings using GCN, we first extract the dependency parse tree Gs=(Vs, ϵs) for every sentence in our dataset s = (w1, w2,. . . , wn), using the Stanford CoreNLP parser (Manning et al., 2014). Here, Vs = {w1, w2,*. . .* , wn} and ϵs denotes the labeled directed dependency edges of the form (wi, wj , lij ), where lij is the dependency relation of wito wj . GCN computations iteratively utilize the context defined by a word's neighbors in the graph to compute embedding for every word wi. Further, we also perform edge-wise gating to give importance to relevant edges and suppress noisy ones. We follow the architecture defined in (Vashishth et al., 2019) for training a GCN on our dataset leading to syntactically-rich DEP embeddings. Overall, GCN utilizes syntactic context to learn rich DEP embeddings. (3) Basic Syntactic Features: Similar to (Wang et al., 2020; Reddy and Wehbe, 2021; Zhang et al., 2022), we use various multi-dimensional syntactic features such as Punctuation (PU), Complexity Metrics (CM), and Part-of-speech and dependency tags (PD), described briefly below. Punctuation (PU) The role of punctuation is to resolve syntactic and semantic ambiguity in the lexical grammar and encode relational discourse links between text units in sentences (Briscoe, 1996). Punctuation-based features are encoded using a one-hot vector where the type of punctuation is presented along with a word (e.g. . or ,). Complexity Metrics (CM) We use three features in the complexity metrics: Node Count (NC), Word Length (WL), and Word Frequency (WF). The node count for each word is the number of subtrees that are completed by incorporating each word into its sentence. Word length is the number of characters present in the word. Word frequency reports log base-10 of the number of occurrences per billion of a given word in a large text corpus. ## Part-Of-Speech And Dependency Tags (Pd) We use the Spacy English dependency parser (Honnibal and Montani, 2017) to extract the Part-ofspeech (POS) and dependency tags. Unlike DEP embeddings (which use GCNs), in PD, we generate a one-hot vector for each word and dependency tag. The final vector is called PD, a concatenation of both the POS tag and dependency vector. Note that DEP and PD features use different methods for dependency analysis - PD features are just one-hot encoded representations while DEP features are learned syntactic embeddings using GCNs. 
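As a rough illustration of the PD features just described (plus one complexity metric), the sketch below builds per-word one-hot POS-tag and dependency-relation vectors with spaCy. The tag inventories are derived from the input itself and a word-length column stands in for the full complexity metrics; the paper's actual pipeline (node counts from the constituency parser, corpus word frequencies, the exact tag sets) may differ.

```python
# Hedged sketch: one-hot POS + dependency-tag (PD) vectors per word with spaCy,
# concatenated with a word-length complexity feature.
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")

def pd_features(sentences):
    docs = [nlp(s) for s in sentences]
    pos_vocab = sorted({t.pos_ for d in docs for t in d})
    dep_vocab = sorted({t.dep_ for d in docs for t in d})
    rows = []
    for d in docs:
        for t in d:
            pos_1hot = np.zeros(len(pos_vocab))
            pos_1hot[pos_vocab.index(t.pos_)] = 1.0
            dep_1hot = np.zeros(len(dep_vocab))
            dep_1hot[dep_vocab.index(t.dep_)] = 1.0
            word_len = np.array([float(len(t.text))])  # one of the complexity metrics
            rows.append(np.concatenate([pos_1hot, dep_1hot, word_len]))
    return np.stack(rows)

X_pd = pd_features(["I began my illustrious career as a hard-boiled reporter in the Bronx."])
print(X_pd.shape)  # (num_tokens, |POS| + |DEP| + 1)
```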
(4) BERT Features Given an input sentence, the pretrained BERT (Devlin et al., 2019) outputs token representations at each layer. Since BERT embeds a rich hierarchy of linguistic signals: surface information at the bottom, syntactic information in the middle, semantic information at the top (Jawahar et al., 2019); hence, we use the \#tokens × 768D vector from the last hidden layer to obtain the semantic embeddings. For uniformity of feature dimensions, we used PCA to bring down the dimensions to 250. ## 3 Dataset Curation Brain Imaging Dataset The "Narratives" collection aggregates a variety of fMRI datasets collected while human subjects listened to real spoken stories (Nastase et al., 2021). We analyze data from 82 subjects listening to the story titled 'PieMan' with 282 TRs (repetition time - fMRI recorded every 1.5 sec.). We chose this story since it contains maximum number of subjects in the "Narratives" collection. The dataset is in English and contains 957 words across 67 sentences. The story duration is 07m:02s. We use the multi-modal parcellation of the human cerebral cortex (Glasser Atlas: consists of 180 ROIs in each hemisphere) to display the brain maps (Glasser et al., 2016) since the Narratives dataset contains annotations tied to this atlas. The data covers eight language brain ROIs with the following subdivisions: (i) angular gyrus (AG: PFm, PGs, PGi, TPOJ2, and TPOJ3); (ii) anterior temporal lobe (ATL: STSda, STSva, STGa, TE1a, TE2a, TGv, and TGd); (iii) posterior temporal lobe (PTL: A5, STSdp, STSvp, PSL, STV, TPOJ1); (iv) inferior frontal gyrus (IFG: 44, 45, IFJa, IFSp); (v) middle frontal gyrus (MFG: 55b); (vi) inferior frontal gyrus orbital (IFGOrb: a47r, p47r, a9-46v), (vii) posterior cingulate cortex (PCC: 31pv, 31pd, PCV, 7m, 23, RSC); and (viii) dorsal medial prefrontal cortex (dmPFC: 9m, 10d, d32) (Baker et al., 2018; Milton et al., 2021; Desai et al., 2023). The dataset has been made available freely without restrictions by Nastase et al. (2021). Downsampling Since the rate of fMRI data acquisition (TR = 1.5sec) was lower than the rate at which the text stimulus was presented to the subjects, several words fall under the same TR in a single acquisition. Hence, we match the stimulus acquisition rate to fMRI data recording by downsampling the stimulus features using a 3-lobed Lanczos filter (LeBel et al., 2021). After downsampling, we obtain chunk-embedding corresponding to each TR. TR Alignment To account for the slowness of the hemodynamic response, we model the hemodynamic response function using finite response filter (FIR) per voxel and for each subject separately with 8 temporal delays corresponding to 12 seconds. ## 4 Methodology Encoding Model To explore how and where syntactic and semantic specific features are represented in the brain when listening to stories, we extract different features describing each stimulus sentence and use them in an encoding model to predict brain responses. If a feature is a good predictor of a specific brain region, information about that feature is likely encoded in that region. The main goal of each fMRI encoder model is to predict brain responses associated with each brain voxel when given stimuli. We train a model per subject separately. Following the literature on brain encoding (Wehbe et al., 2014; Toneva et al., 2020; Caucheteux et al., 2021b; Reddy and Wehbe, 2021; Toneva et al., 2021; Zhang et al., 2022; Oota et al., 2022b, 2023), we choose to use a ridge regression model instead of more complicated models. 
We plan to explore more such models as part of future work. The ridge regression objective function for the $i$-th example is $f(X_i) = \min_{W} \lVert Y_i - X_i W \rVert_F^2 + \lambda \lVert W \rVert_F^2$. Here, $W$ are the learnable weight parameters, $\lVert \cdot \rVert_F$ denotes the Frobenius norm, and $\lambda > 0$ is a tunable hyperparameter representing the regularization weight. $\lambda$ was tuned on a small disjoint validation set obtained from the training set.

Cross-Validation We follow 4-fold (K=4) cross-validation. All the data samples from K-1 folds were used for training, and the model was tested on samples of the left-out fold.

Evaluation Metric We evaluate our models using the popular brain encoding evaluation metric, R2. Let $TR$ be the number of time repetitions. Let $Y = \{Y_i\}_{i=1}^{TR}$ and $\hat{Y} = \{\hat{Y}_i\}_{i=1}^{TR}$ denote the actual and predicted value vectors for a single voxel; thus, $Y \in \mathbb{R}^{TR}$ and $\hat{Y} \in \mathbb{R}^{TR}$. We use the $R^2(Y, \hat{Y})$ metric to measure the coefficient of determination for every voxel. We average the R2 score over all voxels in a region to get a region-level aggregated metric. Finally, these are further averaged across all subjects to obtain final region-level metrics, which are reported with mean and standard deviation.

Statistical Significance We run a permutation test to check if R2 scores are significantly higher than chance. We permute blocks of contiguous fMRI TRs, instead of individual TRs, to account for the slowness of the underlying hemodynamic response. We choose a standard value of 10 TRs. The predictions are permuted within fold 5000 times, and the resulting R2 scores are used as an empirical distribution of chance performance, from which the p-value of the unpermuted performance is estimated. We also run a bootstrap test to check if one model has a higher R2 score than another. In each iteration, we sample with replacement the predictions of both models for a block of TRs, compute the difference of their R2, and use the resulting distribution to estimate the p-value of the unpermuted difference. Finally, the Benjamini-Hochberg False Discovery Rate (FDR) correction (Benjamini and Hochberg, 1995) is used for all tests (appropriate because fMRI data is considered to have positive dependence (Genovese, 2000)). The correction is performed by grouping all the voxel-level p-values (i.e., across all subjects and feature groups) and choosing one threshold for all of our results. The correction is done this way as we test multiple prediction models across multiple voxels and subjects.

## 5 Experiments And Results

We discuss detailed hyper-parameter settings in Appendix A.

Which word representations are semantic versus syntactic? We first empirically show that syntactic embeddings do not encode a significant amount of semantic information. In particular, we train the RidgeCV regression model in a 10-fold cross-validation setting to predict the semantic GloVe (Pennington et al., 2014) features (300 dimensions) using syntactic embeddings for all the representations, similar to earlier works (Caucheteux et al., 2021a; Reddy and Wehbe, 2021; Zhang et al., 2022). Average R2 scores are as follows: BERT (0.560), CC (0.052), CI (0.020), DEP (0.170), INC (0.040), PD (0.183), CM (0.027), and PU (0.005). These R2 scores indicate that (a) overall, BERT has high semantic information compared to other embeddings, and (b) constituency parsers have low semantic information compared to DEP. Overall, all the syntactic embeddings contain very little semantic information.
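For concreteness, this control analysis can be sketched as follows: ridge regression with 10-fold cross-validation predicting 300-dimensional GloVe vectors from a syntactic word-embedding matrix, scored with R2. The arrays below are random stand-ins for the paper's word-aligned feature matrices, and the regularization grid is an assumption.

```python
# Minimal sketch of the semantic-leakage check: can a syntactic embedding
# predict GloVe vectors? Random matrices stand in for the real features.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X_syntax = rng.standard_normal((957, 250))  # e.g., DEP or ConTreGE embeddings (957 story words)
Y_glove = rng.standard_normal((957, 300))   # GloVe vectors for the same words

fold_scores = []
for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X_syntax):
    model = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(X_syntax[tr], Y_glove[tr])
    pred = model.predict(X_syntax[te])
    fold_scores.append(r2_score(Y_glove[te], pred, multioutput="uniform_average"))

print(f"mean R2 across folds: {np.mean(fold_scores):.3f}")  # near zero for random data
```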
Hence, it is reasonable to infer that any additional variance predicted by the syntactic parsing methods compared to the semantic feature space (BERT) is mainly due to their syntactic information.

Performance of individual embedding methods: In order to assess the performance of the fMRI encoder models learned using the individual syntactic and semantic representations, we computed the R2 scores between the predicted and true responses across various ROIs of the language network. Fig. 2 reports the % of ROI voxels with significant R2 scores (based on a hypothesis test where the R2 score for each voxel is greater than 0) across different representations for different language regions in the left and right hemispheres. We make the following observations from Fig. 2: (1) Among basic syntactic features, PD features perform best across most of the language regions, whereas CM yields the second-best result. (2) Among the syntactic embedding methods, CC encodes syntactic information better in language regions such as the temporal lobes (ATL and PTL) and MFG. (3) Among the syntactic embedding methods, DEP embeddings predict brain activity better in the language regions PCC and IFG of the left hemisphere, and AG, IFGOrb, and PCC of the right hemisphere. (4) Semantic embeddings using BERT are the best across all regions in the right hemisphere, but the effectiveness of BERT is rather mixed in the left hemisphere. Further, we report the avg R2 scores across all different language ROIs in the Appendix (Fig. 8). We further demonstrate the performance of embedding methods for various sub-regions of each language ROI in the Appendix Figs. 9 to 15. We observe the following from these figures: (1) In the ATL region (Fig. 10), CC encodes better in the superior temporal sulcus with dorsal attention (STSda). For STS in ventral attention (STSva), CC encodes better in the left hemisphere while DEP is better in the right. (2) In the PTL region (Fig. 11), CC is best for the STSdp sub-region. (3) In the IFG region (Fig. 12), DEP is better aligned with the 44 region whereas CC is better aligned with the IFJa region. These results are in line with observations made in (Pallier et al., 2011). Overall, a higher percentage of significant voxels across the frontal and temporal regions demonstrates that language comprehension may be associated with both frontal and temporal regions (Cohen et al., 2021). We also report brain maps with avg R2 for all the representations in Fig. 3. From Figs. 2 and 3, we can infer that the different word representations, including all syntactic and semantic methods, are highly distributed across ROIs of the language network. In particular, PTL and MFG have high overlap for both syntactic (CC, CI, DEP, INC) and semantic (BERT) features. Also, ROIs such as PTL, IFGOrb and PCC have higher overlap with PD. Most of these observations agree with previous findings on the brain networks of language processing (Friederici, 2011; Fedorenko and Thompson-Schill, 2014; Caucheteux et al., 2021a; Reddy and Wehbe, 2021; Zhang et al., 2022), supporting that both syntax and semantics are distributed across language ROIs. Lastly, similar to an earlier study (Blank et al., 2016), basic syntactic features are much less associated with voxels in the AG region.

Figure 2: **Performance of Individual Embedding Methods**: ROI-wise analysis of the prediction performance of various feature sets. We show the % of ROI voxels with a significant increase in prediction performance. Each bar shows avg %; error bars show standard error across 82 subjects. Left hemisphere (Top); Right hemisphere (Bottom).

Figure 3: R2 score per voxel for the whole brain. (a) PU (b) CM (c) PD (d) CC (e) CI (f) INC (g) DEP (h) BERT.

Figure 4: **Additional Predictive Power of various Representations**: For each model, we show the % of ROI voxels with a significant increase in prediction performance. Each bar shows avg %; error bars show standard error across 82 subjects. '-' indicates a hypothesis test for the difference in R2 scores between the two feature groups being larger than 0. Left hemisphere (Top); Right hemisphere (Bottom). Note that PU values here are slightly different from Fig. 2 since here the FDR correction was done across all the groups.

Additional predictive power of various representations: Many feature spaces have overlapping information, e.g., PD (part-of-speech and dependency) tags include punctuation, BERT vectors have been shown to encode syntax (Jawahar et al., 2019; Luoma and Pyysalo, 2020), and DEP embeddings built from GCNs encode some POS tag information. Are the various representations capturing very similar signals, i.e., redundant, or capturing new information that is additionally useful to predict brain activations? To answer this question, we build hierarchical feature groups in increasing order of syntactic information and test for significant differences in prediction performance between two consecutive groups. We start with the simple feature - punctuation (PU) - and then add more complex features in this order: the complexity metrics (CM), POS and dependency tags (PD), {CC, CI, INC, DEP}, and lastly, BERT. Fig. 4 reports the % of ROI voxels with significant R2 scores (hypothesis test where the difference in R2 scores between the two feature groups is larger than 0) across feature groups for different ROIs in the left and right hemispheres, respectively. We make the following observations from Fig. 4. (i) Unlike (Reddy and Wehbe, 2021), we find that punctuation features yield a lower predictive performance across language regions for listening in both the left and right hemispheres. This is intuitive since punctuation marks are not "visible" when listening. (ii) Amongst CC, CI, INC, and DEP, after controlling for basic syntactic features {PD, CM, PU}, CC displays a large % of significant voxels across multiple language sub-regions, largest in ATL, PTL, and MFG in the left hemisphere and in IFGOrb, PCC and dmPFC in the right hemisphere. This means there are voxels in these language sub-regions that capture hierarchical English grammar syntax beyond simple syntax signals captured by PD, CM, and PU. (iii) The DEP parser explains additional variance after controlling for basic syntactic features for the AG region, which is mainly a knowledge store of thematic relations between entities. Also, DEP yields a large % of significant voxels for the IFG region in the left hemisphere and for the PCC region in the right hemisphere. Although INC does not show any additional variance in the left hemisphere, it performs well for IFG and MFG in the right hemisphere. (iv) On top of these representations, BERT adds to the variance the most in the context of CC, CI, INC, and DEP features in both hemispheres.
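The hierarchical feature-group analysis just described can be sketched as follows: fit a voxelwise ridge encoding model for increasingly rich feature stacks and inspect the per-voxel gain in R2 when each group is added. Feature and fMRI matrices below are random placeholders, a plain (non-banded) ridge model stands in for the banded ridge of Appendix A, lambda is selected from the Appendix A range on the training data, and the block permutation/bootstrap tests and FDR correction from Section 4 are omitted for brevity.

```python
# Hedged sketch of the nested feature-group comparison (PU -> +CM -> +PD -> +CC):
# per-voxel R^2 from a 4-fold cross-validated ridge encoder, and the gain per group.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_trs, n_voxels = 282, 1000                 # 282 TRs of 'PieMan'; voxel count is a placeholder
groups = {                                  # downsampled, TR-aligned feature matrices
    "PU": rng.standard_normal((n_trs, 4)),
    "CM": rng.standard_normal((n_trs, 3)),
    "PD": rng.standard_normal((n_trs, 60)),
    "CC": rng.standard_normal((n_trs, 250)),
}
Y = rng.standard_normal((n_trs, n_voxels))  # one subject's voxel responses

def voxelwise_r2(X, Y):
    """Per-voxel R^2 averaged over 4-fold CV; lambda in {1e-1, 1e-2, 1e-3} is
    chosen on the training data, roughly following Appendix A."""
    kf = KFold(n_splits=4)
    r2 = np.zeros(Y.shape[1])
    for tr, te in kf.split(X):
        model = RidgeCV(alphas=[0.1, 0.01, 0.001]).fit(X[tr], Y[tr])
        r2 += r2_score(Y[te], model.predict(X[te]), multioutput="raw_values")
    return r2 / kf.get_n_splits()

stack, prev = [], None
for name, feats in groups.items():          # nested stacks in increasing syntactic richness
    stack.append(feats)
    cur = voxelwise_r2(np.hstack(stack), Y)
    gain = cur if prev is None else cur - prev
    print(f"+{name}: voxels with positive gain = {int((gain > 0).sum())}")
    prev = cur
```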
Pairwise predictive power comparison for syntactic parse methods and BERT To compare relative extra syntactic information in various parsebased representations, we compute the difference in R2 between every pair of representations from {CC, CI, DEP}. For this analysis, we ignore INC since it performed worst, as shown in Fig. 2. Thus, we plot % of significant ROI voxels for {CC, DEP}- {CC} and other such feature-pairwise combinations in Fig. 5 for both hemispheres. We make the following observations from Fig. 5. (i) CC and CI show greater variance in brain predictivity (ATL and PTL for both hemispheres, MFG, IFGOrb and dmPFC of left hemisphere) even after controlling for either DEP. Also, CC and DEP show greater variance after controlling for CI. However, DEP or CI have negligible % of ROI voxels after controlling for CC, specifically for temporal lobe (ATL and PTL) and frontal regions (IFG and MFG). Thus, we can conclude that constituency trees, specifically CC, encode similar syntactic information as DEP in temporal lobe (ATL and PTL) and frontal regions (IFG and MFG). Also, DEP based on dependency trees does not have additional syntactic information compared to constituency trees, except for AG, IFGOrb, PCC and dmPFC regions. (ii) While BERT provides improvement over CC, CI and DEP in most brain areas (especially in MFG and dMPFC), surprisingly in AG and IFG, BERT does not provide much additive value. ## 6 Discussion In this section, we correlate our empirical findings about syntactic parsing methods with previously proposed neuroscience theories. From Fig. 4, we observe that activity in the left temporal lobe (ATL and PTL) seems to be predicted well using either CC or basic syntactic (PD) representations. These results are supported by theory of Matchin and Hickok (2020), who concluded that parts of the PTL are involved in hierarchical lexical-syntactic structure building, while the ATL is a knowledge store of entities. While activity in the left IFGOrb, left PCC, and left AG seems to be better modeled by basic syntactic feature (PD) representations, that in MFG seems to be related to CC representations. DEP embeddings seem to perform better for activity in the left AG, left ATL and left IFG. This supports the theory of Matchin and Hickok (2020), which reports that ATL is a knowledge store of entities and AG is a store of thematic relations between entities. A sub-ROI in the left AG, namely parietal area G inferior (PGi) has significantly more number of voxels sensitive to dependency features when we control for all other syntactic features. On the other hand sub-ROIs in the right temporo-parietooccipital junction (TPOJ) are more sensitive to incremental top-down syntactic features (Appendix ![8_image_0.png](8_image_0.png) Fig. 16). While it is known that AG is sensitive to stimuli that are connected through a narrative rather than unconnected words (Baker et al., 2018), the current findings suggest that distinct sub-ROIs within AG are related to different syntactic features. Further sub-regions in the prefrontal cortex such as Brodmann area (BA) 44 and the inferior frontal junction area (IFJa) also seem to be related to representations of dependency parser (Appendix Fig. 19). The results in the prefrontal cortex seem to concur with the observations of Grodzinsky and Friederici (2006) and Kaan and Swaab (2002) who have shown that Broca's area (Brodmann areas 44 and 45) has higher brain activation while processing complex sentences. 
Since narrative listening also involves processing highly complex sentences, consistent activation found in Left Brodmann areas 44 and 45 may relate to parsing of sentences or to see if they had distinct meanings. The right hemisphere activation in the language network (AG, ATL, PTL, IFG, MFG, IFGOrb, PCC, and dMPFC) on the whole seems to be associated with basic syntactic features such as word length, word frequency, word count as embodied in CM representations. In linguistic studies, INC has been shown to be effective in checking if sentences with different syntax, have the same or different meaning. This in line with our observation that representations from INC parser seem to be more related to language regions (inferior frontal gyrus, IFG) in the right hemisphere as shown in Fig. 19. Overall, Grodzinsky and Friederici (2006) concluded that syntax processing is not limited to specific regions (left IFG or Broca's area). Along with IFG, other regions such as PTL, ATL, MFG, and IFGOrb are also involved in different stages of syntax processing (Oota et al., 2022c). Our results (Fig. 2) also seem to support distributed representation of syntax across the language network. Moreover, our results clearly show the kind of syntax encoded by these individual ROIs. 7 Conclusion We studied the relative importance of multiple constituency and dependency syntax parsing methods for fMRI prediction for the listening task. We find that (1) both CC and DEP are effective; CC is more important than CI, (2) CC is better in temporal cortex and MFG, while DEP is better in AG and PCC, (3) while BERT embeddings seem to be the most effective, syntactic embedding methods also explain additional variance for a few ROIs. In line with previous works, we find that syntax and semantic processing is spread across multiple brain areas. ## 8 Limitations Although these experiments were performed on only one dataset, it is indeed large with data from 82 participants. That said, it will be nice to perform experiments with more listening datasets. We experiment with a linear encoder - Ridge regression. We plan to experiment with more complex encoders as part of future work. This work was done on data related to English stories only. Several other languages belong to the same language family as English (Malik-Moraleda et al., 2022). While we can expect the insights and learnings to hold across languages in the same language family as English, empirical validation needs to be done. For languages in other language families, syntactic structure may be very different from English. Hence, more work needs to be done to check which of these insights hold for datasets in other language families. This work was conducted on a dataset where the participants were involved in the listening task. However, the stimuli was represented in the text form. We believe that an audio form of the stimuli can lead to improved insights. Thus, more work needs to be done to design representations (like prosodic features) for auditory stimuli. ## 9 Ethical Statement We did not create any new fMRI data as part of this work. We used Narratives-Pieman dataset which is publicly available without any restrictions. Narratives dataset can be dowloaded from https://datasets.datalad. org/?dir=/labs/hasson/narratives. Please read their terms of use4for more details. We do not foresee any harmful uses of this technology. ## References Khai Loong Aw and Mariya Toneva. 2023. Training language models to summarize narratives improves brain alignment. 
In *The Eleventh International Conference on Learning Representations*. Cordell M Baker, Joshua D Burks, Robert G Briggs, Andrew K Conner, Chad A Glenn, Kathleen N Taylor, Goksel Sali, Tressie M McCoy, James D Battiste, Daniel L O'Donoghue, et al. 2018. A connectomic atlas of the human cerebrum—chapter 7: the lateral parietal lobe. *Operative Neurosurgery*, 15(suppl_1):S295–S349. Douglas K Bemis and Liina Pylkkänen. 2011. Simple composition: A magnetoencephalography investigation into the comprehension of minimal linguistic phrases. *Journal of Neuroscience*, 31(8):2801–2814. Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. *Journal of the* Royal statistical society: series B (Methodological), 57(1):289–300. Shohini Bhattasali, John Hale, Christophe Pallier, Jonathan Brennan, Wen-Ming Luh, and R Nathan 4https://datasets.datalad.org/labs/hasson/ narratives/stimuli/README Spreng. 2018. Differentiating phrase structure parsing and memory retrieval in the brain. Proceedings of the Society for Computation in Linguistics, 1(1):74– 80. Jeffrey R Binder, Lisa L Conant, Colin J Humphries, Leonardo Fernandino, Stephen B Simons, Mario Aguilar, and Rutvik H Desai. 2016. Toward a brainbased componential semantic representation. *Cognitive neuropsychology*, 33(3-4):130–174. Idan Blank, Zuzanna Balewski, Kyle Mahowald, and Evelina Fedorenko. 2016. Syntactic processing is distributed across the language system. *Neuroimage*, 127:307–323. Ted Briscoe. 1996. The syntax and semantics of punctuation and its use in interpretation. In *Proceedings of* the Association for Computational Linguistics Workshop on Punctuation, pages 1–7. Citeseer. Alfonso Caramazza and Edgar B Zurif. 1976. Dissociation of algorithmic and heuristic processes in language comprehension: Evidence from aphasia. *Brain* and language, 3(4):572–582. Charlotte Caucheteux, Alexandre Gramfort, and JeanRemi King. 2021a. Disentangling syntax and semantics in the brain with deep networks. In International Conference on Machine Learning, pages 1336–1348. PMLR. Charlotte Caucheteux, Alexandre Gramfort, and JeanRemi King. 2021b. Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 3635–3644. Laurent Cohen, Philippine Salondy, Christophe Pallier, and Stanislas Dehaene. 2021. How does inattention affect written and spoken language processing? *cortex*, 138:212–227. Rutvik H Desai, Usha Tadimeti, and Nicholas Riccardi. 2023. Proper and common names in the semantic system. *Brain Structure and Function*, 228(1):239– 254. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Irene-Anna N Diakidoy, Polyxeni Stylianou, Christina Karefillidou, and Panayiota Papageorgiou. 2005. The relationship between listening and reading comprehension of different types of text at increasing grade levels. *Reading psychology*, 26(1):55–80. Evelina Fedorenko, Idan Asher Blank, Matthew Siegelman, and Zachary Mineroff. 2020. Lack of selectivity for syntax relative to word meanings throughout the language network. *Cognition*, 203:104348. 
## A Hyper-Parameter Settings

All experiments were conducted on a machine with 1 NVIDIA GeForce GTX GPU with 16GB GPU RAM. We used banded ridge regression with the following parameters: MSE loss function, and L2 decay (λ) varied from 10^-1 to 10^-3; the best λ was chosen by tuning on validation data; the number of cross-validation runs was 4.

## B Constituency Complete Trees

We now present the largest subtrees completed by a few of the words in the sentence: "I began my illustrious career as a hard-boiled reporter in the Bronx where I toiled for the Ram, uh, Fordham University's student newspaper".
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? Section 8

A2. Did you discuss any potential risks of your work? Not applicable. Section 9

✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did you use or create scientific artifacts?** Section 3

✓ B1. Did you cite the creators of artifacts you used? Section 3

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 9

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3

✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 3

✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3

✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4

## C ✓ **Did you run computational experiments?** Section 5

✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A

✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A

✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5

✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 and 5

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank.

D1.
Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
chen-etal-2023-towards
Towards Imperceptible Document Manipulations against Neural Ranking Models
https://aclanthology.org/2023.findings-acl.416
Adversarial attacks have gained traction in order to identify vulnerabilities in neural ranking models (NRMs), but current attack methods often introduce noticeable errors. Moreover, current methods rely heavily on using a well-imitated surrogate NRM to guarantee the attack effect, making them difficult to use in practice. This paper proposes a framework called Imperceptible DocumEnt Manipulation (IDEM) to produce adversarial documents that are less noticeable to both algorithms and humans. IDEM instructs a well-established generative language model like BART to generate error-free connection sentences, and employs a separate position-wise merging strategy to balance between relevance and coherence of the perturbed text. Evaluation results on the MS MARCO benchmark demonstrate that IDEM outperforms strong baselines while preserving fluency and correctness of the target documents. Furthermore, the separation of adversarial text generation from the surrogate NRM makes IDEM more robust and less affected by the quality of the surrogate NRM.
## Towards Imperceptible Document Manipulations Against Neural Ranking Models

Xuanang Chen1,2, Ben He1,2∗, Zheng Ye3∗, Le Sun2∗, Yingfei Sun1
1University of Chinese Academy of Sciences, Beijing, China
2Institute of Software, Chinese Academy of Sciences, Beijing, China
3South-Central University for Nationalities, Wuhan, China
chenxuanang19@mails.ucas.ac.cn, benhe@ucas.ac.cn, yezheng@scuec.edu.cn, sunle@iscas.ac.cn, yfsun@ucas.ac.cn

## Abstract

Adversarial attacks have gained traction in order to identify vulnerabilities in neural ranking models (NRMs), but current attack methods often introduce noticeable errors. Moreover, current methods rely heavily on using a well-imitated surrogate NRM to guarantee the attack effect, making them difficult to use in practice. This paper proposes a framework called Imperceptible DocumEnt Manipulation (IDEM) to produce adversarial documents that are less noticeable to both algorithms and humans. IDEM instructs a well-established generative language model like BART to generate error-free connection sentences, and employs a separate position-wise merging strategy to balance between relevance and coherence of the perturbed text. Evaluation results on the MS MARCO benchmark demonstrate that IDEM outperforms strong baselines while preserving fluency and correctness of the target documents. Furthermore, the separation of adversarial text generation from the surrogate NRM makes IDEM more robust and less affected by the quality of the surrogate NRM.

## 1 Introduction

Adversarial Information Retrieval (AIR) has been a topic of significant attention from the research community (Davison et al., 2006; Leng et al., 2012; Farooq, 2019). It refers to the scenario in which a portion of the collection can be maliciously manipulated, with Adversarial Web Search (Castillo and Davison, 2010) being a typical example. In the context of the Web, this type of manipulation is commonly referred to as black-hat search engine optimization (SEO) or Web spamming, whose goal is to deceive ranking algorithms by artificially inflating the relevance of targeted Web pages, resulting in undeservedly high rankings for those pages (Gyöngyi and Garcia-Molina, 2005).

Figure 1: In IDEM, a generative language model is instructed to generate a group of connection sentences between the query and the target document, then a position-wise merging strategy is applied to select and position an optimal connection sentence within the target document, in order to promote the ranking (from 88th to 9th) but maintain the fluency measured by perplexity (PPL).

Meanwhile, in the last few years, neural ranking models (NRMs), particularly those that utilize pre-trained language models (PLMs), have demonstrated remarkable performance across a diverse range of text ranking tasks (Lin et al., 2021). Furthermore, these NRMs have also been implemented in various industrial applications (Lin et al., 2021), such as Web search engines (Zou et al., 2021), to improve the accuracy and relevance of search results. However, recent studies have revealed the vulnerabilities of NRMs to adversarial document manipulations (Wu et al., 2022; Liu et al., 2022; Wang et al., 2022; Song et al., 2022), that is, small deliberate perturbations in the input documents can cause a catastrophic ranking disorder in the outcome of NRMs.
This highlights the need to investigate and identify the potential weaknesses of NRMs before their deployment, in order to ensure their robustness and prevent potential risks. Several manipulation techniques have been proposed to maliciously boost the ranking of low-ranked documents, such as PRADA (Wu et al., 2022) and PAT (Liu et al., 2022). Although these adversarial attack methods have shown the ability to fool NRMs by replacing crucial words, or appending query text or other adversarial tokens, they are still subject to two major limitations. Firstly, existing attack methods tend to introduce grammatical errors, impossible expressions, or incoherent text snippets into the original document, making the attack easy to mitigate, such as by perplexity filters (Song et al., 2020) or grammar checkers (Liu et al., 2022). Secondly, existing attack methods heavily rely on a well-imitated surrogate NRM to produce adversarial text, but this requires a lot of in-domain training data collected by querying the victim NRM, which can be infeasible or even unavailable in real-world situations.

To this end, we propose IDEM, an imperceptible document manipulation framework that aims to produce adversarial documents that are less perceptible to both humans and algorithms. As depicted in Figure 1, a well-established generative language model (GLM), such as BART, is first engineered to generate a series of grammatically correct connection sentences between the query and the target document, and a position-wise merging mechanism is then employed to select an optimal connection sentence to be appropriately positioned within the target document. During the generation of connection sentences, we take advantage of the language modeling objective in the GLM without adding any ranking-incentivized objective. This not only helps to produce more natural and fluent text without introducing extra errors that are easy to detect, but also separates the surrogate NRM from the generation process to substantially reduce the dependence of attack performance on the surrogate NRM. Extensive experiments carried out on the widely-used MS MARCO passage ranking dataset indicate that IDEM is able to achieve better attack performance against black-box NRMs than recent baselines, regardless of whether the surrogate NRM is similar to or far from the victim NRM. According to both automatic and human evaluations, the adversarial documents produced by IDEM can maintain semantic fluency and text quality.

Our contributions are three-fold: 1) We propose an imperceptible document manipulation framework, IDEM, that employs contextualized blank-infilling and coherent merging for the ranking attack. 2) Extensive attack experiments under three types of surrogate NRMs show that IDEM is able to robustly promote the rankings of the target documents. 3) Automatic and human evaluations of text quality indicate that IDEM is capable of generating more natural and fluent adversarial documents.

## 2 Problem Statement

**Task description.** Typically, given a query q and candidates D = {d1, d2, · · · , d|D|} collected by a retrieval model (e.g., BM25; Robertson et al., 1995) from the whole corpus, an NRM MV (i.e., the victim NRM) produces relevance scores s(q, di) for all query-document pairs, outputting a re-ranking list L = [d1, d2, · · · , d|D|], wherein s(q, d1) > s(q, d2) > · · · > s(q, d|D|).
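As a concrete illustration of this re-ranking setup, the sketch below scores and sorts a few candidates with a publicly available cross-encoder; the checkpoint name and the toy candidate texts are illustrative assumptions, not necessarily the victim NRM studied in this paper.

```python
from sentence_transformers import CrossEncoder

# Illustrative stand-in for a victim NRM M_V; any cross-encoder re-ranker
# exposing the same predict() interface could be used here.
victim_nrm = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-12-v2")

query = "what is adversarial web search"
candidates = [
    "Adversarial web search studies the manipulation of ranking algorithms.",
    "Black-hat SEO inflates the relevance of targeted web pages.",
    "Neural ranking models score query-document pairs with pre-trained LMs.",
]

# s(q, d_i) for every candidate, then sort to obtain the re-ranking list L.
scores = victim_nrm.predict([(query, doc) for doc in candidates])
reranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)

for rank, (doc, score) in enumerate(reranked, start=1):
    print(f"{rank:2d}  {score:.4f}  {doc[:60]}")
```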
In the adversarial ranking attack, an adversarial document is constructed by applying a deliberate perturbation pi to the target document di ∈ L such that it can be ranked higher by the victim NRM. Existing attack methods search and find perturbations on different levels of granularity, such as replacing important words with synonyms (e.g., PRADA) or adding an extra text piece (e.g., PAT). Ideally, adversarial documents should display semantic consistency and fluency to avoid detection. However, there is still room for improvement regarding the imperceptibility of adversarial examples produced by existing methods; such examples can be found in Appendix A.5.

**Task setting.** For consistency with real-world situations (e.g., black-hat SEO), akin to recent studies (Wu et al., 2022; Liu et al., 2022), this work focuses on the decision-based black-box attack setting, where attackers can only obtain a limited number of queries {qi} (i = 1, ..., m) and their corresponding ranking lists {Li = [di,1, di,2, · · · , di,|D|]} with rank positions by querying the victim NRM MV, but have no access to the exact relevance scores, as well as the architecture, parameters, gradients, or training data of the victim NRM. In this setting, a weakly supervised training set TS = {(qi, di,j, di,k)} (i = 1, ..., m) is usually collected on the basis of {(qi, Li)}, wherein 1 ≤ j < k ≤ |D| so that di,j and di,k are considered as relatively positive and negative documents, respectively. A surrogate NRM MS is trained on this TS to imitate the victim NRM MV using a pairwise loss, such as hinge loss (Wu et al., 2022). Besides, existing methods require this surrogate NRM to be functionally similar to the victim NRM, as it determines the direction of the adversarial attack (Liu et al., 2022). This means the attack performance of existing methods heavily depends on the surrogate NRM, but it needs lots of in-domain training samples (i.e., m is large) as well as the associated cost of querying the victim NRM, while our proposed IDEM can achieve a reliable attack effect even with an out-of-domain (OOD) surrogate NRM.

## 3 Method

As outlined in Figure 1 and Algorithm 1, IDEM contains two stages: the first is to obtain a series of connection sentences that can bridge the semantic gap between the query and the target document. After that, these connection sentences are considered at all inter-sentence positions of the target document to trade off semantic fluency and attacked relevance, and the best one is output as the final adversarial document.

## 3.1 Connection Sentences Generation

To obtain more natural and fluent connection sentences, we choose to draw support from well-established GLMs, such as BART in our default setting. The BART model (Lewis et al., 2020) has been pre-trained using a text infilling loss function, which enables it to fill in blanks within the context in a more flexible way. Hence, in this section, we use the BART model as an example to illustrate how to engineer GLMs in a blank-infilling pattern to generate connection sentences that include information about both the query and the document. Specifically, given a query q and a target document di ∈ L, which is simplified as d from here, we concatenate them with a blank in between as seen in the template as follows.

$$q \oplus \text{It is known that } \underline{\quad\quad} \oplus d \tag{1}$$

wherein query q ends with appropriate punctuation, and the blank is replaced by a single special token defined and used in the GLM, such as the [MASK] token in the BART model.
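A minimal sketch of this blank-infilling step with Hugging Face BART is given below; the sampling settings and the way the infilled sentence is extracted are simplifying assumptions (the sampling budget and length limits actually used are described next and in Algorithm 1).

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.eval()

query = "what is adversarial web search?"  # q, ending with punctuation
document = "Black-hat SEO inflates the relevance of targeted web pages."  # d

# Eq. 1: q ⊕ "It is known that" <mask> ⊕ d
template = f"{query} It is known that {tokenizer.mask_token} {document}"
inputs = tokenizer(template, return_tensors="pt", truncation=True, max_length=1024)

with torch.no_grad():
    outputs = model.generate(
        inputs.input_ids,
        do_sample=True,          # sampling for diverse infills
        top_k=50,
        num_return_sequences=8,  # number of candidates per round (assumption)
        max_length=1024,
    )

for seq in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    # BART rewrites the whole template with the blank filled in; one simple
    # (approximate) way to recover the infill is to strip the known context.
    infill = seq.replace(document, "").replace(query, "").strip()
    print(infill)
```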
Given both query q and document d as the context, the prefix text "It is known that" serves as a prompt (Liu et al., 2021) to instruct the BART model to output more informative text that is related to both q and d. The choice of prompt words can have an impact on the final attack effect, which is analyzed in Section 4.3. Afterward, the BART model takes Eq. 1 as the input and fills a variable number of tokens (i.e., sentences with varying lengths) into the blank position. Herein, we do not incorporate extra ranking-incentivized objectives into the BART model in order to produce grammatically correct connection sentences well suited to the surrounding text. Besides, we employ the Top-k sampling (Fan et al., 2018) strategy to ensure the diversity of the generated sentences.

**Algorithm 1** IDEM
**Input:** a query q, a target document d, a surrogate NRM MS for a victim NRM MV
**Parameter:** the number of sentences in the target document |d|, the max number of connection sentences N, sample times K, sample size M, the max word length of a connection sentence L
**Output:** an adversarial document d_adv
1: **Stage 1: Connection Sentences Generation**
2: Let Sc = {}
3: **for** 1 to K **do**
4:   **while** |Sc| < N **do**
5:     Sample M connection sentence candidates {su} (u = 1, ..., M) from the BART model by taking Eq. 1 as the input
6:     **if** the length of su is smaller than L **then**
7:       Keep su in Sc
8:     **end if**
9:   **end while**
10: **end for**
11: **Stage 2: Merging with Original Document**
12: **for** su in Sc **do**
13:   **for** position index v from 0 to |d| **do**
14:     Get an adversarial candidate d_adv⟨u,v⟩ as in Eq. 2, evaluate its *coherence* score (Eq. 3) and *relevance* score (Eq. 4), and compute the weighted merging score (Eq. 5)
15:   **end for**
16: **end for**
17: Find and save the top-1 ranked d_adv⟨u,v⟩ as d_adv
18: **return** d_adv

As depicted in Algorithm 1, we sample multiple times (i.e., K), take M candidates each time, and save at most N connection sentences, denoted as Sc = {s1, s2, · · · , sN}. We also limit the maximum length of the connection sentence to L words to control the interference in the semantic content of the original document. All connection sentences in Sc are considered in the next stage for the merging with the original document d.

## 3.2 Merging with Original Document

As Transformer-based NRMs have a positional bias towards the start of the document (Jiang et al., 2021; Hofstätter et al., 2021), adding adversarial text pieces at the beginning can usually achieve a remarkable attack effect (Wang et al., 2022), but this also has the obvious drawback that the attack can be easily detected (Liu et al., 2022). Thus, to balance between the attack effect and the fidelity of the perturbed document content, we design a position-wise merging strategy to place an appropriate connection sentence at an optimal position within the original target document.

Specifically, given a query q, a target document with multiple sentences d = s1, s2, · · · , s|d| and a set of connection sentences Sc, we evaluate a large number of candidate adversarial documents with respect to the various combinations of d and Sc. For each connection sentence su ∈ Sc, we merge su with the target document d by placing it at position index v (i.e., an integer from 0 to |d|), obtaining adversarial candidates in the form of Eq. 2.
$$d_{\langle u,v\rangle}^{adv}=\begin{cases}\overline{s}_{u}\oplus d & v=0\\ d_{1\to v}\oplus\overline{s}_{u}\oplus d_{(v+1)\to|d|} & 0<v<|d|\\ d\oplus\overline{s}_{u} & v=|d|\end{cases}\tag{2}$$

wherein $d_{a\to b}$ means the text piece that consists of consecutive sentences from $s_a$ to $s_b$, and the subscript ⟨u, v⟩ means the connection sentence su is inserted at index v of the target document d. Subsequently, we examine all of the candidate adversarial documents $d^{adv}_{\langle u,v\rangle}$ and find out the best one as the final adversarial document $d^{adv}$.

An effective adversarial example should be imperceptible to human judges yet misleading to NRMs (Wu et al., 2022). If the semantics of the added connection sentence differs greatly from the surrounding text in the target document, the resulting content will be incoherent and easily noticed by humans. Thus, with the help of the next sentence prediction (NSP) function in the pre-trained BERT model (Devlin et al., 2019), we define a coherence score on the junction between the connection sentence and the original target document. Specifically, when the connection sentence su is placed in the middle of the target document d (i.e., 0 < v < |d|), both the text pieces before and after the connection sentence are fed into the NSP function to get the *coherence* score as seen in Eq. 3.

$$coh(d^{adv}_{\langle u,v\rangle})=0.5\times[f_{nsp}(d_{1\to v},\ \overline{s}_{u}\oplus d_{(v+1)\to|d|})+f_{nsp}(d_{1\to v}\oplus\overline{s}_{u},\ d_{(v+1)\to|d|})]\tag{3}$$

wherein $f_{nsp}(\mathrm{A},\mathrm{B})$ is the NSP score of text A being followed by text B. Similarly, when su is attached to the beginning or the end of the document d (i.e., v = 0 or v = |d|), the coherence score is given as $f_{nsp}(\overline{s}_{u}, d)$ or $f_{nsp}(d, \overline{s}_{u})$, respectively.

Apart from the semantic coherence, another important factor that impacts the feasibility of the attack is the relevance between the query q and the candidate $d^{adv}_{\langle u,v\rangle}$. Herein, due to the black-box setting as described in Section 2, we need a surrogate NRM MS to estimate the *relevance* score as in Eq. 4.

$$rel(q,d_{\langle u,v\rangle}^{adv})=\mathcal{M}_{S}(q,d_{\langle u,v\rangle}^{adv})\tag{4}$$

wherein the surrogate NRM MS is not required to be heavily ranking-consistent with the victim NRM MV but is only expected to distinguish which connection sentence contains spuriously relevant information (e.g., query keywords). In other words, the surrogate NRM does not participate in the generation of connection sentences, so that the adversarial information is from the BART model rather than the surrogate NRM. Thus, IDEM does not strictly require a well-imitated surrogate NRM that is trained on a large number of in-domain training samples. As demonstrated later in Section 4.2, even when using an OOD surrogate NRM that is only slightly better than BM25, our proposed IDEM still achieves relatively high attack performance.

Finally, to trade off the semantic coherence and relevance, as seen in Eq. 5, we resort to a weighted sum of them as the merging score for each $d^{adv}_{\langle u,v\rangle}$.

$$score_{m}(q,d^{adv}_{\langle u,v\rangle})=\alpha\times coh(d^{adv}_{\langle u,v\rangle})+(1-\alpha)\times rel(q,d^{adv}_{\langle u,v\rangle})\tag{5}$$

wherein α is a weight factor, and both scores are transformed to [0, 1] by the min-max normalization before being added together.
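Putting Eqs. 2 to 5 together, the merging stage can be sketched as follows; the NSP scorer and the surrogate checkpoint below are illustrative stand-ins (the paper's surrogate is a separately trained BERT-Base ranker), and sentence splitting is assumed to have been done already.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction
from sentence_transformers import CrossEncoder

nsp_tok = BertTokenizer.from_pretrained("bert-base-uncased")
nsp_model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
surrogate = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # stand-in for M_S

def nsp_score(text_a: str, text_b: str) -> float:
    """f_nsp(A, B): probability that B follows A, via BERT's NSP head."""
    enc = nsp_tok(text_a, text_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nsp_model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 0].item()  # label 0 = "is next"

def merge_and_select(query, doc_sents, connection_sents, alpha=0.5):
    candidates, coh, rel = [], [], []
    for s in connection_sents:
        for v in range(len(doc_sents) + 1):            # insertion index, Eq. 2
            before, after = " ".join(doc_sents[:v]), " ".join(doc_sents[v:])
            cand = " ".join(x for x in (before, s, after) if x)
            if v == 0:
                c = nsp_score(s, after)
            elif v == len(doc_sents):
                c = nsp_score(before, s)
            else:                                      # Eq. 3
                c = 0.5 * (nsp_score(before, f"{s} {after}") +
                           nsp_score(f"{before} {s}", after))
            candidates.append(cand)
            coh.append(c)
            rel.append(float(surrogate.predict([(query, cand)])[0]))  # Eq. 4

    def minmax(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo + 1e-9) for x in xs]

    coh, rel = minmax(coh), minmax(rel)
    scores = [alpha * c + (1 - alpha) * r for c, r in zip(coh, rel)]  # Eq. 5
    return candidates[max(range(len(scores)), key=scores.__getitem__)]
```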
Out of all the candidates d adv ⟨u,v⟩ , the one with the highest merging score is chosen as the final adversarial document d adv. ## 4 Experiments 4.1 Experimental Setup Dataset. Akin to previous works (Wu et al., 2022; Wang et al., 2022; Liu et al., 2022), we employ popular MS MARCO passage dataset (Nguyen et al., 2016) for our experiments, which contains about 8.8M passages (seen as documents in this paper) as the corpus. The evaluation data contains a Dev set with 6,980 queries and an Eval set with 6,837 queries. In addition, we use Natural Question (NQ) dataset (Kwiatkowski et al., 2019) that has been processed by Karpukhin et al. (2020) as the OOD training source for the surrogate NRM. Victim NRM. Akin to Liu et al. (2022), the 'msmarco-MiniLM-L-12-v2' model publicly available at Sentence-Transformers (Reimers and Gurevych, 2019) is used as the representative victim NRM (i.e., MV ) in our study. This victim NRM adopts MiniLM (Wang et al., 2020) as the backbone, and it is fine-tuned on MS MARCO in a cross-encoder architecture (Nogueira and Cho, 2019). Overall, MV achieves highly effective ranking performance on the MS MARCO Dev set as seen in Table 2. The transfer attacks from MV to other types of victim NRMs (e.g., MonoT5) are available in Section 4.2. Surrogate NRMs. To examine the practicality of ranking attack methods in real-world situations, wherein access to the victim ranking system could be limited or unavailable, we experiment with three kinds of surrogate NRMs: MS1 is trained on all 6,837 Eval queries, MS2 is trained on the randomly sampled 200 Eval queries, and MS3 is trained on the OOD NQ queries. More details of surrogate NRMs can be found in Appendix A.1. Target documents. Following Wu et al. (2022), we evaluate attack methods on randomly sampled 1K Dev queries with two types of target documents, i.e., Easy-5 and Hard-5, which are sampled from the re-ranked results by the victim NRM on the top1K BM25 candidates. Determining whether a document is "Easy" or "Hard" depends on the difficulty of boosting it into the top-10 or top-50 positions. Specifically, **Hard-5** is the five bottom-ranked documents, and **Easy-5** is the five documents ranked between [51, 100] in which one document is randomly picked out of every 10 documents. Details of IDEM. By default, pre-trained BARTBase (Lewis et al., 2020) is instructed to generate connection sentences. The impacts of other GLMs, including pre-trained T5 (Raffel et al., 2020) and fine-tuned GPT-2 (Donahue et al., 2020), are also examined in Section 4.3. In the generation stage, k in sampling strategy is set as 50, and we sample 10 or 50 times (50 ones per time) and save at most 100 or 500 connection sentences with a max length of 12 words (i.e., M = 50, K = 10/50, N = 100/500 and L= 12). In the merging stage, we employ the NSP function in pre-trained BERT-Base to evaluate the coherence, and the α in Eq.5 is set as 0.5 (0.1) for Easy-5 (Hard-5) target documents. The impacts of L and N are available in Section 4.3, and the impact of α is analyzed in Appendix A.4. Compared methods. We compare the following attack methods against NRMs: **Query+** (Liu et al., 2022) directly adds the query text at the beginning of the target document. **PRADA** (Wu et al., 2022) finds and replaces important words with synonyms in the target document. 
**Brittle-BERT** (Wang et al., 2022) and PAT (Liu et al., 2022) append a few tokens at the beginning of the target document, the former only considers the ranking-oriented objective, while the later further considers semantic and fluency constraints. For fair comparisons, PRADA perturbs at most 20 tokens, Brittle-BERT and PAT add at most 12 tokens. More details of these attack baselines are available in Appendix A.2. Metrics. As for the re-ranking results, we report official MRR@10/1K metrics on the MS MARCO Dev set. Akin to Liu et al. (2022), we calculate the overlap of top-10 (Inter@10) and the Rank Biased Overlap (Webber et al., 2010) (RBO@1K, p is set as 0.7) to measure the ranking consistency between the surrogate and victim NRMs. As for the attack results, we report the percentage of successfully boosted target documents (i.e., attack successful rate, ASR) following Wu et al. (2022). Akin to Liu et al. (2022), we also report the average boosted ranks (*Boost*) of the target documents, and the percentage of the target documents that are promoted into top-10 (% r ≤ 10) and top-50 (% r ≤ 50). In addition, we measure the average perplexity (PPL) calculated using a pre-trained GPT-2 model (Radford et al., 2019), where a *lower* PPL value reflects better fluency of the adversarial documents as suggested by Kann et al. (2018) and Lei et al. (2022). ## 4.2 Results IDEM demonstrates the ability to achieve comparable attack performance while not greatly diminishing the semantic fluency. The attack results are summarized in Table 1, where all values are averaged across the 5K target documents for 1K Dev queries. When the query is added to the beginning of the target document (referred to as Query+), a notable increase in ranking can be observed, which is as expected. However, this approach negatively impacts semantic fluency, resulting in a loss of approximately 8 (17) perplexity points on Easy-5 (Hard-5) target documents. Ideally, when the victim NRM is freely accessible, a sufficient amount of surrogate training data is obtainable to train a well-imitated surrogate model, like MS1 in Table 2. 
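For reference, the ranking-consistency measures reported in Table 2 can be computed as sketched below; this uses a truncated form of RBO (Webber et al., 2010), which is effectively exact at depth 1K with p = 0.7, and the toy ranking lists are made up.

```python
def inter_at_k(run_a, run_b, k=10):
    """Overlap of the top-k document ids of two rankings (Inter@k)."""
    return len(set(run_a[:k]) & set(run_b[:k])) / k

def rbo(run_a, run_b, p=0.7, depth=1000):
    """Truncated rank-biased overlap: (1 - p) * sum_d p^(d-1) * A_d,
    where A_d is the overlap of the two depth-d prefixes divided by d.
    With p = 0.7 the weights vanish long before depth 1000."""
    seen_a, seen_b = set(), set()
    score = 0.0
    for d in range(1, min(depth, len(run_a), len(run_b)) + 1):
        seen_a.add(run_a[d - 1])
        seen_b.add(run_b[d - 1])
        score += (p ** (d - 1)) * (len(seen_a & seen_b) / d)
    return (1 - p) * score

# Toy example: document ids ranked by the victim NRM vs. a surrogate NRM.
victim    = ["d3", "d1", "d7", "d2", "d9", "d5"]
surrogate = ["d1", "d3", "d2", "d7", "d5", "d9"]
print(inter_at_k(victim, surrogate, k=5), round(rbo(victim, surrogate), 3))
```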
Table 1: Attack results under different surrogate NRMs; the first five metric columns are on Easy-5 target documents and the last five on Hard-5 target documents.

| Surrogate NRM | Method | ASR | % r≤10 | % r≤50 | Boost | PPL↓ | ASR | % r≤10 | % r≤50 | Boost | PPL↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| – | Original | – | – | – | – | 37.3 | – | – | – | – | 50.5 |
| – | Query+ | 100.0 | 86.9 | 99.2 | 70.3 | 45.4 | 100.0 | 47.8 | 78.3 | 955.1 | 67.5 |
| MS1 | PRADA | 77.9 | 3.52 | 46.2 | 23.2 | 94.4 | 68.0 | 0.02 | 0.10 | 65.2 | 154.4 |
| MS1 | Brittle-BERT | 98.7 | 81.3 | 96.7 | 67.3 | 107.9 | **100.0** | **61.5** | **85.9** | **965.5** | 152.5 |
| MS1 | PAT | 89.6 | 30.6 | 73.8 | 41.9 | 50.9 | 98.0 | 6.24 | 20.1 | 589.1 | 71.4 |
| MS1 | IDEM N=100 | 99.3 | 77.9 | 97.0 | 67.0 | **36.2** | 99.5 | 41.4 | 66.9 | 875.6 | **54.6** |
| MS1 | IDEM N=500 | **99.7** | **87.4** | **99.0** | **70.3** | 36.4 | 99.8 | 54.3 | 79.3 | 933.0 | 54.9 |
| MS2 | PRADA | 69.6 | 1.36 | 35.0 | 17.4 | 90.7 | 66.0 | 0.00 | 0.10 | 49.3 | 152.1 |
| MS2 | Brittle-BERT | 81.6 | 33.2 | 69.7 | 36.4 | 131.3 | 94.8 | 8.98 | 25.4 | 565.1 | 179.5 |
| MS2 | PAT | 61.9 | 8.46 | 37.3 | 12.5 | 49.3 | 84.4 | 0.82 | 3.40 | 221.7 | 66.2 |
| MS2 | IDEM N=100 | 96.8 | 65.3 | 91.8 | 60.7 | **36.5** | 97.6 | 31.7 | 57.0 | 822.0 | **54.7** |
| MS2 | IDEM N=500 | **98.7** | **74.8** | **95.4** | **65.1** | 37.0 | **99.1** | **39.6** | **67.4** | **890.7** | 55.4 |
| MS3 | PRADA | 71.5 | 1.86 | 37.5 | 19.1 | 91.5 | 71.9 | 0.00 | 0.08 | 73.4 | 168.7 |
| MS3 | Brittle-BERT | 90.0 | 43.4 | 80.1 | 46.2 | 117.7 | **99.9** | 17.7 | 47.6 | 845.2 | 156.8 |
| MS3 | PAT | 51.1 | 2.70 | 22.9 | 2.01 | 46.8 | 79.0 | 0.00 | 0.66 | 92.9 | 64.2 |
| MS3 | IDEM N=100 | 97.2 | 57.3 | 89.8 | 58.1 | **36.9** | 99.2 | 23.0 | 48.8 | 805.7 | **55.2** |
| MS3 | IDEM N=500 | **98.8** | **65.3** | **93.8** | **61.9** | 37.7 | 99.8 | **29.1** | **57.9** | **866.2** | 56.0 |

As demonstrated in Table 1, when applied in conjunction with MS1, the use of word-level synonym substitutions (i.e., PRADA) does not provide superior attack performance, but adding adversarial tokens at the beginning of the target documents (i.e., Brittle-BERT and PAT) yields better attack performance. However, adversarial documents generated by these prior methods, particularly PRADA and Brittle-BERT, exhibit excessively high perplexity (PPL) values. In contrast, our proposed IDEM not only achieves comparable attack performance, exemplified by its superior performance on Easy-5 target documents, but also mitigates the detrimental effects on text fluency, as evidenced by the lower PPL values on its adversarial documents.

**IDEM is more robust and much less affected by the surrogate NRM, even when it is an OOD NRM.** When access to the victim NRM (i.e., MV) is restricted, as shown in Table 2, there is a notable discrepancy between the surrogate MS2 and the victim MV, as only a limited number of in-domain training samples can be used. When working with MS2, the attack performances of all methods, particularly PAT and Brittle-BERT, are greatly diminished as can be observed in Table 1. For example, when MS1 is changed to MS2, the attack performance (% r ≤ 10) of Brittle-BERT drops dramatically from 81.3 to 33.2 on Easy-5 target documents, and from 61.5 to 8.98 on Hard-5 target documents. Additionally, the semantic fluency of adversarial documents produced by Brittle-BERT also decreases, as evidenced by an increase in PPL from 107.9 to 131.3 on Easy-5 target documents.

Table 2: Ranking effectiveness on the MS MARCO Dev set and ranking consistency with the victim NRM MV.

| Model | MRR@10 | MRR@1K | Inter@10 | RBO@1K |
|---|---|---|---|---|
| BM25 | 18.4 | 19.5 | 22.8 | 31.5 |
| MV | 39.5 | 40.3 | – | – |
| MS1 | 37.0 | 37.8 | 73.1 | 66.2 |
| MS2 | 23.0 | 24.2 | 41.0 | 31.2 |
| MS3 | 21.0 | 22.2 | 27.3 | 37.6 |

Furthermore, when access to the victim NRM is not available, only publicly available OOD training samples can be used, yielding a more discrepant surrogate NRM, like MS3 in Table 2. In this situation, as shown in Table 1, the performances of prior attack methods are also not ideal, especially PRADA and PAT. However, when working with both MS2 and MS3, IDEM demonstrates the best attack results among all baselines while preserving the semantic fluency of target documents.
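The attack metrics in the tables above (ASR, % r ≤ k, and the average Boost) amount to simple bookkeeping over rank positions before and after manipulation; the sketch below reflects our reading of their definitions in Section 4.1, and the rank pairs in the example are made up.

```python
def attack_metrics(rank_pairs):
    """rank_pairs: list of (rank_before, rank_after) for each target document,
    where rank 1 is the best position in the re-ranking list."""
    n = len(rank_pairs)
    asr   = 100.0 * sum(after < before for before, after in rank_pairs) / n
    top10 = 100.0 * sum(after <= 10 for _, after in rank_pairs) / n
    top50 = 100.0 * sum(after <= 50 for _, after in rank_pairs) / n
    boost = sum(before - after for before, after in rank_pairs) / n
    return {"ASR": asr, "% r<=10": top10, "% r<=50": top50, "Boost": boost}

# Toy example: five target documents promoted by an attack.
print(attack_metrics([(88, 9), (95, 4), (990, 120), (73, 51), (60, 66)]))
```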
For more details, including the attack efficiency and adversarial examples for different attack methods, please refer to Appendixes A.3 and A.5, respectively.

**IDEM generates more general adversarial documents that can be well transferred across different victim NRMs.** To examine the transferability of adversarial documents, considering that the target victim NRM is subject to continuous improvement, updates, or even changes, we conduct experiments with three different NRMs as the attack targets: ELECTRA ('ms-marco-electra-base'; Reimers and Gurevych, 2019), MonoT5 ('monot5-base-msmarco'; Nogueira et al., 2020) and ColBERT (Khattab and Zaharia, 2020). The adversarial documents were generated by attacking the MV ('ms-marco-MiniLM-L-12-v2') NRM and subsequently used to attack other victim NRMs. Due to space constraints, Table 3 primarily focuses on presenting the cross-attack results of all adversarial target documents generated using the surrogate NRM MS2 in both Easy-5 and Hard-5 sets.

Table 3: Cross-attack results against different victim NRMs, using adversarial documents generated with surrogate NRM MS2 (Easy-5 and Hard-5 sets combined).

| Victim NRM | Method | ASR | % r≤10 | Boost |
|---|---|---|---|---|
| ELECTRA | PRADA | 44.6 | 0.94 | 14.6 |
| ELECTRA | Brittle-BERT | 80.0 | 23.6 | 216.4 |
| ELECTRA | PAT | 57.9 | 4.7 | 49.5 |
| ELECTRA | IDEM N=100 | 94.7 | 47.4 | 336.6 |
| ELECTRA | IDEM N=500 | 97.5 | 55.9 | 371.8 |
| MonoT5 | PRADA | 51.6 | 0.83 | 9.28 |
| MonoT5 | Brittle-BERT | 85.4 | 17.2 | 253.0 |
| MonoT5 | PAT | 67.8 | 3.71 | 113.6 |
| MonoT5 | IDEM N=100 | 96.2 | 43.5 | 400.8 |
| MonoT5 | IDEM N=500 | 98.2 | 51.7 | 434.9 |
| ColBERT | PRADA | 45.8 | 0.68 | 22.1 |
| ColBERT | Brittle-BERT | 87.9 | 17.2 | 292.8 |
| ColBERT | PAT | 72.2 | 3.87 | 114.1 |
| ColBERT | IDEM N=100 | 96.2 | 46.3 | 418.7 |
| ColBERT | IDEM N=500 | 98.0 | 54.7 | 453.8 |

As observed in Table 3, regardless of the target victim NRM, the attack performances of different methods exhibit a consistent trend: IDEM > Brittle-BERT > PAT > PRADA. This trend is also in line with the results presented in Table 1, indicating that adversarial documents that display higher aggression towards one NRM are more likely to be effective against other NRMs due to shared characteristics among the models, such as exact matching.

## 4.3 Analysis

**Impact of prompt in IDEM.** During the generation of connection sentences, a prompt (in Eq. 1) is utilized to guide the BART model to output more informative text. Herein, we examine the impact of the prompt on the final attack results of IDEM. As seen in Table 4, four different prompts are analyzed, in which "It is known that" produces better attack results, particularly on Hard-5 target documents, and also generates lower PPL values. In contrast, "The fact is that" and "We know that" perform slightly worse on Hard-5 target documents, and "It is about that" performs even worse.
Overall, the choice of prompt has marginal impact on the ASR but mainly affects the degree of promotion in the attack.

Table 4: Attack results of IDEM with different prompt words.

| Prompt | ASR | % r≤10 | Boost | PPL↓ |
|---|---|---|---|---|
| Easy-5 | | | | |
| It is known that | 99.3 | 77.9 | 67.0 | 36.2 |
| It is about that | 98.3 | 60.9 | 60.3 | 39.1 |
| We know that | 99.3 | 74.4 | 65.8 | 36.8 |
| The fact is that | 99.6 | 77.2 | 66.9 | 36.8 |
| Hard-5 | | | | |
| It is known that | 99.5 | 41.4 | 875.6 | 54.6 |
| It is about that | 99.5 | 20.0 | 798.9 | 57.3 |
| We know that | 99.4 | 33.9 | 851.1 | 54.9 |
| The fact is that | 99.5 | 37.2 | 870.0 | 54.9 |

**Position of connection sentence in IDEM.** As mentioned in Section 3.2, IDEM can automatically place an adversarial connection sentence within the target documents. As summarized in Table 5, we count the positions (i.e., index v) of connection sentences. When the surrogate NRM is MS1 or MS2, it is observed that less than 30% of the connection sentences are positioned in the middle (i.e., v ≥ 1). Meanwhile, when the surrogate NRM is MS3, the distribution of insert positions is found to be more uniform, with only 10-20% of the connection sentences being appended to the beginning (i.e., v = 0). These findings indicate that IDEM can place the adversarial text at any position within the target document, making it harder to detect.

Table 5: Distribution of the insert positions (index v) of connection sentences.

| NRM | % v=0 | % v=1 | % v=2 | % v=3 | % v≥4 |
|---|---|---|---|---|---|
| Easy-5 | | | | | |
| MS1 | 83.9 | 12.2 | 2.62 | 0.88 | 0.48 |
| MS2 | 74.1 | 15.0 | 4.77 | 1.56 | 4.59 |
| MS3 | 22.3 | 29.5 | 22.3 | 11.7 | 14.2 |
| Hard-5 | | | | | |
| MS1 | 83.2 | 13.5 | 2.12 | 0.76 | 0.40 |
| MS2 | 76.3 | 14.6 | 4.32 | 1.12 | 3.68 |
| MS3 | 9.76 | 37.7 | 24.8 | 12.9 | 14.9 |

**IDEM based on other GLMs.** In addition to evaluating IDEM using BART, we also examine how well IDEM performs when other GLMs with blank-filling capabilities are instructed to generate connection sentences, including T5-Base (Raffel et al., 2020) and ILM (a fine-tuned version of GPT-2; Donahue et al., 2020). As presented in Table 6, our empirical results show that while the BART-based IDEM is found to be the overall best, all three models achieve high attack success rates, indicating the general applicability of the IDEM approach.

Table 6: Attack results of IDEM N=100 under MS1 with other generative language models (GLMs).

| GLM | ASR | % r≤10 | % r≤50 | Boost | PPL↓ |
|---|---|---|---|---|---|
| Easy-5 | | | | | |
| BART | 99.3 | 77.9 | 97.0 | 67.0 | 36.2 |
| T5 | 98.7 | 75.3 | 95.4 | 65.2 | 38.0 |
| ILM | 93.9 | 45.7 | 81.5 | 50.6 | 41.5 |
| Hard-5 | | | | | |
| BART | 99.5 | 41.4 | 66.9 | 875.6 | 54.6 |
| T5 | 97.8 | 37.1 | 60.1 | 820.9 | 56.8 |
| ILM | 98.3 | 16.0 | 35.6 | 693.2 | 59.8 |

**Online grammar checking.** In order to examine the extra errors introduced by different attack methods, we employ the popular online grammar checker Grammarly² to evaluate the quality of adversarial documents. Specifically, we collect 100 adversarial documents (with the same id) produced by each attack method for 50 Dev queries.

Table 7: Quality of adversarial documents as evaluated by Grammarly.

| Method | #Correctness↓ | #Suggestions↓ | Quality |
|---|---|---|---|
| Original | 1.80 | 4.23 | 59 |
| Query+ | 2.39 | 6.50 | 49 |
| PRADA | 5.51 | 11.0 | 22 |
| Brittle-BERT | 3.74 | 7.12 | 41 |
| PAT | 2.40 | 6.21 | 52 |
| IDEM N=100 | 2.29 | 5.06 | 57 |
| IDEM N=500 | 2.31 | 5.18 | 57 |
We report three evaluation metrics given by Grammarly, including the average number of issues in correctness (e.g., spelling, grammar, and punctuation) and suggestions (e.g., wordy or unclear sentences, etc.) in each adversarial document, and the quality score of all adversarial documents. As shown in Table 7, IDEM introduces the fewest issues and obtains the highest quality score among all attack methods, indicating that the adversarial documents produced by IDEM are more machine-imperceptible.

²https://app.grammarly.com

**Human evaluation.** To further prove that the adversarial documents generated by IDEM are more natural to readers, a human-subject evaluation was conducted to assess the imperceptibility of adversarial text. Specifically, 32 adversarial documents were randomly selected for each attack method, and 32 original unaltered documents were also selected. All these documents were then mixed and randomly divided into two groups, with each group being evaluated by two annotators who are computer science graduate students with the necessary knowledge to understand the nature of the ranking attack. The annotators were tasked with determining whether the document content had been attacked (0) or was normal (1). We averaged all annotations on 32 samples from 2 annotators as the final imperceptibility score, and also computed the Kappa coefficient for the annotation consistency. As can be seen in Table 8, IDEM receives the highest score for human imperceptibility among all attack methods. Additionally, the Kappa values of almost all attack methods are larger than 0.4 (considered as "moderate agreement"), while IDEM N=100 has the smallest Kappa value (i.e., 0.11), which seems reasonable since it is hard to reach an agreement on the more imperceptibly attacked documents.

Table 8: Human evaluation of imperceptibility (average score and inter-annotator Kappa).

| Method | Human-Imperceptibility (Avg.) | Kappa |
|---|---|---|
| Original | 0.69 | 0.44 |
| Query+ | 0.36 | 0.55 |
| PRADA | 0.45 | 0.56 |
| Brittle-BERT | 0.50 | 0.43 |
| PAT | 0.55 | 0.69 |
| IDEM N=100 | 0.78 | 0.11 |
| IDEM N=500 | 0.64 | 0.54 |

**Mitigation by linguistic acceptability.** In an effort to mitigate the attacks, we make additional attempts using a classification model³ trained on the CoLA dataset (Warstadt et al., 2019). The same adversarial documents subjected to online grammar checking are evaluated for linguistic acceptability (LA) using this model. In addition to the LA values, the classification accuracy is reported as an indicator of the mitigation effectiveness. As indicated in Table 9, IDEM's adversarial documents exhibit linguistic acceptability closest to that of the original documents, while Brittle-BERT's adversarial documents are deemed the most unacceptable. Moreover, this LA model can successfully identify over 50% of the adversarial documents from Query+, PRADA and PAT, and even 97% of the adversarial documents from Brittle-BERT. In contrast, only 27-28% of IDEM's adversarial documents are filtered out, implying that more than 70% of IDEM's attacks remain effective against this mitigation.

Table 9: Linguistic acceptability (LA) of adversarial documents and the classification accuracy of the CoLA-based model.

| Method | Linguistic Acceptability | Accuracy |
|---|---|---|
| Original | 0.76 | 0.87 |
| Query+ | 0.50 | 0.52 |
| PRADA | 0.47 | 0.58 |
| Brittle-BERT | 0.19 | 0.97 |
| PAT | 0.42 | 0.70 |
| IDEM N=100 | 0.66 | 0.27 |
| IDEM N=500 | 0.65 | 0.28 |

³https://huggingface.co/textattack/roberta-base-CoLA
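A minimal sketch of such an acceptability filter with the same public CoLA classifier is given below; treating label index 1 as "acceptable" follows the usual CoLA convention and should be checked against model.config.id2label, and the example documents and threshold are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "textattack/roberta-base-CoLA"
tok = AutoTokenizer.from_pretrained(name)
clf = AutoModelForSequenceClassification.from_pretrained(name)
clf.eval()

def acceptability(text: str) -> float:
    """Probability that the text is linguistically acceptable (assumed label 1)."""
    enc = tok(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        probs = torch.softmax(clf(**enc).logits, dim=-1)[0]
    return probs[1].item()

# Documents scoring below a chosen threshold could be filtered out as suspect.
docs = [
    "The new ranking model improves search quality on long queries.",
    "ranking model quality improves model ranking quality search the the",
]
for doc in docs:
    score = acceptability(doc)
    print(f"{score:.3f}  {'keep' if score >= 0.5 else 'filter'}  {doc[:50]}")
```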
Also, the misclassification rate of this LA model on original documents is only 13%. The impact of hyper-parameters. We evaluate IDEM with two hyper-parameters to study how they affect the attack performance: the max length (L) and max number (N) of connection sentences. For L analysis, we maintain N at 100, while for N analysis, we keep L at 12. In Figure 2, the improvement in IDEM's attack performance becomes less significant as L increases from 12 to 24 compared to that observed from 6 to 12, leading us to select L as 12. Similarly, in Figure 3, IDEM's performance improves as N increases, but the time required to generate one adversarial document increases at a faster rate. Hence, we set N to 500 for a balance between attack effectiveness and efficiency. ## 5 Related Work The adversarial IR has been studied for an extended period, such as black-hat SEO, which refers to the intentional manipulation of Web pages with the goal of obtaining an unjustified ranking position, resulting in a decline in the quality of search results and an inundation of irrelevant pages (Gyöngyi and Garcia-Molina, 2005). In this context, mainstream research focuses on studying the adversarial manipulations through various aspects, such as detection (Dalvi et al., 2004; Ntoulas et al., 2006), theoretical and empirical analysis (Raifer et al., 2017), robustness of LTR-based ranking functions (Goren et al., 2018), automatic content modification (Goren et al., 2020), and other research directions (Kurland and Tennenholtz, 2022). Recently, there has been significant progress in NRMs, particularly those leveraging PLMs, which have shown exceptional performance in text ranking (Lin et al., 2021). Concurrently, an increasing number of studies have shed light on the robustness concerns of NRMs in various scenarios, including the presence of query typos or variations (Zhuang and Zuccon, 2021; Penha et al., 2022; Chen et al., 2022), textual noises (Chen et al., 2023), and adversarial attacks (Raval and Verma, 2020; Song et al., 2022). Although current attack methods like PRADA (Wu et al., 2022), Brittle-BERT (Wang et al., 2022), and PAT (Liu et al., 2022) have shown the ability to deceive NRMs successfully, they introduce additional quality issues and often heavily rely on surrogate NRMs for document manipulation. Instead, our proposed IDEM effectively overcomes the limitations of these existing attack methods and showcases remarkable attack performance. ## 6 Conclusion In this study, we introduce a document manipulation framework named IDEM, which is engineered to produce adversarial documents that are not easily detected by both humans and algorithms. Our experiments on the MS MARCO dataset show that IDEM can not only achieve a high level of attack performance, but also generate correct and fluent adversarial documents as evaluated by both automatic and human assessments. ## Limitations In our experiments, as NRMs with cross-encoder are widely used, we focus on evaluating the textual adversarial robustness during the re-ranking stage and do not currently take into account the effect on the retrieval stage. But actually, in a "first retrieval then re-ranking" ranking paradigm, the attack is effective only when the adversarial documents are passed into the top retrieval results. Meanwhile, dense retrieval (DR) models have been widely studied, and they may also inherit adversarial vulnerabilities due to the basics of PLMs. 
Besides, due to limitations in our computing resources, we only tested adding adversarial text to relatively short documents (i.e., passage-level), but the document content in real-world applications could be much longer. Therefore, further comprehensive investigations on examining the NRMs with different architectures, the effects of attacks on the retrieval models, and the manipulations on longer documents are left for future work. Finally, it is important to note that mitigation and defense methods against adversarial ranking attacks are currently understudied, making it a significant area for future research. ## Ethics Statement In this paper, we investigate the potential vulnerability concerns of the neural information retrieval (IR) systems and propose a document manipulation framework that generates adversarial documents that are not easily detected by both humans and IR systems. We hope that this study could inspire further exploration and design of adversarial ranking defense/detection methods and aid in the development of robust real-world search engines. ## Acknowledgements This work is supported by the National Natural Science Foundation of China under Grants no. U1936207 and 62272439. ## References Carlos Castillo and Brian D. Davison. 2010. Adversarial web search. *Found. Trends Inf. Retr.*, 4(5):377– 486. Xuanang Chen, Ben He, Kai Hui, Le Sun, and Yingfei Sun. 2023. Dealing with textual noise for robust and effective BERT re-ranking. *Inf. Process. Manag.*, 60(1):103135. Xuanang Chen, Jian Luo, Ben He, Le Sun, and Yingfei Sun. 2022. Towards robust dense retrieval via local ranking alignment. In *Proceedings of the Thirty-First* International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 1980–1986. ijcai.org. Nilesh N. Dalvi, Pedro M. Domingos, Mausam, Sumit K. Sanghai, and Deepak Verma. 2004. Adversarial classification. In *Proceedings of the Tenth* ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, Washington, USA, August 22-25, 2004, pages 99–108. ACM. Brian D. Davison, Marc Najork, and Tim Converse. 2006. Adversarial information retrieval on the web (airweb 2006). *SIGIR Forum*, 40(2):27–30. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Chris Donahue, Mina Lee, and Percy Liang. 2020. Enabling language models to fill in the blanks. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 2492– 2501, Online. Association for Computational Linguistics. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In *Proceedings of the 56th* Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31–36, Melbourne, Australia. Association for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics. Saad Farooq. 2019. 
A survey on adversarial information retrieval on the web. *CoRR*, abs/1911.11060. Gregory Goren, Oren Kurland, Moshe Tennenholtz, and Fiana Raiber. 2018. Ranking robustness under adversarial document manipulations. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, Ann Arbor, MI, USA, July 08-12, 2018, pages 395– 404. ACM. Gregory Goren, Oren Kurland, Moshe Tennenholtz, and Fiana Raiber. 2020. Ranking-incentivized quality preserving content modification. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 259–268. ACM. Zoltán Gyöngyi and Hector Garcia-Molina. 2005. Web spam taxonomy. In *AIRWeb 2005, First International Workshop on Adversarial Information Retrieval on the Web, co-located with the WWW conference, Chiba, Japan, May 2005*, pages 39–47. Sebastian Hofstätter, Aldo Lipani, Sophia Althammer, Markus Zlabinger, and Allan Hanbury. 2021. Mitigating the position bias of transformer models in passage re-ranking. In Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part I, volume 12656 of Lecture Notes in Computer Science, pages 238–253. Springer. Zhiying Jiang, Raphael Tang, Ji Xin, and Jimmy Lin. 2021. How does BERT rerank passages? an attribution analysis with information bottlenecks. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 496–509, Punta Cana, Dominican Republic. Association for Computational Linguistics. Katharina Kann, Sascha Rothe, and Katja Filippova. 2018. Sentence-level fluency evaluation: References help, but can be spared! In *Proceedings of the* 22nd Conference on Computational Natural Language Learning, pages 313–323, Brussels, Belgium. Association for Computational Linguistics. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 39–48. ACM. Oren Kurland and Moshe Tennenholtz. 2022. Competitive search. In *SIGIR '22: The 45th International* ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, July 11 - 15, 2022, pages 2838–2849. ACM. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. *Trans. Assoc. Comput. Linguistics*, 7:452– 466. Yibin Lei, Yu Cao, Dianqi Li, Tianyi Zhou, Meng Fang, and Mykola Pechenizkiy. 2022. Phrase-level textual adversarial attack with label preservation. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1095–1112, Seattle, United States. 
Association for Computational Linguistics. Alex Goh Kwang Leng, Ravi Kumar Patchmuthu, Ashutosh Kumar Singh, and Anand Mohan. 2012. Link-based spam algorithms in adversarial information retrieval. *Cybern. Syst.*, 43(6):459–475. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021. Pretrained Transformers for Text Ranking: BERT and Beyond. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers. Jiawei Liu, Yangyang Kang, Di Tang, Kaisong Song, Changlong Sun, Xiaofeng Wang, Wei Lu, and Xiaozhong Liu. 2022. Order-disorder: Imitation adversarial attacks for black-box neural ranking models. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, CCS 2022, Los Angeles, CA, USA, November 7-11, 2022, pages 2025–2039. ACM. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. CoRR, abs/2107.13586. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016, volume 1773 of *CEUR* Workshop Proceedings. CEUR-WS.org. Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 708–718, Online. Association for Computational Linguistics. Rodrigo Frassetto Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. *CoRR*, abs/1901.04085. Alexandros Ntoulas, Marc Najork, Mark S. Manasse, and Dennis Fetterly. 2006. Detecting spam web pages through content analysis. In Proceedings of the 15th international conference on World Wide Web, WWW 2006, Edinburgh, Scotland, UK, May 23-26, 2006, pages 83–92. ACM. Gustavo Penha, Arthur Câmara, and Claudia Hauff. 2022. Evaluating the robustness of retrieval pipelines with query variation generators. In *Advances in Information Retrieval - 44th European Conference on* IR Research, ECIR 2022, Stavanger, Norway, April 10-14, 2022, Proceedings, Part I, volume 13185 of Lecture Notes in Computer Science, pages 397–412. Springer. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. 
Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Nimrod Raifer, Fiana Raiber, Moshe Tennenholtz, and Oren Kurland. 2017. Information retrieval meets game theory: The ranking competition between documents? authors. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, August 7-11, 2017, pages 465–474. ACM. Nisarg Raval and Manisha Verma. 2020. One word at a time: adversarial attacks on retrieval models. *CoRR*, abs/2008.02197. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Stephen E. Robertson, Steve Walker, Micheline Hancock-Beaulieu, Mike Gatford, and A. Payne. 1995. Okapi at TREC-4. In *Proceedings of* The Fourth Text REtrieval Conference, TREC 1995, Gaithersburg, Maryland, USA, November 1-3, 1995, volume 500-236 of *NIST Special Publication*. National Institute of Standards and Technology (NIST). Congzheng Song, Alexander Rush, and Vitaly Shmatikov. 2020. Adversarial semantic collisions. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 4198–4210, Online. Association for Computational Linguistics. Junshuai Song, Jiangshan Zhang, Jifeng Zhu, Mengyun Tang, and Yong Yang. 2022. TRAttack: Text rewriting attack against text retrieval. In *Proceedings of* the 7th Workshop on Representation Learning for NLP, pages 191–203, Dublin, Ireland. Association for Computational Linguistics. Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Yumeng Wang, Lijun Lyu, and Avishek Anand. 2022. BERT rankers are brittle: A study using adversarial document perturbations. In *ICTIR '22: The 2022* ACM SIGIR International Conference on the Theory of Information Retrieval, Madrid, Spain, July 11 - 12, 2022, pages 115–120. ACM. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Trans. Assoc. Comput. Linguistics, 7:625–641. William Webber, Alistair Moffat, and Justin Zobel. 2010. A similarity measure for indefinite rankings. ACM Trans. Inf. Syst., 28(4):20:1–20:38. Chen Wu, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, and Xueqi Cheng. 2022. PRADA: practical black-box adversarial attacks against neural ranking models. *CoRR*, abs/2204.01321. Jincheng Xu and Qingfeng Du. 2020. Texttricker: Loss-based and gradient-based adversarial attacks on text classification models. *Eng. Appl. Artif. Intell.*, 92:103641. Shengyao Zhuang and Guido Zuccon. 2021. Dealing with typos for BERT-based passage retrieval and ranking. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 2836–2842, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Lixin Zou, Shengqiang Zhang, Hengyi Cai, Dehong Ma, Suqi Cheng, Shuaiqiang Wang, Daiting Shi, Zhicong Cheng, and Dawei Yin. 2021. 
Pre-trained language model based ranking in baidu search. In *KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, Singapore, August 14-18, 2021*, pages 4014–4022. ACM.

## A Appendix

## A.1 Details of Surrogate NRMs

As mentioned in Section 4.1, we conduct black-box attacks against the victim neural ranking model (NRM) MV using three types of surrogate NRMs. Since the training data of the victim NRM is typically unavailable, we use Eval queries from the MS MARCO dataset, rather than train queries, to construct the surrogate training data TS. As we move from MS1 to MS2, the number of in-domain Eval queries used for ranking imitation decreases from 6,837 to 200, which greatly reduces how frequently the victim NRM needs to be queried. For each Eval query q_i, we collect all 406 document pairs (d_{i,j}, d_{i,k}) (1 ≤ j < k ≤ 29) from the top-29 of the re-ranking list produced by the victim NRM over the top-1K BM25 candidates, and use them to construct the surrogate training data TS as described in Section 2. As a result, the in-domain surrogate training data contains 2.77 million training triples for MS1 and 81.2 thousand for MS2. Additionally, MS3 corresponds to the scenario where no in-domain data from the victim NRM is available; here we collect 3.72 million training triples from the NQ dataset as out-of-domain (OOD) surrogate training data. The surrogate NRMs are based on the pre-trained BERT-Base model and fine-tuned for two epochs with a learning rate of 3e-6 and a batch size of 16 for MS1 and MS2, and for one epoch for MS3 with the same learning rate and batch size. Note that the goal of this work is not to develop a new training method for the surrogate NRMs; we therefore directly adopt the hinge loss with a margin of 1 for ranking imitation, following previous work (Wu et al., 2022).

## A.2 Details of Attack Baselines

In our adversarial attack experiments, we examine the following baseline methods:

Query+ (Liu et al., 2022) is an intuitive baseline that directly appends the query text to the beginning of the target document. Although the query text could be placed at any position in the target document, or even positioned by our proposed position-aware mechanism, appending it to the beginning usually produces stronger attack results because of the positional bias of Transformer-based NRMs (Jiang et al., 2021; Hofstätter et al., 2021). Thus, Query+ serves as a baseline that does not take the invisibility aspect of the attack into account.

PRADA (Wu et al., 2022) is a Word Substitution Ranking Attack (WSRA) method: it first finds important words (i.e., sub-word tokens) in the target document according to the gradient magnitude (Xu and Du, 2020), and then greedily replaces them with synonyms found in a perturbed word embedding space via PGD (Madry et al., 2018). Based on our observations, when attacking different random target samples, PRADA attains attack performance (ASR) close to the results reported in its original publication. This is particularly true when the victim NRM has relatively poor ranking performance, simply because a weaker victim NRM is easier to attack. Despite this variability, the overall conclusions drawn from our experiments remain unchanged.
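Before turning to the remaining baselines, the sketch below illustrates the surrogate imitation training described in A.1. It is an illustrative reconstruction rather than the authors' code: the Hugging Face/PyTorch setup, the helper names (`build_triples`, `training_step`), and the convention that the document ranked higher by the victim NRM serves as the positive of each pair are all assumptions consistent with, but not specified by, the text above.

```python
# Sketch of surrogate ranking imitation with a margin-1 hinge loss (assumptions noted above).
from itertools import combinations

import torch
from torch import nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# One-class prediction layer: a single relevance score per query-document pair.
surrogate = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
hinge = nn.MarginRankingLoss(margin=1.0)          # margin of 1, as adopted for ranking imitation
optimizer = torch.optim.AdamW(surrogate.parameters(), lr=3e-6)

def build_triples(query, victim_top29):
    """All C(29, 2) = 406 pairs from the victim's top-29 re-ranking; the document the
    victim ranks higher (smaller index) is treated as the positive of the pair."""
    return [(query, victim_top29[j], victim_top29[k]) for j, k in combinations(range(29), 2)]

def training_step(batch):
    """batch: list of (query, higher_ranked_doc, lower_ranked_doc) triples."""
    queries, pos_docs, neg_docs = zip(*batch)

    def score(docs):
        enc = tokenizer(list(queries), list(docs), truncation=True,
                        padding=True, return_tensors="pt")
        return surrogate(**enc).logits.squeeze(-1)

    s_pos, s_neg = score(pos_docs), score(neg_docs)
    # Push the surrogate to reproduce the victim's preference by at least the margin.
    loss = hinge(s_pos, s_neg, torch.ones_like(s_pos))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Under this objective the surrogate only needs the victim's relative ordering of the top-29 candidates, never its raw scores, which matches the black-box setting assumed here.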
Brittle-BERT (Wang et al., 2022) studies both local (i.e., a particular query-document instance) and global (i.e., an entire workload of queries) ranking attacks, causing a large rank demotion or promotion by adding or replacing a small number of tokens. In our work, we only adopt the local setting and add tokens to the beginning of the target document, as this usually produces better attack results. Specifically, Brittle-BERT first initializes a few placeholder tokens at the beginning of the target document and then employs the HotFlip algorithm (Ebrahimi et al., 2018) to update them to be more adversarial.

PAT (Liu et al., 2022) generates and adds several trigger tokens at the beginning of the target document. In addition to the ranking-incentivized objective, the search objective of PAT is equipped with semantic and fluency constraints using the pre-trained BERT model. The surrogate NRMs trained with the hinge loss have a one-class prediction layer, whereas PAT needs a surrogate NRM with a two-class prediction layer, namely the 'Pairwise BERT' described in PAT (Liu et al., 2022); we therefore use the same surrogate training data to obtain 'Pairwise BERT' with the default imitation loss of PAT.

To evaluate these baselines, we use their publicly available implementations and ensure that all settings are consistent with those described in their respective official publications.

## A.3 Time Cost of Attack

In PRADA, Brittle-BERT, and PAT, the replacement, selection, and search of tokens are carried out one by one using the surrogate NRM to produce an adversarial document, so these methods require a large amount of time to complete the attack process. In our proposed IDEM framework, by contrast, the adversarial text is first generated by a GLM (e.g., BART) and then combined at the sentence level, which is more efficient than operating at the token level.

![13_image_0.png](13_image_0.png)

![13_image_1.png](13_image_1.png)

As seen in Figure 4, we summarize the total time cost of different attack methods when producing 5,000 adversarial documents using one Titan RTX GPU. When at most 100 candidate connection sentences are generated for the position-wise combination, IDEM (N=100) takes only 11.3 hours while achieving strong attack performance. By comparison, PRADA and PAT consume more time yet still perform worse in attack, and Brittle-BERT incurs a huge time cost (nearly 800 hours) even though its attack performance is considerable. Additionally, compared with Brittle-BERT, IDEM (N=500) produces comparable attack performance at a much lower time cost. Our proposed IDEM method therefore places equal emphasis on attack efficiency and attack performance.

## A.4 The Impact of α in the Merging Score

In our IDEM framework, after generating a series of connection sentences between the query and the target document, a position-aware merging mechanism is employed to decide the final adversarial document, wherein a coherence score and a relevance score are combined using the weight α, as in Eq. 5. As shown in Figure 5, α in a wide range (from 0 to 0.95) does not affect the attack success rate (ASR) much on either the Easy-5 or Hard-5 target document sets, while the % r ≤ 10 metric starts to decrease at α = 0.5 and α = 0.1 on the Easy-5 and Hard-5 sets, respectively. As for the perplexity (PPL) metric (smaller is better), when α increases (placing more weight on coherence), the PPL value does not change much until α reaches about 0.9 to 1.
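As a rough illustration of the position-aware merging just described, the following is a minimal sketch. It assumes Eq. 5 takes the common convex-combination form weighted by α, and the `coherence_score` and `relevance_score` callables are placeholders for the actual scoring models used in IDEM, not their implementations.

```python
# Illustrative position-aware merging: try each candidate connection sentence at each
# insertion position and keep the highest-scoring combination. The convex combination
# below is an assumed form of Eq. 5.

def merge_adversarial_document(query, doc_sentences, candidate_sentences,
                               coherence_score, relevance_score, alpha=0.5):
    best_score, best_doc = float("-inf"), None
    for cand in candidate_sentences:                 # e.g., N=100 or N=500 GLM-generated sentences
        for pos in range(len(doc_sentences) + 1):    # every sentence boundary of the target document
            merged = doc_sentences[:pos] + [cand] + doc_sentences[pos:]
            text = " ".join(merged)
            score = alpha * coherence_score(text) + (1.0 - alpha) * relevance_score(query, text)
            if score > best_score:
                best_score, best_doc = score, text
    return best_doc
```

Under this reading, α = 1 corresponds to selecting purely by coherence, which is consistent with the low perplexities reported next.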
Meanwhile, it can produce adversarial documents with lower PPL than original ones, e.g., when α is 1, the average PPL points on Easy-5 and Hard-5 target document sets are only 31.8 and 43.6, respectively. ## A.5 Adversarial Examples To better understand the workings of various attack methods, we show adversarial examples produced by them under three types of surrogate NRMs in Table 10. We can observe that Query+, and BrittleBERT, PAT, IDEM under MS1 promote the target document ranked at 88th into top-10. However, adding query text (i.e., Query+) and unnatural token sequence (i.e., Brittle-BERT and PAT) at the beginning make adversarial documents distinguishable, while the inserted adversarial text by IDEM is more semantically consistent with the original surrounding content. When the surrogate NRM degrades from MS1 to MS2 as not enough in-domain training samples are available, we can see that the ranking of the adversarial document by PRADA decreases from 27th to 65th, and the ranking of the adversarial document by Brittle-BERT also decreases from 2nd to 14th. Furthermore, when an OOD surrogate NRM (i.e., MS3 ) is used due to the forbidden access to the victim NRM, we can find out that the attack effects of PRADA, BrittleBERT and PAT are greatly suppressed. For example, although PAT under MS2 promotes the target document to the ranking of 2nd, PAT under MS3 even demotes the ranking of the target document from 88th to 107th. In contrast, under both MS2 and MS3 , IDEM robustly promotes the target document into top-10 (i.e., 9th), and the fluency and correctness of adversarial documents are still within an acceptable range. From these adversarial cases, it is evident that IDEM is less dependent on the surrogate NRM and can perform attacks more robustly than previous attack methods, indicating a flexible use condition of IDEM in real-world situations. | Method | Original or Adversarial Text | Rank↓ | PPL↓ | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------|---------|--------| | Carry your baby in a sling or front carrier. Feeling your baby's warmth, smelling | 88 | 33.9 | | | his sweet scent, and looking down often to make eye contact with him can help | | | | | Original | you bond. Spend plenty of close-up face time with your baby. Smile at him, and return the smile when he smiles first. | | | | Query+ | what age can you wear baby on back in a carrier? Carry your baby in a sling or fr- | 2 | 40.4 | | ont carrier. Feeling your baby's warmth, smelling his sweet scent, and looking ... Surrogate NRM MS1 wear your baby in a sling ord front carrier. Feeling your baby's warm, smelling his sweet scent, and looking down often to make eye contact with him can help you b- | 27 | 100.8 | | | PRADA | ondage. Spend plenty of close-up face time with your baby. Smile at him, and ... | | | | Brittle-BERT | pendingerabidheartedivating aged aged 292,oning worn wear Carry your baby in a | 2 | 125.0 | | sling or front carrier. Feeling your baby's warmth, smelling his sweet scent, and ... | | | | | PAT | about 30 year old babies can carry baby carriers back to age Carry your baby in a | 2 | 52.0 | | sling or front carrier. 
Feeling your baby's warmth, smelling his sweet scent, and ... | | | | | IDEMN=100 | Carry your baby in a sling or front carrier. A child of any age can wear shoes with | 9 | 33.7 | | a sling. Feeling your baby's warmth, smelling his sweet scent, and looking ... | | | | | IDEMN=500 | Carry your baby in a sling or front carrier. Most parents wear infant carriers arou- | 1 | 36.5 | | nd age 3, 4, and 5. Feeling your baby's warmth, smelling his sweet scent, and ... Surrogate NRM MS2 | | | | | PRADA | Carry your baby in a slingshot ord front carrier. Feeling your baby's warm, smell- | 65 | 82.2 | | ing his sweet scent, and looking down often to make eye contact with him can ... | | | | | Brittle-BERT | modernism age hms× chestnut beyonce rappers commercially whilst wearing md r- | 14 | 175.2 | | espectively Carry your baby in a sling or front carrier. Feeling your baby's ... | | | | | PAT | be a year old and can wear around Carry your baby in a sling or front carrier. Feel- | 2 | 48.3 | | ing your baby's warmth, smelling his sweet scent, and looking down often to ... | | | | | IDEMN=100 | Carry your baby in a sling or front carrier. Wearing your baby on back is always a | 9 | 29.4 | | good idea. Feeling your baby's warmth, smelling his sweet scent, and looking ... | | | | | IDEMN=500 | Carry your baby in a sling or front carrier. You can and should be wearing baby on | 9 | 36.9 | | back in a carrier. Feeling your baby's warmth, smelling his sweet scent, and ... Surrogate NRM MS3 | | | | | PRADA | wear your baby in a slingshot ord front carrier. Feeling your baby's warmth, smell- | 28 | 67.2 | | ing his sweet scent, and looking down often to make eye contact with him can ... | | | | | Brittle-BERT | offspring coherent examples declined toys widespread adulthood noun whether bu- | 58 | 143.8 | | ckled off wear Carry your baby in a sling or front carrier. Feeling your baby's ... | | | | | PAT | for example, may carry twenty cents Carry your baby in a sling or front carrier. Fe- | 107 | 49.4 | | eling your baby's warmth, smelling his sweet scent, and looking down often to ... | | | | | IDEMN=100 | A child of any age can wear shoes with a sling. Carry your baby in a sling or front | 9 | 39.1 | | carrier. Feeling your baby's warmth, smelling his sweet scent, and looking down ... | | | | | IDEMN=500 | Carry your baby in a sling or front carrier. You can and should be wearing baby on | 9 | 36.9 | | back in a carrier. Feeling your baby's warmth, smelling his sweet scent, and ... | | | | | Table 10: Adversarial documents generated by various attack methods under three kinds of surrogate NRMs on the | | | | Table 10: Adversarial documents generated by various attack methods under three kinds of surrogate NRMs on the same related but irreverent document for the query "what age can you wear baby on back in a carrier?" from the MS MARCO Dev set. The inserted and perturbed words are marked as Red for easy comparisons. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section: Limitations ✓ A2. Did you discuss any potential risks of your work? Section: Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1: Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1: Experimental Setup ✓ B1. Did you cite the creators of artifacts you used? Section 4.1: Experimental Setup ✗ B2. 
Did you discuss the license or terms for use and / or distribution of any artifacts? The MS MARCO dataset utilized in our research is freely accessible to the public, without any accompanying licenses or intellectual property restrictions. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We employ the MS MARCO dataset exclusively for non-commercial research endeavors, adhering to their intended purpose. Similarly, the data generated within our work is strictly intended for research purposes, aiming to foster progress in the field of information retrieval and related domains. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? In light of the widespread usage of the MS MARCO dataset in information retrieval tasks, we did not specifically examine the textual data contained within it for potential privacy or offensive information. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? The MS MARCO dataset utilized in our research is openly accessible, and comprehensive information about it can be found on its official page. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.1: Experimental Setup; Appendix A.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Section 4: Experiments ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A.3 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.1: Experimental Setup ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? The outcomes presented in our work are based on a single execution due to constraints in our computing resources. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4: Experiments; Appendix ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 4.3: Analysis D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 
Not applicable. Our human annotation process does not involve such a question. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 4.3: Analysis ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 4.3: Analysis D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Our human annotation process does not involve such a question. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Our human annotation process does not involve such a question.
zhang-etal-2023-ask
Ask an Expert: Leveraging Language Models to Improve Strategic Reasoning in Goal-Oriented Dialogue Models
https://aclanthology.org/2023.findings-acl.417
Existing dialogue models may encounter scenarios which are not well-represented in the training data, and as a result generate responses that are unnatural, inappropriate, or unhelpful. We propose the "Ask an Expert" framework in which the model is trained with access to an "expert" which it can consult at each turn. Advice is solicited via a structured dialogue with the expert, and the model is optimized to selectively utilize (or ignore) it given the context and dialogue history. In this work the expert takes the form of an LLM. We evaluate this framework in a mental health support domain, where the structure of the expert conversation is outlined by pre-specified prompts which reflect a reasoning strategy taught to practitioners in the field. Blenderbot models utilizing "Ask an Expert" show quality improvements across all expert sizes, including those with fewer parameters than the dialogue model itself. Our best model provides a ~10% improvement over baselines, approaching human-level scores on "engagingness" and "helpfulness" metrics.
# Ask An Expert: Leveraging Language Models To Improve Strategic Reasoning In Goal-Oriented Dialogue Models Qiang Zhang, Jason Naradowsky, Yusuke Miyao Department of Computer Science The University of Tokyo {qiangzhang714, narad, yusuke}@is.s.u-tokyo.ac.jp ## Abstract Existing dialogue models may encounter scenarios which are not well-represented in the training data, and as a result generate responses that are unnatural, inappropriate, or unhelpful. We propose the "Ask an Expert" framework in which the model is trained with access to an "expert" which it can consult at each turn. Advice is solicited via a structured dialogue with the expert, and the model is optimized to selectively utilize (or ignore) it given the context and dialogue history. In this work the expert takes the form of an LLM. We evaluate this framework in a mental health support domain, where the structure of the expert conversation is outlined by pre-specified prompts which reflect a reasoning strategy taught to practitioners in the field. Blenderbot models utilizing "Ask an Expert" show quality improvements across all expert sizes, including those with fewer parameters than the dialogue model itself. Our best model provides a ∼ 10% improvement over baselines, approaching human-level scores on "engingingness" and "helpfulness" metrics. ## 1 Introduction Dialogue systems based on pre-trained language models (PLMs) can be easily tailored via finetuning to exhibit particular characteristics, such as empathy (Roller et al., 2021) and emotion (Adiwardana et al., 2020). However, it has been previously observed that such models tend to produce vacuous "fallback" responses when presented with unfamiliar situations (e.g., extraneous (Li et al., 2016; Adiwardana et al., 2020)). For instance, we observe that fine-tuned BlenderBot (Roller et al., 2021) models have a propensity to use the response, "*Do you have any hobbies?*" as a substitute for furthering the conversation in helpful ways when the situation becomes too complicated. For goaldirected dialogues, where the discourse should consistently move towards a desired resolution or effect (Ham et al., 2020), frequent reliance on such ![0_image_0.png](0_image_0.png) Figure 1: The proposed method of consulting the expert, where the dialogue model interactively obtains advice from the LLM via prompting (e.g. GPT3). Without the aid of expert knowledge and reasoning, dialogue models are less able to generate useful and engaging responses. fallback responses may result in them performing poorly. We hypothesize that the use of fallback responses may stem from the model being unable to formulate a more suitable reply in the absence of appropriate knowledge of the situation. In this study, we propose a framework called "Ask an Expert" to enhance dialogue responses through on-the-fly knowledge acquisition. Our approach involves integrating dialogue models with an external "expert" by the following tenets: (a) the expert is a large language model (LLM) which is available both during training and inference, (b) the act of soliciting information from the expert itself takes the form of a dialogue, which can span multiple turns in order to identify relevant information and strategies, and (c) the knowledge is integrated into the dialogue model via the context. Recently many efforts have sought to utilize text as an API to chain together multiple models to perform complex tasks (Shen et al., 2023; Chase, 2022). 
Our approach differs in that the model interaction takes place within the optimization loop, and thus allows the dialogue model to learn to selectively choose which advice to incorporate, and when use it. We apply "Ask an Expert" to the domain of mental health support (MHS) systems. MHS is notable in being one of many domains in which practitioners are formally trained to follow specific discourse strategies (Pudlinski, 2005). We incorporate an MHS strategy into the model via a series of handcrafted prompts, which are designed to shape the expert conversation to reflect the inner monologue of a human expert (Figure 1). The resulting conversation is then provided in a structured way as conditioning context to the dialogue model. We perform human evaluations on the models following the method of ACUTE-Eval (Li et al., 2019) to assess the system on six dimensions, including the ability to both have general conversations and provide helpful suggestions. We find models with reasoning processes significantly outperform the baseline model (without reasoning) in providing constructive suggestions and sharing similar experiences while remaining engaging and empathetic. Contributions of this work are as follows: - We propose a novel way of formulating knowledge acquisition in dialogue models via a chatbased interaction with a LLM expert, both during training and inference. - We explore several design decisions for structuring the expert reasoning process, and evaluate the effect of different prompts and formats, - We demonstrate that our approach results in dialogues that are deemed more engaging and helpful as evaluated by human judges. - We study the effect of different experts on dialogue quality and present ablation experiments on expert model size. ## 2 Related Work Incorporating Knowledge in Dialogue Models Various approaches have been proposed to incorporate external knowledge into dialogue models. Within the scope of deep learning-based models, information may be retrieved from a knowledge base using key-value lookups (Eric et al., 2017) or as relation tuples (Young et al., 2018), or as encoded vectors from knowledge bases (Madotto et al., 2018). Similar to our work, on-the-fly acquisition of knowledge is possible using the internet as an expert, and integrating search results into the model (Wu et al., 2020; Komeili et al., 2022). In addition to relying on external knowledge sources, dialogue models can incorporate knowledge sources, such as pre-trained language models, directly into the decoding process to produce responses grounded in knowledge. (Roller et al., 2021; Xu et al., 2022; Shuster et al., 2022). Our approach instead leverage advances in promptbased text generation and the increasing capacity of LLMs to serve as knowledge bases in order to acquire knowledge as a set of dialogue responses. LLMs as Source of Expert Knowledge Large language models (LLMs) exhibit a remarkable capacity to extract and retain knowledge embedded in the training data. Prior studies have demonstrated their ability to extract different forms of general knowledge, including factual knowledge (Petroni et al., 2019) and commonsense knowledge (Sap et al., 2020), without requiring fine-tuning. Furthermore, LLMs can effectively store and retrieve domain-specific knowledge, such as physical knowledge (Bisk et al., 2020) and biomedical knowledge (Yuan et al., 2021b), through knowledge distillation training (Qin et al., 2022). 
Prominent models like ChatGPT 1and Bard 2 demonstrate impressive proficiency across various natural language processing (NLP) tasks and find practical applications in diverse domains, such as healthcare (Biswas, 2023) and finance (Zaremba and Demir, 2023). These models not only possess extensive knowledge access but also effectively express this knowledge in natural language, benefiting from instruct-tuning technology (Ouyang et al., 2022) and reinforcement learning from human feedback (RLHF) (Christiano et al., 2017). LLMs for Data Generation and Augmentation LLMs can be used to generate additional examples to augment datasets across various NLP tasks and domains, such as text classification task (Wang et al., 2021), textual similarity task (Schick and Schütze, 2021b), and knowledge distillation task (West et al., 2022). Unlike previous works, we focus on the data augmentation task for a dialogue dataset in the domain of mental peer support, ESConv (Liu et al., 2021) with additional annotations that come in the form of reasoning support (emotion identification, cause, solution). Chatbots for Mental Health Given the complexity of providing mental support, rule-based approaches are commonly employed to ensure the generated text adheres to the common behavior of practitioners in the domain. For MHS, these guiding rules and principles are agreed upon and proposed by human experts, such as PTSD Checklist (DeVault et al., 2013), Cognitive Behavioural Therapy (CBT) (Fitzpatrick et al., 2017), Solutionfocused Brief Therapy (SFBT) (Fulmer et al., 2018) and mindfulness (Lee et al., 2019). However, such an approach requires significant efforts to be spent on designing rules and can not handle nonpredefined situations. Our approach differs in that we reduce the reliance on handcrafting rules by turning to simpler prompt templates, which can then be used together with an LLM to acquire relevant expert knowledge and reasoning for a broad range of different scenarios. An alternative is a data-driven approach, wherein deep learning-based dialogue models (Zhang et al., 2019b; Adiwardana et al., 2020; Roller et al., 2021) are trained or fine-tuned on emotion-related datasets such as DailyDialogue (Li et al., 2017), EmpatheticDialogues (Rashkin et al., 2019), and EDOS (Welivita et al., 2021). Such models are able to produce more empathetic responses, however, possibly due to the lack of explicit strategy, they frequently generate vacuous or unrelated responses. ## 3 Ask An Expert The architecture we propose, Ask an Expert, consists of a dialogue model, and a separate expert model. In this work the expert is a (presumably larger or specialized) LLM. The key distinction between ours and other work which uses additional knowledge acquisition in dialogue systems is that ours takes the form of another dialogue, in which we utilize prompts to guide the expert towards providing important reasoning to guide the dialogue system's response. The dialogue model is trained to optimize dialogue quality while working together ![2_image_0.png](2_image_0.png) with the expert suggestions, and can therefore learn how best to make use of advice in a context-specific manner. ## 3.1 Knowledge Acquisition Via Dialogue In mental health support (MHS), a seeker (person seeking help) engages in conversation with a supporter (the MHS practitioner) as a way of seeking medical help. Like other medical professionals, guidelines and strategies exist for providing mental health support. 
Following the literature, we identify a three-part strategy which involves: (1) identifying the emotional status of the seeker, (2) identifying the reason for that state if undesirable, and (3) providing suggestions that aim to alleviate the underlying cause of the distress (Pudlinski, 2005; Tietbohl, 2022). By designing prompts to collect this information and provide it to the dialogue model, we aim to improve the model's ability to provide useful support and reduce the extent to which it relies on unhelpful fallback responses. Designing Prompts We compare two different styles of prompts. The first, which we refer to ask question-answering (QA), phrases the prompts in the form of questions (e.g., "*Why does the seeker* feel upset with her mother?"). The second, which we refer to as text-generation (TG) style echos the masked language modeling objective of LLMs and tasks the model to complete a sentence with missing information (e.g., "*The seeker feels upset with* her mother because..."). Results of our initial experiments comparing the two prompt styles can be found in Appendix A. The remainder of the experiments in this paper use TG-style prompts following the previous works as in Schick and Schütze (2021a); Mishra et al. (2022a). The second consideration in prompt design is the available length of the prompt. We evaluate the Ask an Expert architecture on a variety of base LLMs, ranging in size from GPT to GPT3, meaning that the length of prompts that can fit within the contextual window of the LLMs will vary greatly. Hence we designed two different levels of prompt: dialogue-level prompt, in which the instances and context conversation are given as multi-turn dialogue pieces to provide more conversation context, and utterance-level prompt, in which they are reduced to a two-turn dialogue reflecting the current seeker input and the previous supporter's reply. Figure 2 shows examples of these prompt styles. Both types of prompts begin with a guideline to describe the task because providing instructions helps LLMs to interpret the task better (Mishra et al., 2022b). The guideline could also help LLMs to generate the results with the required format as shown in Appendix B. The context conversation is the history of the preceding dialogue. In the utterance-level prompt, several utterances at the beginning of the conversation are trimmed to fit the input length of the LLM. The result of this prompted conversation with the expert is a piece of useful information that a human practitioner may very well consider when shaping their responses to the human seeker. For instance, a generated reasoning process may be as follows: "The seeker feels overwhelmed and stressed. He is worried about his upcoming test. The supporter should mention the idea of a study group or a zoom study group. The supporter could also mention Facetime with friends. " ## 3.2 Data Collection We generate a training set consisting of partial dialogues annotated with the additional reasoning information provided by the expert at each step. The dialogues are obtained from ESConv (Liu et al., 2021), a dataset of mental health support dialogues. ESConv is especially well-suited for our research because crowdsourcing workers are trained to become supporters when collecting the dataset, and the original annotations on emotion, situation, and strategy can be referred to when designing prompts. The Ask an Expert architecture is modular, and many models (or humans) could theoretically take the role of the expert. 
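To ground the prompt design described above, the sketch below shows how a TG-style, utterance-level prompt could be assembled and sent to an expert of any size, echoing the modularity just noted. The guideline wording, the template, and the `llm_complete` callable are hypothetical stand-ins rather than the exact prompts of Appendix B.

```python
# Hypothetical sketch of a TG-style, utterance-level expert prompt; wording is illustrative.

GUIDELINE = ("The following are conversations between a help seeker and a supporter. "
             "Describe the seeker's emotion, the reason behind it, and what the "
             "supporter should suggest next.")

TG_STUB = ("Supporter: {supporter_turn}\n"
           "Seeker: {seeker_turn}\n"
           "The seeker feels")        # text-generation style: the expert completes the sentence

def build_expert_prompt(instances, supporter_turn, seeker_turn):
    """`instances` are a few completed in-context examples, balanced over emotions and problems."""
    shots = "\n\n".join(instances)
    target = TG_STUB.format(supporter_turn=supporter_turn, seeker_turn=seeker_turn)
    return f"{GUIDELINE}\n\n{shots}\n\n{target}"

def consult_expert(llm_complete, instances, supporter_turn, seeker_turn):
    """`llm_complete` is any text-completion callable (a local GPT-1/GPT-2 or an API-served GPT-3)."""
    prompt = build_expert_prompt(instances, supporter_turn, seeker_turn)
    return "The seeker feels " + llm_complete(prompt).strip()
```

Because the expert is reached only through a completion callable, swapping in a smaller or larger LLM does not change the rest of the pipeline.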
In this work we wish to assess the importance of model size on reasoning ability and quality of dialogue, and we use the following LLMs as experts: OpenAI GPT (GPT1) (Radford et al., 2018), GPT2 (Radford et al., 2019), and GPT3 (ada and davinci) (Brown et al., 2020). We balance the data by selecting batches of 8 instances with different combinations of 5 emotion states and 5 problem types (identified from the original annotations in ESConv) with respect to the optimal length of the prompt. In utterance-level prompt situations, the instances are 16 two-turn short conversations. We also empirically adjust the order of instances given the potential influence it could have on the final results (Lu et al., 2022). We preprocess the conversations in the ESConv dataset, in which speakers can make multiple consecutive utterances, into a turn-based dialogue format by grouping consecutive utterances (if a speaker said, "Why?", and then, "Did anything happen?", they would be combined into a single utterance: "Why? Did anything happen?"). The resulting dataset consists of 9k annotated pairs of seeker-supporter utterances, encompassing 1.5k conversations. We partition the data using a ratio of 70%/10%/20% for training, validation, and testing, respectively. ## 4 Training Dialogue Models To evaluate the effect of incorporating our knowledge acquisition procedure into a state-of-the-art dialogue model, we train the following: Vanilla BlenderBot 2.7B (BB) The transformer based baseline BlenderBot model fine-tuned on EmpatheticDialogues, ConvAI, WizardofWiki, and BlendedSkillTalks in a multi-task style. We choose | Expert Model | Similarity Scores | Entailment scores | | | | | |----------------|---------------------|---------------------|-----------|---------|---------|-------| | BLEU-4 | ROUGE-L | BERTScore | BARTScore | RoBERTa | DeBERTa | | | GPT1 | 0.00 | 0.17 | 86.37 | - 5.27 | 0.74 | 0.24 | | GPT2 | 0.06 | 0.24 | 88.14 | - 4.41 | 1.23 | 0.74 | | ada | 0.08 | 0.29 | 89.23 | - 4.04 | 2.81 | 4.06 | | davinci | 0.23 | 0.46 | 92.03 | - 3.06 | 27.40 | 24.44 | Table 1: Results of automatic evaluation on the reasoning processes from different PLMs. | Expert Model | Voting rates | | | | |--------------------|----------------------|-----------------------|-------|-------| | Emotion Prediction | Reason Summarization | Suggestion Generation | Total | | | GPT1 | 32.23 | 27.69 | 21.90 | 27.27 | | GPT2 | 44.63 | 42.15 | 36.36 | 41.05 | | ada | 61.98 | 57.85 | 57.85 | 59.23 | | davinci | 93.39 | 89.26 | 88.17 | 90.22 | this model as the base model because it shows stateof-the-art performance on being empathetic and knowledgable (Smith et al., 2020). ## Blenderbot For Mental Health (Bbmh) A BlenderBot model fine-tuned on the original ESConv dataset, to serve as an in-domain baseline model. BBMH is fine-tuned in a multi-task style on both BlendedSkillTalks and ESConv with equal training weight. This allows BBMH to have a similar conversational ability to BB while having access to mental health-related conversations. ## Blenderbot For Mental Health With Reasoning (BBMHR) This is a model utilizing the Ask an Expert architecture as applied to mental health support systems, fine-tuned on the reasoning processes that are collected through prompting as described in Section 3.1. At training time, seeker utterances and associated reasoning processes that we collected from LLM expert models are concatenated as inputs. 
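A minimal sketch of the two preprocessing steps just described, merging consecutive same-speaker utterances and concatenating the expert's reasoning to the seeker input at training time, might look as follows. The `[REASONING]` delimiter is an illustrative choice; the paper only states that utterance and reasoning are concatenated.

```python
# Minimal sketch of the preprocessing described above; the delimiter format is assumed.

def merge_consecutive_turns(utterances):
    """utterances: list of (speaker, text) pairs; consecutive turns by the same speaker are joined."""
    merged = []
    for speaker, text in utterances:
        if merged and merged[-1][0] == speaker:
            merged[-1] = (speaker, merged[-1][1] + " " + text)
        else:
            merged.append((speaker, text))
    return merged

def build_bbmhr_input(seeker_utterance, reasoning):
    """At training time, the seeker utterance and the expert's reasoning form a single model input."""
    return f"{seeker_utterance} [REASONING] {reasoning}"
```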
At inference time, we modify the ParlAI framework to allow communications between the dialogue model and the LLM experts to get ad-hoc reasoning annotations. Like BBMH, BBMHR is also fine-tuned in a multi-task style on both BlendedSkillTalks and ESConv (with reasoning) for the same purpose. All models are fine-tuned with ParlAI framework (Miller et al., 2017) using BlenderBot-BST 2.7B (Roller et al., 2021) as the initial model 3. Both BBMH and BBMHR are trained on 4 Tesla v100 GPUs for 96 hours. To be noticed, we train multiple BBMHR models with reasoning processes from different LLMs. In the following, BBMHR + *LLMs* denote the dialogue model with reasoning processes from the specific LLM (e.g. BBMHR + GPT1 denotes the BBMHR model with reasoning processes from GPT1). ## 5 Evaluation & Results 5.1 Assessing The Expert Advice The first question we aim to answer is: how good is the mental health support advice provided by the LLM experts? We perform both automatic evaluation and human evaluation to assess the quality of reasoning processes. We randomly select 50 conversations and manually label the conversations (via Mechanical Turk) with reasoning processes. Automatic Evaluation We calculate the similarity and entailment scores between generated reasoning processes and human labels. For similarity, we calculate ROUGE (Lin, 2004), BLEU (Papineni et al., 2002), BERTScore (Zhang et al., 2019a) and BARTScore (Yuan et al., 2021a). Entailment scores are calculated using inferences models, RoBERTa (Zhuang et al., 2021) and DeBERTa (He 3The code and data for this work are available at: https://github.com/QZx7/BBMHReasoning/tree/main | Model | Model Winning Percentages Against Human | | | | | | | |--------------------|-------------------------------------------|---------|-------------|-------------|------------|---------|---------| | Engagingness | Humanness | Empathy | Specificity | Helpfulness | Experience | Total | | | in-context davinci | - 35.87 | - 28.89 | - 24.29 | - 14.33 | - 29.65 | - 24.29 | -47.30 | | BB | - 36.78 | - 22.92 | - 15.67 | - 28.91 | - 30.15 | - 17.64 | - 42.68 | | BBMH | - 26.07 | - 21.60 | - 11.95 | - 10.53 | - 22.90 | - 12.47 | - 30.19 | | BBMHR: GPT1 | - 23.17 | - 9.89 | - 12.51 | -18.48 | - 20.07 | - 10.43 | - 26.20 | | GPT2 | - 24.82 | - 8.15 | - 3.64 | - 14.02 | - 19.65 | - 9.21 | - 22.33 | | ada | - 24.02 | - 7.04 | - 7.16 | - 11.52 | - 15.59 | - 2.48 | - 19.41 | | davinci | - 12.10 | - 1.96 | + 1.26 | - 8.60 | - 7.09 | + 0.91 | - 10.93 | et al., 2020) to score the possibilities of the entailment relationship between generated and manual labels by treating it as a textual inference task. Table 1 shows the results of automatic evaluation on reasoning processes. We can observe clear improvement in both similarity and entailment scores from GPT1 to davinci, where the gap between davinci and other models is especially large. Human Evaluation We perform human evaluation to assess the LLMs' ability to generate each piece of information generated in the reasoning processes generation task. More specifically, we measure the quality of reasoning processes with three sub-tasks: emotional prediction, reason summarization and suggestions generation. Each sub-task is used to assess one piece of information in the reasoning processes. Crowdsourcing workers are then asked to vote for each sub-task by answering questions such as "*Does the annotation contain correct* emotion description of the seeker?" 
We report the voting rates on each sub-task for each expert model used in the prompting phase. A complete list of the questions can be found in Appendix C. Table 2 shows the results of human evaluation with an average inter-rater agreement of 83.7%, and we are able to observe similar results as in automatic evaluation. Davinci outperforms other models on all three sub-tasks, which shows that davinci may have more knowledge of the reasoning processes. Such results hint that the reasoning knowledge by consulting LLMs can provide valid reasoning information to be used for dialogue models, especially those generated by expert models ## With A Larger Size. 5.2 Evaluation On Dialogue Models We perform the human evaluation on the models following the ACUTE-Eval (Li et al., 2019) method, in which conversations generated by two different models are collected, and annotators are asked to make binary judgments between two models. We set up experiments and compare conversations between humans in ESConv to conversations generated by different models. The compared models are divided into three groups: human vs. BB, human vs. BBMH, and human vs. BBMHR. For each group, we perform ACUTE-Eval and calculate the win percentages of the models, where positive numbers represent that models win and negative numbers represent that human wins. As comparison, we also follow the methods in (Zheng et al., 2022) and prompt in-context davinci with the same prompts to generate conversations in the domain of emotional support. Self-Chats We perform self-chats (Jaques et al., 2020; Bao et al., 2019) to collect conversations from models following the experiments in ACUTEEval (Li et al., 2019). Self-chats could reduce the efforts of collecting objective conversations and show high agreements with human-model evaluations (Li et al., 2019). For each model, we collect 100 conversations across 5 known topics in ESConv, 20 for each topic. Initial utterances of the conversations are pre-defined to generate diverse dialogue content for each topic (Bao et al., 2021). The generated conversations are compared against human-human conversations with the same topic ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) ## In Esconv For Evaluation. Questionnaire Annotators are asked to answer 17 questions across 6 dimensions: engagingness, humanness, empathy, specificity, helpfulness, and experience. Engagingness and humanness are used to evaluate the ability to have general and long conversations. Questions for these two dimensions are same as the questions used in (Li et al., 2019). Empathy represents the model's ability to catch the emotional status and feelings of the seekers. Specificity reflects the ability to produce task-specific responses. Helpfulness indicates the feasibility of suggestions given by the models. Experience is used to measure the ability to share relevant and similar experiences based on the seeker's problems. We adapted the evaluation method in O'Leary et al. (2018) and crafted questions for the newly added four dimensions based on the components of the "guided chat tool", which proved to be more effective in terms of problem-solving. A complete list of questions can be found in Appendix D. Results Table 3 shows the results of human evaluation, with an average inter-rater agreement of 80.4%. Both BBMH and BBMHR outperform vanilla BB in terms of all 6 dimensions, owing to the use of additional in-domain data. 
When assessing the effect of the knowledge acquisition procedure, BBMHR outperforms BBMH in most aspects, especially humanness, helpfulness, and experience, which are the primary criteria that we aim to improve as being especially useful to the goaloriented aspects of the dialogue model as a mental health support system. Additionally, we find a strong correlation with the degree of improvement on these metrics and the size of the model. Other attributes , such as specificity, do not appear to benefit strongly from additional reasoning information. Among all BBMHR models, BBMHR + davinci ![7_image_1.png](7_image_1.png) achieves the best performance in almost all aspects which also shows that consulting better reasoning models contributes to better responses. ## 5.3 Crowdsourcing & Filtering Details The workers are required to be fluent in English in both evaluation tasks of the reasoning processes and dialogue models. For reasoning process evaluation, the workers are asked to answer some questions about the content of the conversation to ensure that they clearly understand the context. For each question, they also need to provide justifications for their answer to be valid. For dialogue model evaluation, while answering the binary selective questions, the workers are asked to write down brief justifications from time to time (Q2, Q5, Q8, Q12, Q14, and Q17) to ensure that they are engaging. We perform filtering on the annotations to remove the annotations that are completed in an extremely short time (less than 300 seconds) and with invalid justifications (samples of invalid justifi- ![7_image_0.png](7_image_0.png) cations can be found in Appendix E). The workers are paid an average of 10$ per hour in line with regional guidelines on ethical compensation. ## 6 Sample Conversations & Failure Cases Sample Conversations Figure 3 shows the conversational strategies used by different models when the seeker looks for mental support because of a breakup. BBMHR is able to provide suggestive responses based on strategies provided in the reasoning process. We also find that BBMHR provides more empathetic and engaging responses when initializing the conversation (In Figure 4, BB tends to ask non-engaging questions such as "Do you have any hobbies?"). More samples can be found in Appendix G. Failure Cases Figure 5 shows a failure case where the responses can occasionally be short and not empathetic. All models have a tendency to default to such cases at the opening of conversations, when the conversation history is limited and the expert would have difficulty inferring any additional useful details (similar errors are observed in Ung et al. (2022); Tyen et al. (2022)). Moreover, we observe that the frequency of such failure cases decreases as size of LLM increases, and implies that some of these mistakes may be resolved with better experts. For instance, an expert practitioner in this case may be more pro-active in gathering the necessary details to form an analysis. By interfacing with the expert purely by text prompts, and collecting the expert advice as text (and inserting it into the dialogue model context window), we allow for the opportunity for the expert model to also help the dialogue model take a more active role in progressing the conversation toward the goal when necessary. ## 7 Discussion What are the advantages of utilizing LLMs for strategic reasoning? 
Goal-oriented dialogue systems not based upon LLMs often rely on inferring dialogue states to carry out only meaningful conversations, and thus significantly rely on the definition of the task and an ontology of possible dialogue trajectories (Xie et al., 2022). This makes the systems brittle and open to catastrophic errors when the dialogue breaks significantly from the categories of the ontology. LLMs show similar ontological knowledge and planning ability in many domains, but are more flexible. As language models, interfacing with LLM experts is as straightforward as establishing a short goal-oriented conversation, and incorporating their responses into the dialogue model via the model's context is similarly easy. In that sense, utilizing LLMs greatly reduces the efforts defining a complicated ontology and dialogue state tracking module by providing necessary reasoning power and knowledge. ## Why Not Use Gpt-3 Directly For Dialogue Generation? Is The Dialogue Model Still Necessary when there is an expert model? Our results (Table 3) show that utilizing LLMs as dialogue models directly can lead to worse performance than even baseline dialogue models such as Blenderbot. We find that in-context davinci performs worse than BB both in terms of generating human-like and empathetic dialogues. One alternative is to finetune LLMs specifically for dialogue generation, but this process often requires expensive hardware, time, and training data (Shuster et al., 2022). It is unclear whether fine-tuning even larger models would uncover the heuristic strategies inherent in goal-oriented conversations, which can be easily specified via prompts using an "Ask an Expert" architecture. Deploying Ask an Expert? A natural restriction in the Ask an Expert is that it requires the expert to be present at inference time and during deployment. If a motivation of Ask an Expert is to allow dialogue models to be deployed on simpler hardware, having a large expert model limits its usefulness in such situations. However, recent advancements in technology, such as ChatGPT and Bard, offer API services that facilitate convenient access to expert knowledge. Furthermore, software tools like LangChain efficiently manage prompts, computations, and knowledge, presenting an alternative to local deployment of extensive expert models. Another scenario that imposes limitations on the adoption of Ask an Expert pertains to certain domains where the system must be deployed locally to uphold privacy concerns, such as mental health systems aiming to safeguard patient data. In such instances, relying on external API services becomes less feasible. However, it is not always necessary to utilize all the knowledge of large expert models. And for specific domain use cases, such as mental health, it is unlikely that the full size of the model is indispensable. Given the effectiveness of our approach, in future work we would like to explore the extent to which the expert model can be distilled (Sanh et al., 2019; Schick and Schütze, 2021c) into models which are able to run locally on consumer-grade hardware. ## 8 Conclusion In this work we propose the "Ask an Expert" framework for building more robust dialogue systems using external knowledge obtained via prompt-based conversations with LLM "experts". The prompts are designed to elicit a step-by-step expert analysis of the current discourse context, intended to mimic the inner monologue of a human professional counselor, and provide it at each turn to the dialogue model. 
As the expert consultation process occurs both during training and inference time, the dialogue model itself can learn useful strategies for flexibly incorporating the advice of the expert. We have shown in both human and automatic evaluations that the addition of such reasoning knowledge results in models which are more suggestive, helpful, and engaging than comparable baseline models which do not consult the expert. Our result supports the hypothesis that current dialogue models often fail to implicitly learn effective goal-oriented strategies from dialogue data alone, and provides evidence that combination with other models may help alleviate current shortcomings. ## 9 **Limitations And Ethical Considerations** Limitations Our proposed approach relies heavily on LLMs and is subject to the same limitations, namely, known biases in the training data and the ability to hallucinate incorrect information. Additionally, we perform the research in English only. It is known that for different cultures, the strategies of showing empathy can be very diverse which requires cultural background knowledge and reasoning processes (Atkins et al., 2016). Pertinent to our intended use-case where models would be deployed locally, LLMs remain computationally intensive even during inference. Despite demonstrating that even smaller models (such as GPT1 and GPT2) do yield performance enhancements for BBMHR, their performance scales with their parameter size and even small-scale models can require expensive hardware for deployment. Consequently, it becomes imperative to explore alternative approaches, such as domain-specific lightweight reasoning models, or distilled or lowprecision inference models, as viable alternatives to resource-intensive LLMs. Ethical Considerations Working within the field of mental health support demands additional considerations. In terms of safety, we acknowledge the limitations of the proposed models and the potential risks associated with directly deploying them to emotionally vulnerable individuals. We do not recommend the deployment of the models presented in this work. Consequently, we emphasize that the models presented in this study are intended to (at most) function in a human-in-the-loop capacity, serving as an assistant to trained mental health practitioners. Furthermore, we take into account the possibility of negative impacts that the present research could have on the community. Despite our intention to develop models for social good, it is important to acknowledge that the dataset contains content that could be problematic (inputs from seekers, and reasoning processes that could potentially be exploited to generate negative or offensive content). We release all data collected for this work to help support future work towards improving MHS systems. ## Acknowledgements We thank the anonymous reviewers for their helpful suggestions and feedback. This work was supported by JSPS KAKENHI Grant Number ## References Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. *arXiv preprint arXiv:2001.09977*. David Atkins, Ayse K Uskul, and Nicholas R Cooper. 2016. Culture shapes empathic responses to physical and social pain. *Emotion*, 16(5):587. Siqi Bao, Huang He, Fan Wang, Rongzhong Lian, and Hua Wu. 2019. Know more about each other: Evolving dialogue strategy via compound assessment. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5382– 5391, Florence, Italy. Association for Computational Linguistics. Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Zhen Guo, Zhibin Liu, and Xinchao Xu. 2021. PLATO-2: Towards building an opendomain chatbot via curriculum learning. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 2513–2525, Online. Association for Computational Linguistics. Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 2020. Piqa: Reasoning about physical commonsense in natural language. In *Proceedings of the* AAAI conference on artificial intelligence, volume 34, pages 7432–7439. Som S Biswas. 2023. Role of chat gpt in public health. Annals of Biomedical Engineering, pages 1–2. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Harrison Chase. 2022. Langchain. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. David DeVault, Kallirroi Georgila, Ron Artstein, Fabrizio Morbini, David Traum, Stefan Scherer, Albert Skip Rizzo, and Louis-Philippe Morency. 2013. Verbal indicators of psychological distress in interactive dialogue with a virtual human. In Proceedings of the SIGDIAL 2013 Conference, pages 193–202, Metz, France. Association for Computational Linguistics. Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In *Proceedings* of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37–49, Saarbrücken, Germany. Association for Computational Linguistics. Kathleen Kara Fitzpatrick, Alison Darcy, and Molly Vierhile. 2017. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): a randomized controlled trial. JMIR mental health, 4(2):e7785. Russell Fulmer, Angela Joerin, Breanna Gentile, Lysanne Lakerink, Michiel Rauws, et al. 2018. Using psychological artificial intelligence (tess) to relieve symptoms of depression and anxiety: randomized controlled trial. *JMIR mental health*, 5(4):e9782. Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-end neural pipeline for goal-oriented dialogue systems using GPT-2. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 583–592, Online. Association for Computational Linguistics. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. *arXiv preprint* arXiv:2006.03654. Natasha Jaques, Judy Hanwen Shen, Asma Ghandeharioun, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. 2020. Humancentric dialog training via offline reinforcement learning. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing* (EMNLP), pages 3985–4003, Online. Association for Computational Linguistics. Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-augmented dialogue generation. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8460–8478, Dublin, Ireland. Association for Computational Linguistics. Minha Lee, Sander Ackermans, Nena Van As, Hanwen Chang, Enzo Lucas, and Wijnand IJsselsteijn. 2019. Caring for vincent: a chatbot for self-compassion. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–13. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Margaret Li, Jason Weston, and Stephen Roller. 2019. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. *arXiv* preprint arXiv:1909.03087. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In *Proceedings* of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986–995, Taipei, Taiwan. Asian Federation of Natural Language Processing. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3469–3483, Online. Association for Computational Linguistics. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland. Association for Computational Linguistics. Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In *Proceedings of the 56th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1468–1478, Melbourne, Australia. Association for Computational Linguistics. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. In *Proceedings of the 2017 Conference on* Empirical Methods in Natural Language Processing: System Demonstrations, pages 79–84, Copenhagen, Denmark. Association for Computational Linguistics. Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, and Hannaneh Hajishirzi. 2022a. Reframing instructional prompts to GPTk's language. In Findings of the Association for Computational Linguistics: ACL 2022, pages 589–612, Dublin, Ireland. Association for Computational Linguistics. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022b. Cross-task generalization via natural language crowdsourcing instructions. 
In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 3470–3487, Dublin, Ireland. Association for Computational Linguistics. Kathleen O'Leary, Stephen M. Schueller, Jacob O. Wobbrock, and Wanda Pratt. 2018. "suddenly, we got to become therapists for each other": Designing peer support chats for mental health. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, page 1–14, New York, NY, USA. Association for Computing Machinery. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In *Advances in Neural Information* Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. Christopher Pudlinski. 2005. Doing empathy and sympathy: Caring responses to troubles tellings on a peer support line. *Discourse studies*, 7(3):267–288. Yujia Qin, Yankai Lin, Jing Yi, Jiajie Zhang, Xu Han, Zhengyan Zhang, Yusheng Su, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2022. Knowledge inheritance for pre-trained language models. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 3921–3937, Seattle, United States. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic opendomain conversation models: A new benchmark and dataset. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5370–5381, Florence, Italy. Association for Computational Linguistics. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In *Proceedings of the 16th Conference of* the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325, Online. Association for Computational Linguistics. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. 
Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, and Dan Roth. 2020. Commonsense reasoning for natural language processing. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 27–33, Online. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2021b. Generating datasets with pretrained language models. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6943– 6951, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2021c. It's not just size that matters: Small language models are also fewshot learners. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339–2352, Online. Association for Computational Linguistics. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. *arXiv preprint arXiv:2303.17580*. Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, et al. 2022. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. arXiv preprint arXiv:2208.03188. Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 2021–2030, Online. Association for Computational Linguistics. Caroline K Tietbohl. 2022. Empathic validation in physician–patient communication: An approach to conveying empathy for problems with uncertain solutions. *Qualitative Health Research*, 32(3):413–425. Gladys Tyen, Mark Brenchley, Andrew Caines, and Paula Buttery. 2022. Towards an open-domain chatbot for language practice. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), pages 234–249, Seattle, Washington. Association for Computational Linguistics. Megan Ung, Jing Xu, and Y-Lan Boureau. 2022. SaFeRDialogues: Taking feedback gracefully after conversational safety failures. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6462– 6481, Dublin, Ireland. Association for Computational Linguistics. Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. 2021. Towards zero-label language learning. arXiv preprint arXiv:2109.09193. Anuradha Welivita, Yubo Xie, and Pearl Pu. 2021. A large-scale dataset for empathetic response generation. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 1251–1264, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Peter West, Chandra Bhagavatula, Jack Hessel, Jena Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2022. 
Symbolic knowledge distillation: from general language models to commonsense models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4602–4625, Seattle, United States. Association for Computational Linguistics. Sixing Wu, Ying Li, Dawei Zhang, and Zhonghai Wu. 2020. Improving knowledge-aware dialogue response generation by using human-written prototype dialogues. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 1402– 1411, Online. Association for Computational Linguistics. Tian Xie, Xinyi Yang, Angela S Lin, Feihong Wu, Kazuma Hashimoto, Jin Qu, Young Mo Kang, Wenpeng Yin, Huan Wang, Semih Yavuz, et al. 2022. Converse–a tree-based modular task-oriented dialogue system. *arXiv preprint arXiv:2203.12187*. Jing Xu, Arthur Szlam, and Jason Weston. 2022. Beyond goldfish memory: Long-term open-domain conversation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5180–5197, Dublin, Ireland. Association for Computational Linguistics. Tom Young, Erik Cambria, Iti Chaturvedi, Hao Zhou, Subham Biswas, and Minlie Huang. 2018. Augmenting end-to-end dialogue systems with commonsense knowledge. In *Proceedings of the AAAI conference* on artificial intelligence, volume 32. Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021a. Bartscore: Evaluating generated text as text generation. Advances in Neural Information Processing Systems, 34:27263–27277. Zheng Yuan, Yijia Liu, Chuanqi Tan, Songfang Huang, and Fei Huang. 2021b. Improving biomedical pretrained language models with knowledge. In *Proceedings of the 20th Workshop on Biomedical Language Processing*, pages 180–190, Online. Association for Computational Linguistics. Adam Zaremba and Ender Demir. 2023. Chatgpt: Unlocking the future of nlp in finance. Available at SSRN 4323643. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019a. Bertscore: Evaluating text generation with bert. *arXiv preprint* arXiv:1904.09675. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019b. Dialogpt: Large-scale generative pre-training for conversational response generation. *arXiv preprint arXiv:1911.00536*. Chujie Zheng, Sahand Sabour, Jiaxin Wen, and Minlie Huang. 2022. Augesc: Large-scale data augmentation for emotional support conversation with pre-trained language models. *arXiv preprint* arXiv:2202.13047. Liu Zhuang, Lin Wayne, Shi Ya, and Zhao Jun. 2021. A robustly optimized BERT pre-training approach with post-training. In *Proceedings of the 20th Chinese* National Conference on Computational Linguistics, pages 1218–1227, Huhhot, China. Chinese Information Processing Society of China. ## A Different Prompts Table 4 shows the results by different styles of prompt. We attempted two types of prompt, questions answering (QA) and text generation (TG). In the QA style, we design a series questions asking the information needed for reasoning processes. And for TG style, we prompt PLMs to generate full reasoning processes as a paragraph of natural text. As shown in the table (text in red color represents the errors), answers by QA style are less accurate and less suggestive than TG style. ## B Expert Advice Samples Table 5 shows the examples of reasoning processes generated by different PLMs used in the experiments. 
## C Questions For Human Evaluation Of Expert Advice

Table 6 shows the questions we asked the crowdsourcing workers in the human evaluation of reasoning processes. Each question targets one piece of information.

## D Questions For Dialogue Evaluation

Table 7 shows the questions we used in the ACUTE-Eval of the dialogue models. For each dimension, we design 2-3 questions, and we calculate the inter-rater agreement for each question to check its validity. Bold font indicates that the inter-rater agreements are higher than 85% and thus are selected for the results calculation.

Context 1:
seeker: I was recently let go from my job due to the covid pandemic, and am now jobless before the holidays.
supporter: I'm sorry, that's tough anytime but that's gotta be brutal because of the year and upcoming holidays.. what kind of work were you doing?
seeker: It really is, I was in a family owned cafe as a waitress. Due to the rise of the virus they ordered everyone to shut back down.

| Style | Sample |
|---|---|
| QA | Q1: How did the seeker feel? A1: The seeker feels tough because of the holiday season. Q2: Why did the seeker feel that way? A2: The seeker was recently let go from her job and is now jobless. Q3: What could the supporter do? A3: The supporter could tell the seeker that she will help her to find a job. |
| TG | In this conversation, the seeker feels down because of being jobless. The supporter could look for some job openings or tell the seeker to start a small business. |

Context 2:
seeker: I'm very upset.
supporter: I'm sorry. Would you like to tell me about it?
seeker: Yes. I invited my friend over to watch my new puppy while I painted my room because my dog has separation anxiety and can't be alone. then she showed up with her new boyfriend.
supporter: What bothered you about that?
seeker: Well I don't know him and my stuff was all out in the living room where he was alone for most of the day. I actually didn't suspect anything would happen at first.

| Style | Sample |
|---|---|
| QA | Q1: how did the seeker feel? A1: The seeker feels upset. Q2: why did the seeker feel that way? A2: The seeker was surprised that her friend brought her new boyfriend over without warning. Q3: what could the supporter do? A3: The supporter should ask the seeker how she would like to handle the situation. |
| TG | In this conversation, the seeker feels upset because her friend brought the new boyfriend over to the seeker's house without notifying. The supporter could tell the seeker that it is better to talk to the friend about the issue. |

Table 4: Some sample reasoning processes by different styles of the prompt.
Context 1:
seeker: I am ok, I'm having a hard time dealing with the pandemic though. ... seeker: What do you like to do for yourself when you feel bad? supporter: It happens anyway, but not all the time. when did it start and what do you think that can cause that. seeker: It started with the pandemic. I think it is because we have to stay home to be safe but other people don't listen and more people are dying because of it.

| Model | Sample |
|---|---|
| GPT-1 | the seeker tried to keep her words from being too painful but it was hard and the source of the pain was more difficult, because her voice was so low. the seeker was afraid to talk about this because it was too painful to admit that she had to be truthful. |
| GPT-2 | feels scared and depressed about losing his job. The supporter could share some experience and suggest the seeker to stay positive. |
| Ada | feels bad because of the pandemic. The supporter could offer a suggestion. |
| Davinci | feels bad because of the pandemic. The supporter could tell the seeker to focus on the things that the seeker can control and try to make a plan for each day. The supporter could also suggest the seeker to join an online community to share experiences. |

Context 2:
seeker: Ok I guess. I do not know how to tell my husband that I am lonely and I want out of the marriage
seeker: He would go to sleep, and then he thinks I am crazy he says we are no kids anymore that need to go on a date we are married.
supporter: Do you think you both might be open to talking to a marriage counselor?
seeker: I want that but I doubt he will talk he tells me he has no problems I have them, and all it does is I doubt myself. I turned this situation in every direction and I just am not able to find a solution.

| Model | Sample |
|---|---|
| GPT-1 | the seeker felt depressed. she thought about what she had to do now. she did not know how to talk to her boyfriend. she could not get him to talk to her but she could not stop him from talking to her. she tried to calm down. i 'll talk to him. she thought. |
| GPT-2 | is worried about losing the job and getting laid off. The supporter could share some related knowledge and advice. |
| Ada | is worried about being alone and not having a husband. The supporter should tell the seeker that marriage counseling is available. |
| Davinci | feels lonely and frustrated. The supporter could suggest the seeker to talk to a marriage counselor. |

Table 5: Samples of reasoning processes generated by different models.

| Question |
|---|
| Does the reasoning describe correct emotion status of the seeker? |
| Does the reasoning summarize the seeker's problem correctly? |
| Does the conversational advice for the supporter make sense? |

Table 6: Questions asked in the human evaluation of reasoning processes.
| Question | Choice 1 |
|---|---|
| Engagingness | |
| Which supporter is more engaging to talk to? | Supporter 1 is more engaging |
| Who would you prefer to talk to for a long conversation? | I would prefer to talk to Supporter 1 |
| Which supporter do you think is more captivating? | Supporter 1 is more captivating than Supporter 2 |
| Humanness | |
| Which supporter sounds more human? | Supporter 1 sounds more human |
| If you had to guess that one supporter is human and one is a bot, which do you think is human? | Supporter 1 sounds human |
| Which supporter sounds more like a real person? | Supporter 1 sounds more like a real person |
| Empathy | |
| Which supporter understands the feelings of the seeker better? | Supporter 1 understands the feeling better |
| If you had to say one of these supporters understands human emotion better, who would you say is better? | Supporter 1 understands emotion better |
| Which supporter shows more empathy on the seeker? | Supporter 1 shows more empathy |
| Specificity | |
| Which supporter responds more specifically? | Supporter 1 talks more relatively |
| The responses of which supporter are less out-of-context? | Supporter 1's responses are less out-of-context |
| Which supporter do you think cares more about the seeker's problem? | Supporter 1 cares more about the seeker's problem |
| Helpfulness | |
| Which supporter gets a stronger urge to help? | Supporter 1 gets a stronger urge to help |
| Which supporter would you prefer to get suggestions from? | I would prefer to get suggestions from Supporter 1 |
| For the suggestions given by the two supporters, which one is a better fit for the seeker? | Supporter 1's suggestion is a better fit than Supporter 2's |
| Experience | |
| Which supporter shares better similar experience? | Supporter 1 shares better experience |
| If you were the seeker, after hearing the experience of which supporter would you feel better? | Supporter 1's experience would make me feel better |

Table 7: Questions for human evaluation of the dialogue models. We design 2-3 questions for each dimension.

## E Interface For Crowdsourcing

Figure 6 shows the interface for crowdsourcing that is used in the evaluation of reasoning processes. The crowdsourcing workers are first given the dialogue, followed by validation questions asking about some details of the conversation. The answers to these questions are then used to filter out invalid submissions: results containing nonsensical answers such as "GOOD, GOOD, GOOD" are removed. After answering the validation questions, the worker reads through the reasoning processes, namely analyses, produced by the different PLMs. The order of the analyses is randomized for each HIT so that the workers cannot pick up a pattern across annotations. Then, for each analysis, the workers are asked to answer the questions in Table 6. Note that for each question, the workers also need to provide a brief justification, which is used as evidence for later validation.

Figure 7 shows the interface we used for ACUTE-Eval of the dialogue models. The workers are first shown two conversations, one taken directly from ESConv (human-human) and one generated by self-chats of the model. The order of the conversations is randomized for each HIT. After reading the two conversations, the workers are asked to answer the questions listed in Table 7. From time to time, we ask the workers to provide brief justifications for their choice, and such justifications are used to filter out invalid results.
## F Responses That Apply 'Online' Strategy In ESConv

The responses tend not to follow the reasoning from the PLMs when the same strategies are frequently repeated in the training data of ESConv for conversations with a similar context. From the collected conversations, we find that in most cases BBMHR follows the suggestions in the annotations, and in all the cases where BBMHR does not follow the suggestions, it follows frequently repeated strategies from the training data of ESConv. For instance, one case where BBMHR tends not to follow the reasoning annotations is the topic of ongoing depression. When the seeker says something like "I feel really depressed because of the pandemic.", BBMHR tends to produce a response like "Have you tried hanging out with your friends online?" even when the reasoning annotation is "The supporter could suggest the seeker to go out and take a break." In ESConv, more than 75% of conversations on the topic of ongoing depression contain similar responses. This disregard for the reasoning annotations also happens in the context of job crisis, where "searching for online information" is a repeated strategy. However, it does not appear for other topics that do not share a frequently repeated strategy. Table 8 shows examples of frequently repeated answers and strategies in the ESConv dataset that can affect the responses. When the BBMHR models take such context as input, they tend to ignore the reasoning processes from the PLMs and follow the strategies stated in the dataset.

![18_image_0.png](18_image_0.png)

Figure 6: The crowdsourcing interface used for the evaluation of reasoning processes.
![19_image_0.png](19_image_0.png)

Figure 7: The crowdsourcing interface used for dialogue evaluation.

| Seeker | Supporter |
|---|---|
| Ongoing depression on pandemic | |
| Yes, I pay musical instruments but do to COVID could not play with the band. | Could you perhaps set up Zoom meetings where you could play together online? |
| Hmm what specific hobbies would you recommend? | Whichever you enjoy.. pick one. There are a lots of online resources you cloud use. |
| Do you have any suggestions? | You can play online games with your friends. |
| That actually sounds like a good idea. I hope the shelter near me will take volunteers with COVID and all. | If you are not comfortable going out due to COVID, you could involve some activities online promoting dog adaption and create awareness online and through social media... |
| All I have to do is think about how alone I am. | Do you have any friends or people you can set up an online zoom call with? |
| I have tried to use zoom and facetime but video chat gives me anxiety. | There are online resources to have some fun with friends too–many blogs suggest hosting a group game night or a shared movie night. |
| Job crisis | |
| Hmm that seems like a good idea, to find video to help uplift me. Do you recommend anything? | well for me i just searched for motivational speaker or top 10 online?work from home jobs. |
| yes It is my main concern. | Have you consulted with a job center, a life coach, or any other resource such as online websites? These may be useful. |
| Yes, I also dont want them to have to support me and my family either. | with keeping your family in mind while trying to find a job have you considered looking for an online job? Just from chatting with you I can tell how much it stresses you out. |
| I would be open to seeking other employment online; work from home on the computer. any suggestions? | Luckily, there are many platforms online that allow you to work from home. I know of several that allow you to do "side gigs". Perhaps you can search and find a few of these. I, myself have had success doing these.. |
| I found it really difficult finding a job right now because of the pandemic. | Have you tried searching a job from some online job-hunting platforms? |

Table 8: Some sample responses under the topics of ongoing depression and job crisis because of the COVID pandemic in ESConv. 75% of the responses are replying about using online resources (online meeting, online gaming, online party, etc.).

## G Sample Conversations From Different Models

Figures 8-13 show sample conversations generated by the BBMHR, BBMH, and BB models on various topics. We observe generally more specific and suggestive responses from the BBMHR models.

## ACL 2023 Responsible NLP Checklist

A For Every Submission:

✓ A1. Did you describe the limitations of your work? Section 8,9

✓ A2. Did you discuss any potential risks of your work? Section 9

✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract

✓ A4. Have you used AI writing assistants when working on this paper? Section 7. We use ChatGPT to purely paraphrase and polish the content. The input to ChatGPT is the text we wrote. The prompt is: Rephrase the following paragraph to fix the grammatical errors while keep the exactly same semantics <paragraph>. The output is a grammatically correct same-meaning paragraph of text.

## B ✓ **Did You Use Or Create Scientific Artifacts?**

Section 4

✓ B1. Did you cite the creators of artifacts you used? Section 1,4

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 1

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank.

✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 3, 5

B5.
Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5, 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 5.1, 5.2 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix C, D, E ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 3, 5.3 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix E D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 5.3
kasanishi-etal-2023-scireviewgen
SciReviewGen: A Large-scale Dataset for Automatic Literature Review Generation
https://aclanthology.org/2023.findings-acl.418
Automatic literature review generation is one of the most challenging tasks in natural language processing. Although large language models have tackled literature review generation, the absence of large-scale datasets has been a stumbling block to the progress. We release SciReviewGen, consisting of over 10,000 literature reviews and 690,000 papers cited in the reviews. Based on the dataset, we evaluate recent transformer-based summarization models on the literature review generation task, including Fusion-in-Decoder extended for literature review generation. Human evaluation results show that some machine-generated summaries are comparable to human-written reviews, while revealing the challenges of automatic literature review generation such as hallucinations and a lack of detailed information. Our dataset and code are available at https://github.com/tetsu9923/SciReviewGen.
# SciReviewGen: A Large-Scale Dataset For Automatic Literature Review Generation

Tetsu Kasanishi1 Masaru Isonuma1 Junichiro Mori1,2 **Ichiro Sakata**1 1 The University of Tokyo 2 RIKEN Center for Advanced Intelligence Project {kasanishi, isonuma, isakata}@ipr-ctr.t.u-tokyo.ac.jp mori@mi.u-tokyo.ac.jp

## Abstract

Automatic literature review generation is one of the most challenging tasks in natural language processing. Although large language models have tackled literature review generation, the absence of large-scale datasets has been a stumbling block to the progress. We release *SciReviewGen*, consisting of over 10,000 literature reviews and 690,000 papers cited in the reviews. Based on the dataset, we evaluate recent transformer-based summarization models on the literature review generation task, including Fusion-in-Decoder (Izacard and Grave, 2021) extended for literature review generation. Human evaluation results show that some machine-generated summaries are comparable to human-written reviews, while revealing the challenges of automatic literature review generation such as hallucinations and a lack of detailed information. Our dataset and code are available at https://github.com/tetsu9923/SciReviewGen.

## 1 Introduction

Scientific document processing has been a topic of interest at the frontiers of natural language processing (NLP) (Cohan et al., 2022). Although neural-based NLP models have achieved remarkable success in diverse areas, scientific documents present distinct challenges, such as longer inputs, technical terms, and complex logic. These challenges have motivated NLP researchers to undertake various studies on scientific documents, such as scientific document summarization, retrieval, and information extraction (Cohan et al., 2018; Beltagy et al., 2019; Cohan et al., 2020; Yuan et al., 2022).

Automatic literature review generation is one of the most attractive research topics in scientific document processing. A literature review is a summary of scientific papers written by experts to comprehend previous findings (Jaidka et al., 2013a). Bornmann and Mutz (2015) found that the number of published scientific papers doubles every nine years, increasing the demand for literature reviews in diverse research areas. Automatic literature review generation significantly benefits researchers by expanding their studies into new research fields.

![0_image_0.png](0_image_0.png)

Figure 1: An overview of the literature review generation task.

However, only a few studies have addressed automatic literature review generation. For example, Taylor et al. (2022a) recently proposed GALACTICA, a large-scale language model trained on 48 million scientific papers. GALACTICA was made publicly available to demonstrate its ability to generate literature reviews; however, it was shut down within a few days owing to the hallucination problem (Taylor et al., 2022b). As there are no large-scale literature review datasets, applying data-hungry supervised neural summarization models is difficult. The absence of large-scale datasets is a significant bottleneck in research on automatic literature review generation.

In this study, we pioneer research on automatic literature review generation by providing a large-scale dataset based on the Semantic Scholar Open Research Corpus (S2ORC; Lo et al., 2020). We release *SciReviewGen*, which consists of over 10,000 literature reviews in the field of computer science and 690,000 papers cited in the reviews.
As our dataset is created in a domain-agnostic way, it is possible to create datasets in other scientific fields, such as the medical and biological sciences.

Figure 1 shows an overview of the literature review generation task. We regard it as a query-focused multi-document summarization (MDS) task. The inputs are the abstracts of papers cited in the reviews, and the queries are the titles of the reviews and chapters, which specify the topics in the reviews. In the actual writing process of a literature review, we need to decide on the papers to cite in the review and group them into several chapters. As the first step toward automatic literature review generation, we exclude those processes from our scope and focus on summarization given the cited papers and the chapter division. As SciReviewGen and S2ORC include bibliographic information on reviews and their cited papers (e.g., DOI, citation, and chapter division), our dataset can also be used for end-to-end literature review generation.

Based on our dataset, we evaluate recent transformer-based summarization models for the literature review generation task. As current summarization models cannot simultaneously generate the entire text of reviews, we split a review into chapters and evaluate each of the generated chapters. In addition to recent models, such as Big Bird (Zaheer et al., 2020) and Fusion-in-Decoder (FiD; Izacard and Grave, 2021), we propose Query-weighted Fusion-in-Decoder (QFiD), a simple extension of FiD for query-focused MDS. As shown in the experimental results, our proposed model outperforms the other models by focusing on the contents concerning the query.

Finally, we conduct a human evaluation of the generated reviews and compare them with human-written reviews. The human evaluation results show that fully automatic literature review generation has not yet been reached, owing to issues such as hallucinations and a lack of detailed information. However, we obtained promising results, showing that approximately 30% of the generated chapters are competitive with or superior to human-written reviews. Our dataset and evaluation results provide a basis for future research on automatic literature review generation.

## 2 Related Work

## 2.1 Datasets For Scientific Document Summarization

The most common datasets for document summarization are based on news articles, such as CNN/Daily Mail (Nallapati et al., 2016), XSum (Narayan et al., 2018), and Multi-News (Fabbri et al., 2019). On the other hand, there are many datasets for scientific document summarization. Cohan et al. (2018) released the arXiv and PubMed datasets, which are commonly used for abstract generation tasks. Lu et al. (2020) proposed Multi-XScience, which aims to generate a related work section using the abstract of a subject paper and the papers cited in its related work section. While a related work section generally describes the position of the subject paper w.r.t. previous studies, literature reviews generally provide a comprehensive summary of a research field. Furthermore, the length of the input/output text of SciReviewGen is significantly longer than that of Multi-XScience (see Section 3.3). Hence, our dataset has distinct challenges from Multi-XScience. DeYoung et al. (2021) proposed MSˆ2 for the automatic generation of systematic reviews in biomedical science. Systematic reviews integrate findings from all relevant studies to answer clearly formulated questions, such as the safety of public water fluoridation (Khan et al., 2003).
In contrast, literature reviews include various topics, such as the motivations behind the research topic, technical details of the methods, and their real-world applications. Furthermore, the target summaries in MSˆ2 are very short and are written under an explicit methodology (Khan et al., 2003). In contrast, literature reviews are significantly longer, and the writing style varies according to the author (Jaidka et al., 2013a,b). Therefore, SciReviewGen is more challenging than MSˆ2 in terms of output diversity. ## 2.2 Automatic Literature Review Generation Few studies have addressed the automatic generation of literature reviews. For example, Mohammad et al. (2009) applied unsupervised summarization methods, such as LexRank (Erkan and Radev, 2004), to generate technical surveys of scientific papers. Agarwal et al. (2011) proposed clusteringbased extractive methods for generating summaries of co-cited papers. However, these methods do not aim to generate literature reviews, and only a few dozen gold summaries are used for evaluation instead of existing literature reviews. While Jaidka et al. (2013a) claimed that they conducted literature review generation, no technical details of the model are described. In contrast to these studies, we first release a large-scale dataset for literature review generation and intensively evaluate recent models by both automatic and human evaluation. ## 2.3 Transformer-Based Long Document / Query-Focused Summarization Transformers (Vaswani et al., 2017) have shown remarkable success in document summarization. Standard Transformer-based models can accept up to only 512-1024 tokens at once due to the high computational cost of the self-attention mechanism. Recently, various methods have been proposed to overcome this limitation (Beltagy et al., 2020), such as the sparse attention mechanism used in Big Bird (Zaheer et al., 2020). FiD (Izacard and Grave, 2021) is a Transformer encoder-decoder model that allows multiple documents to be input. Although initially designed for open-domain question answering, it can be applied to MDS tasks (DeYoung et al., 2021; Vig et al., 2022). Query-focused summarization (QFS) aims to generate summaries related to user-specified queries (Vig et al., 2022). Recent studies have applied Transformers to QFS, but most of them simply concatenate queries into input documents (Vig et al., 2022; Laskar et al., 2022). As mentioned in Section 3, SciReviewGen has an average input length longer than 1024 tokens and contains the titles of literature reviews and chapters as queries. Therefore, we extend FiD for query-focused summarization to tackle the task of literature review generation. Our proposed QFiD explicitly considers the relevance of each input document to the queries. ## 3 Task Definition & Dataset We now describe the literature review generation task and the SciReviewGen dataset, which is created using S2ORC. The data collection process and dataset statistics are presented below. ## 3.1 Task Definition As there are no previous datasets for literature review generation, we first describe the definition of the literature review generation task. Target Text Ideally, the entire text of a literature review should be used as target. However, as current summarization models can generate relatively short summaries of less than a thousand tokens (Fabbri et al., 2019; Narayan et al., 2018), it is difficult to generate the entire text of literature reviews simultaneously. 
Therefore, we split a review paper into chapters in the following experiments and use each chapter as a target text. In addition, as each chapter of the literature review generally discusses different topics, we assume that each chapter can be generated independently as the first step for automatic literature review generation. Input Text The following data are input for the literature review generation task: abstracts of cited papers, *titles of literature reviews*, and titles of chapters. Here, *cited papers* refer to those cited in each chapter. The abstracts of the cited papers are used as the primary sources for the contents of the generated chapter. Although it is desirable to input the full text of the cited papers, we use only abstracts, as approximately 30% of them do not have access to the full text in the S2ORC dataset. The titles of the review and chapter serve as queries. They suggest the topics described in each chapter. Additional Inputs As SciReviewGen contains citation information, such as citation sentences and citation networks, they can be used as information sources that complement abstracts. The citation sentences provide the cited paper's actual impact on the research community (Yasunaga et al., 2019), whereas citation networks provide the relationships between the cited papers. Furthermore, SciReviewGen are linked to S2ORC by paper_id. Therefore, various metadata in S2ORC (e.g., DOI, journal, and semantic scholar URL) attached to the literature review and cited papers can be accessed. ## 3.2 Dataset Construction We constructed SciReviewGen based on S2ORC (Lo et al., 2020), a large corpus of English academic papers. First, as candidates for literature reviews, we extracted papers with access to fulltext data where the field of study includes "Computer Science," and the title contains either "survey," "overview," "literature review," or "a review." This yielded 13,984 candidates for the literature reviews. As the above candidates still contain many papers unrelated to literature reviews, we trained a SciBERT-based classifier (Beltagy et al., 2019) to extract appropriate literature reviews from the candidates. We first created a gold-standard dataset of literature reviews to train the classifier. We asked three annotators with computer science research backgrounds to annotate whether each candidate paper was suitable as a literature review following the two criteria: 1) Reviewing multiple scientific papers. Not reviewing general tools or books and not explaining a specific project or shared task; | dataset | train/valid/test | input len. | target len. | # inputs | unigrams | bigrams | trigrams | 4-grams | |-------------------------|--------------------|--------------|---------------|------------|------------|-----------|------------|-----------| | Multi-News | 44,972/5,622/5,622 | 2,103 | 264 | 2.79 | 16.87% | 55.57% | 74.44% | 81.23% | | MSˆ2 | 14,188/2,021/1,667 | 6,930 | 61 | 22.80 | 15.24% | 62.35% | 87.23% | 95.27% | | Multi-XScience | 30,369/5,066/5,093 | 778 | 116 | 4.42 | 35.28% | 81.57% | 94.88% | 97.89% | | SciReviewGen (original) | 9,187/484/459 | 12,503 | 8,082 | 68.00 | 17.88% | 64.86% | 90.56% | 97.20% | | SciReviewGen (split) | 84,705/4,410/4,457 | 1,274 | 604 | 7.01 | 32.74% | 80.23% | 95.16% | 98.09% | 2) Only reviewing scientific papers. Not proposing new methods, re-testing previous studies, or conducting questionnaires (i.e., the paper does not contain contents that cannot be generated only by the cited papers' information). 
The above criteria were set so that the annotators could judge only from the title and abstract of a candidate paper. They classified whether each paper was suitable as a literature review, and the class in which most annotators voted was used as the final annotation result. The annotators classified 583 of 889 candidate papers as suitable and 306 as unsuitable, resulting in Cohen's kappa = 0.66. The annotated papers were then split into a train/valid/test set containing 589/150/150 papers for training the SciBERT-based classifier. Using the train/valid split, we fine-tuned the SciBERT classifier, which achieved precision = 88%, recall = 97%, and f1 = 92% on the test split. Using this classifier, we extracted 10,269 papers from 13,984 candidate papers, including 210,049 chapters and 698,049 cited papers. As a result, we constructed SciReviewGen (original), consisting of the entire text of literature reviews, the titles of literature reviews and chapters, and the abstracts of the cited papers. For our experiments, we split the literature reviews into chapters and excluded chapters that had access to less than two abstracts of their cited papers, leaving 93,572 chapters. This split version is denoted as SciReviewGen (split). As S2ORC does not contain the data of some cited papers, the number of filtered chapters will increase if we obtain the data of all cited papers. Finally, to ensure that the test set includes only suitable papers, we set the human-annotated papers as the test sets and created the train/valid sets by randomly splitting the rest for both original and split version. Furthermore, we removed the chapters in the test sets that have more than 20% overlap of cited papers with one or more literature reviews in the training set. ## 3.3 Dataset Statistics Table 1 presents the statistics of SciReviewGen compared with current large-scale MDS datasets, including Multi-News (Fabbri et al., 2019), MSˆ2 (DeYoung et al., 2021), and Multi-XScience (Lu et al., 2020). Regarding the split version, SciReviewGen has more than approximately twice as many summaries as the other datasets, which is more suitable for data-driven neural-based summarization models. The target length is more than twice that of the other datasets. SciReviewGen also has more input documents and a longer input length than Multi-XScience. Furthermore, the original version presents distinct characteristics, such as significantly longer input/target text and more input documents than the others. These characteristics would be the challenge for further research in automatic literature review generation. Note that the ratio of input length to target length are relatively small in both versions; however, inputs can be complemented by additional information, such as body text and citation sentences. Table 1 also lists the percentage of novel n-grams in the target summary that do not appear in the input documents. The target summaries in SciReviewGen contain more novel n-grams than those in Multi-News and MSˆ2, indicating that SciReviewGen is more challenging and suitable for abstract summarization. It is reasonable that both SciReviewGen (split) and Multi-XScience contain many novel n-grams because both the literature reviews and related work sections contain high-level summaries of the cited papers (Jaidka et al., 2013a, 2019). ## 4 Experiments We study the performance of the current document summarization models on the split version of SciReviewGen (hereinafter refered to as SciReviewGen). 
We use the abstracts of the cited papers, the literature review titles, and the chapter titles as inputs. As mentioned in Section 3.3, SciReviewGen has an average input length of longer than 1024 tokens and contains many novel n-grams. In addition, it contains literature review titles and chapter titles that can be used as summarization queries. Therefore, we employ query-focused abstractive summarization models that can accept long sequences for the literature review generation task. We first experiment with several transformer-based models that simply concatenate the queries with the documents as encoder inputs. We then propose the Query-weighted Fusion-in-Decoder (QFiD), which extends Fusion-in-Decoder (FiD) to explicitly consider each paper's relevance to the queries.

## 4.1 Baseline Methods

We use LEAD, LexRank (Erkan and Radev, 2004), ext-oracle, Big Bird (Zaheer et al., 2020), and FiD (Izacard and Grave, 2021) as the baseline methods. LEAD-k selects the first k sentences from each input document and concatenates them as a summary. LexRank is a graph-based unsupervised extractive method that builds a graph in which the sentences are nodes and the similarities between the sentences are edges. It calculates the importance of sentences using the PageRank algorithm (PAGE, 1998) and extracts the top l sentences with the highest importance as a summary. Ext-oracle greedily selects l sentences that maximize the ROUGE-2 score between the selected sentences and the target summary; its results show the upper bound of an extractive system on SciReviewGen (a sketch of this greedy selection is given at the end of this subsection). We set k = 1 and l = 5 such that the average summary length is the same as that of the abstractive models. Big Bird simplifies the self-attention computation in the Transformer using a sparse attention mechanism, supporting longer inputs of up to approximately 16K tokens. In our experiments, we use the model that was fine-tuned for summarization on the arXiv dataset (Cohan et al., 2018), available at https://huggingface.co/google/bigbird-pegasus-large-arxiv, and further fine-tuned it on SciReviewGen. FiD is a Transformer encoder-decoder model that allows multiple documents to be input. As shown in the upper part of Figure 2, FiD separately encodes multiple documents and concatenates their hidden states. The concatenated hidden states are then input into the decoder together, which enables multiple documents to be processed simultaneously while capturing the relations among documents. In our experiment, we initialized the weights of FiD with the BART-Large model (Lewis et al., 2020) fine-tuned for summarization on the CNN/Daily Mail dataset, and further fine-tuned it on SciReviewGen. Note that we also evaluated the GPT-3 model davinci (Brown et al., 2020) on SciReviewGen with the prompt "Summarize the above scientific papers focusing on the title and chapter title." However, it yielded almost no meaningful sentences, resulting in significantly lower ROUGE scores (ROUGE-1 = 9.77, ROUGE-2 = 1.25, ROUGE-L = 8.67).
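To make the ext-oracle baseline concrete, the following is a minimal sketch of the greedy selection described above. It is not the implementation behind the reported numbers: the paper's evaluation presumably relies on standard ROUGE tooling, whereas here a simplified whitespace-tokenized bigram F1 stands in for ROUGE-2, and the function names (`ext_oracle`, `rouge2_f1`) are ours.

```python
from collections import Counter
from typing import List


def bigrams(tokens: List[str]) -> Counter:
    """Multiset of adjacent token pairs."""
    return Counter(zip(tokens, tokens[1:]))


def rouge2_f1(candidate: List[str], reference: List[str]) -> float:
    """Simplified ROUGE-2: F1 over bigram overlap (no stemming or stopword handling)."""
    cand, ref = bigrams(candidate), bigrams(reference)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    return 2 * precision * recall / (precision + recall)


def ext_oracle(source_sentences: List[str], target_summary: str, l: int = 5) -> List[str]:
    """Greedily pick l sentences maximizing ROUGE-2 of the growing selection
    against the target chapter (the extractive upper bound)."""
    target_tokens = target_summary.lower().split()
    selected, selected_tokens = [], []
    candidates = list(source_sentences)
    for _ in range(min(l, len(candidates))):
        best_sent, best_score = None, -1.0
        for sent in candidates:
            # For simplicity, selected sentences are concatenated before scoring.
            score = rouge2_f1(selected_tokens + sent.lower().split(), target_tokens)
            if score > best_score:
                best_sent, best_score = sent, score
        selected.append(best_sent)
        selected_tokens += best_sent.lower().split()
        candidates.remove(best_sent)
    return selected
```

In our setting, `ext_oracle` would be run over all sentences of the cited papers' abstracts for each chapter, with l = 5 as above.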
## 4.2 Query-Weighted Fusion-in-Decoder (QFiD)

This section describes our QFiD model, which extends FiD to explicitly consider each paper's relevance to the queries. As mentioned in Section 3.1, the titles and chapter titles serve as queries that suggest the topic of each chapter. The baseline methods simply concatenate these queries with the abstract of each cited paper. For FiD, this simple approach makes the encoder consider the local relation between the queries and the words of each abstract. However, the model cannot explicitly identify which cited papers are related to the queries. In the literature review generation task, not all cited papers are related to a chapter's topic. For example, when a chapter describes machine learning methods, it typically cites papers that describe datasets or evaluation metrics along with experimental results. These papers are not directly related to the methods, and their contents should receive less focus. For this reason, we improved FiD to explicitly consider the relevance of each cited paper to the queries. Our model weights each cited paper according to its similarity to the query to identify which papers are more related to the topic of the chapter. Specifically, as shown in the lower part of Figure 2, let n be the number of cited papers, r_m be the input token sequence of the m-th cited paper, and l_m be its length for m ∈ {1, ..., n}. Let q be the query that concatenates the title and chapter title, and l_q be its length. The hidden states of the m-th cited paper H_m ∈ R^{d×(l_q+l_m)} and of the query H_q ∈ R^{d×l_q} are obtained as follows:

$$H_m = \operatorname{Enc}(\mathbf{q} + \mathbf{r}_m) \qquad (1)$$
$$H_q = \operatorname{Enc}(\mathbf{q}) \qquad (2)$$

where Enc is the BART encoder, and d denotes the dimension of each hidden state. Then, the feature vectors of the m-th cited paper h_m ∈ R^d and of the query h_q ∈ R^d are obtained as follows:

$$\mathbf{h}_m = \operatorname{Avgpool}(H_m) \qquad (3)$$
$$\mathbf{h}_q = \operatorname{Avgpool}(H_q) \qquad (4)$$

where Avgpool computes the average of the hidden states. The similarity between the query and the m-th cited paper w_m ∈ R is obtained as the inner product of these vectors. Subsequently, the hidden states of the m-th cited paper are weighted by w_m and input to the BART decoder:

$$w_m = 1 + \frac{\exp(\mathbf{h}_m^{\top}\mathbf{h}_q)}{\sum_{m'=1}^{n}\exp(\mathbf{h}_{m'}^{\top}\mathbf{h}_q)} \qquad (5)$$
$$\mathbf{c} \sim \operatorname{Dec}\left([w_1 H_1; \ldots; w_n H_n]\right) \qquad (6)$$

where [w_1 H_1; ...; w_n H_n] is the concatenation of the matrices, Dec denotes the BART decoder, and c is the generated chapter of the literature review.

## 4.3 Implementation Details

The input data format is shown in Table 2. We concatenated the title and chapter title of the literature review, the abstract of the cited paper, and an identifier to distinguish the different cited papers. They are separated by the token "<s>", and the inputs of different cited papers are separated by the token "</s>".

Table 2: Input data format. Literature review title <s> Chapter title <s> Abstract of paper 1 <s> BIB001 </s> Literature review title <s> Chapter title <s> Abstract of paper 2 <s> BIB002 </s> ... </s> Literature review title <s> Chapter title <s> Abstract of paper N <s> BIB00N

In Big Bird, the information on all cited papers is concatenated and input into the model. In FiD/QFiD, the information on each cited paper is input into the encoder separately. In LEAD, LexRank, and ext-oracle, only the abstract of each paper is input because titles are noise for the extractive methods. The models were implemented using the PyTorch (Paszke et al., 2019) and HuggingFace Transformers (Wolf et al., 2020) libraries.
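As a concrete illustration of Eqs. (1)–(6) and the Table 2 input format, the following is a minimal, self-contained sketch. It is not the released implementation: the actual model wraps a pretrained BART encoder and decoder, whereas here random tensors stand in for the encoder outputs Enc(q + r_m) and Enc(q), and the helper names (`build_paper_input`, `query_weighted_fusion`) are ours.

```python
from typing import List

import torch


def build_paper_input(review_title: str, chapter_title: str,
                      abstract: str, bib_id: str) -> str:
    """Per-paper input string in the Table 2 format (one entry per cited paper)."""
    return " <s> ".join([review_title, chapter_title, abstract, bib_id])


def query_weighted_fusion(paper_hiddens: List[torch.Tensor],
                          query_hidden: torch.Tensor) -> torch.Tensor:
    """Concatenate the encoder outputs of all cited papers, re-weighted by their
    similarity to the query (Eqs. 3-6). Shapes follow the usual
    (sequence_length, hidden_dim) convention."""
    # h_m, h_q: average-pooled feature vectors (Eqs. 3-4)
    h_papers = torch.stack([h.mean(dim=0) for h in paper_hiddens])  # (n, d)
    h_query = query_hidden.mean(dim=0)                              # (d,)
    # w_m = 1 + softmax of inner products with the query (Eq. 5)
    weights = 1.0 + torch.softmax(h_papers @ h_query, dim=0)        # (n,)
    # [w_1 H_1; ...; w_n H_n], the cross-attention memory for the decoder (Eq. 6)
    fused = torch.cat([w * h for w, h in zip(weights, paper_hiddens)], dim=0)
    return fused


if __name__ == "__main__":
    d = 16
    # Dummy encoder outputs standing in for Enc(q + r_m) and Enc(q).
    papers = [torch.randn(20, d), torch.randn(35, d), torch.randn(28, d)]
    query = torch.randn(8, d)
    fused_states = query_weighted_fusion(papers, query)
    print(fused_states.shape)  # torch.Size([83, 16])
    # In the full model, `fused_states` would be passed to the BART decoder
    # to generate the chapter, exactly as in vanilla FiD but with the
    # query-dependent weights applied to each paper's hidden states.
```

The design keeps the weights in the range (1, 2), so every cited paper still contributes to generation; the softmax only shifts attention toward papers whose pooled representation is closer to the query.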
The number of parameters of Big Bird and FiD/QFiD is approximately 577M and 406M, respectively. These models were trained for ten epochs with a single run, and the final checkpoints were selected based on the ROUGE-2 scores on the validation dataset. Training required approximately three days on one NVIDIA A100 GPU (40GB). For validation, 1,000 chapters were randomly sampled from the 8,217 chapters in the original validation dataset owing to time constraints. The AdamW optimizer (Loshchilov and Hutter, 2019) was used, with β1 = 0.9, β2 = 0.999, and learning_rate = 5e-5. The model output was decoded by beam search with beam_size = 4. These hyperparameters were determined based on the validation performance.

| Models | ROUGE-1 | ROUGE-2 | ROUGE-L |
|-------------|-----------|-----------|-----------|
| LEAD | 23.09 | 4.68 | 11.72 |
| LexRank | 24.40 | 5.02 | 12.52 |
| Ext-oracle | 29.43 | 10.13 | 14.88 |
| Big Bird | 24.25 | 4.08 | 15.30 |
| FiD | 32.40 | 6.75 | 16.17 |
| QFiD (ours) | 34.00 | 7.75 | 16.52 |

Table 3: ROUGE evaluation results on SciReviewGen.

## 5 Experimental Results

## 5.1 Automatic Evaluation

We report the ROUGE scores (Lin, 2004) for the baseline methods and our QFiD on the SciReviewGen dataset in Table 3. The results show that FiD-based models outperform the others except for ext-oracle, whereas Big Bird is only comparable to LEAD and LexRank. As Big Bird is pretrained on abstract generation (single-document summarization; SDS), it yields significantly lower performance in literature review generation. These results contrast with those reported on MSˆ2 and Multi-XScience, where SDS models are competitive with MDS models. Since simply fine-tuning an SDS model does not work here, literature review generation presents distinct characteristics from the above datasets. In contrast, FiD uses an encoder pretrained on an SDS task and encodes each cited paper separately, leading to significantly higher performance. Furthermore, the FiD-based models outperform ext-oracle on ROUGE-1 and ROUGE-L, which shows the difficulty of our task, since simply copying sentences from the cited papers does not work well. Table 3 shows that QFiD outperforms all the baseline methods, including vanilla FiD. This improvement suggests that QFiD can generate more appropriate reviews by considering the relevance of each cited paper to the queries.

## 5.2 Human Evaluation

We conducted a human evaluation of QFiD, which yielded the highest ROUGE score. The generated and ground truth chapters were compared along the following five criteria.

- *Relevance*: relevance to the title of the paper and chapter
- *Coherence*: how well the text is structured and coherent
- *Informativeness*: whether the text mentions concrete information in the cited papers, not only general information
- *Factuality*: whether the text does not contradict the content of the cited papers
- *Overall*: which of the texts is preferable as a literature review?

As the evaluation required expert knowledge, we asked three annotators with graduate-level computer science backgrounds to perform the evaluation. All annotators had at least one year of research experience in computer vision. We asked them to rate the generated chapters superior, comparable, or inferior to the ground truth chapter according to each criterion. The generated chapters, ground truth chapters, cited papers' abstracts, cited papers' body text (as needed), literature review titles, and chapter titles were provided to the annotators.
They were not informed which of the two chapters was the ground truth. We selected five literature reviews in the computer vision domain for the evaluation (Wang et al., 2020; Jiao and Zhao, 2019; Hossain et al., 2019; Laga, 2019; Tian et al., 2020). All of them had less than 20% overlap of cited papers with any literature review in the training set. Since a considerable amount of time is required to evaluate long scientific texts, we randomly selected 30 chapters for the evaluation, where the total number of words in the cited papers' abstracts was less than 1,000, and that of the ground truth was less than 400. The papers/chapters used for the evaluation were chosen regardless of the quality of the generated text. Table 4 shows the human evaluation results. The percentages indicate the proportion of the ground truth chapters that are rated superior to the generated chapters (Ground truth > Generated), comparable, and inferior to the generated chapters (Generated > Ground truth) w.r.t. each criterion. The interannotator agreement is scored as Cohen's kappa = 0.212, which is reasonable because the number of categories is three (Hallgren, 2012). The ground truth outperforms the generated chapters for all | Evaluation results | Relevance | Coherence | Informativeness | Factuality | Overall | |--------------------------|-------------|-------------|-------------------|--------------|-----------| | Ground truth > Generated | 25.6% | 48.9% | 64.4% | 40.0% | 68.9% | | Comparable | 56.7% | 31.1% | 20.0% | 48.9% | 8.9% | | Generated > Ground truth | 17.8% | 20.0% | 15.6% | 11.1% | 22.2% | Table 4: Human evaluation results on the SciReviewGen dataset. We show the percentage of the ground truth chapters rated superior/comparable/inferior to the chapters generated by QFiD. criteria, indicating that automatic literature review generation does not achieve human-level performance. However, regarding *overall*, 68.9% of the ground truth outperforms the generated chapters, whereas 22.2% of the generated chapters outperforms the ground truth. This result is surprising because some machine-generated chapters are more sophisticated than those written by experts. The generated chapters achieve relatively high scores for *relevance* and *coherence*. Specifically, for *relevance*, 74.5% of the generated chapters are comparable or superior to the ground truth, indicating that our QFiD can generate coherent summaries concerning the titles of papers and chapters. However, for *informativeness* and *factuality*, the generated chapters remarkably underperform the ground truth. This underperformance suggests that generated reviews tend to describe general or sometimes incorrect information. Specifically, while the total percentage of generated chapters comparable or superior to the ground truth is 60.0% w.r.t. *factuality*, the percentage is only 35.6% for *informativeness*. We elaborate on these causes in Section 5.3. Table 5 shows an example of a chapter describing *progressive upsampling super-resolution*, one of the techniques for upsampling operation. Two annotators rate the generated chapter superior to the ground truth w.r.t. *overall*. The ground truth first mentions the general upsampling operation (BIB001 and BIB003) and then explains progressive upsampling super-resolution in detail (BIB002, BIB004, and BIB005). 
In contrast, the generated chapter consistently focuses on progressive upsampling super-resolution by referring to BIB002, BIB004, and BIB005, and explains the details of the papers with sufficient fluency. This example suggests that the generated chapter appropriately focuses on content related to the titles while maintaining sufficient consistency. For more examples, see Appendix A and B. ![7_image_0.png](7_image_0.png) ## 5.3 Discussions As shown in Section 5.2, the generated chapters considerably underperform the ground truth concerning *informativeness* and *factuality*, which may be attributed to the lack of source information. Since only abstracts are input into the model, it is difficult to describe the details of the cited papers. As discussed in Ji et al. (2022), hallucinations tend to occur when the target text contains a large amount of information absent from the source. Therefore, adding other input information, such as body text, citation sentences, and the text of cocited papers, will improve both *informativeness* and *factuality*. In addition to textual information, citation networks can be used to determine which cited papers should be focused on concerning the topic. While this study uses only abstracts and titles as the first step for literature review generation, using the aforementioned information would be required in future research. The human evaluation results clarified that we have not yet reached the stage of fully automatic literature review generation without manual modifications. At the same time, we show some promising results that approximately 30% of the generated chapters are competitive or superior to humanwritten reviews concerning *overall*. This result suggests that a fully automatic generation of literature reviews will be possible if the remaining issues, such as hallucinations and less informativeness, are solved. Currently, automatic literature review generation can be effectively utilized with human revision, such as writing assistance tools, by providing drafts of literature reviews. ## 6 Conclusion We propose SciReviewGen, a large-scale dataset for automatic literature review generation. We also introduce an extension of FiD (Izacard and Grave, 2021) for query-focused summarization and show that our QFiD model outperforms naive FiD. The human evaluation results show that some generated texts are comparable to human-written literature reviews. However, challenges such as hallucinations and a lack of detailed information still remain to be addressed. We hope that our study will serve as a basis for future research on the challenging task of automatic literature review generation. ## Limitations In our experiment, we use only abstract text as the input text for literature review generation However, in writing literature reviews, a writer reads the full text of each cited paper and even other papers related to the research area. Therefore, the input data are insufficient to write a complete literature review. As only 70% of the cited papers have access to the body text in our SciReviewGen, a dataset containing full-text information is required for further research. In human-written literature reviews, the chapters complement each other and are not redundant. However, as our QFiD and baseline models generate each chapter independently, they cannot consider the relationships between chapters. 
Furthermore, the relations between each cited paper are considered in the actual literature review writing process (e.g., which paper is the first on the topic and which is the following). However, these relationships are not considered in the models. In future research, a literature review generation model that can consider the relations between chapters and cited papers by using additional information, such as the contents of other chapters, citation networks, and citation sentences, should be investigated. As mentioned in Section 5.2, the generated text contains incorrect information to a certain extent. Therefore, we cannot publish it without human revision. Currently, the model can be utilized as a writing assistance tool, not as a complete literature review generation model. ## Ethics Statement Potential Risks As discussed in Section 5.2, our model risks generating incorrect information. Currently, it can be effectively utilized with human revision, such as writing assistance tools, by providing drafts of literature reviews. However, a literature review with wrong information could be published if abused. Licenses We used S2ORC (Lo et al., 2020, CC BY-NC 4.0), PyTorch (Paszke et al., 2019, BSDstyle license), and HuggingFace Transformers (Wolf et al., 2020, MIT for facebook/bart-large-cnn, Apache-2.0 for all materials) as scientific artifacts. All artifacts can be used for research purposes. We release the SciReviewGen dataset based on S2ORC, as CC BY-NC 4.0 allows users to adapt and share licensed material for noncommercial purposes. Annotation Procedures The annotation procedures complied with the ACL Ethics Policy. Prior to the annotation, we informed the ethics review board in our university of the annotation procedures and were notified that it was exempt from ethics review. More details are presented in Section C. ## Acknowledgements We would like to thank the anonymous reviewers for their valuable feedback. This work was supported by NEDO JPNP20006, JST ACT-X JPMJAX1904, JST CREST JPMJCR21D1, Japan. ## References Nitin Agarwal, Ravi Shankar Reddy, Kiran Gvr, and Carolyn Penstein Rosé. 2011. Towards multi-document summarization of scientific articles:making interesting comparisons with SciSumm. In Proceedings of the Workshop on Automatic Summarization for Different Genres, Media, and Languages, pages 8– 15, Portland, Oregon. Association for Computational Linguistics. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615– 3620, Hong Kong, China. Association for Computational Linguistics. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. *arXiv* preprint arXiv:2004.05150. Lutz Bornmann and Rüdiger Mutz. 2015. Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. Journal of the Association for Information Science and Technology, 66(11):2215–2222. 
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615–621, New Orleans, Louisiana. Association for Computational Linguistics. Arman Cohan, Guy Feigenblat, Dayne Freitag, Tirthankar Ghosal, Drahomira Herrmannova, Petr Knoth, Kyle Lo, Philipp Mayr, Michal ShmueliScheuer, Anita de Waard, and Lucy Lu Wang. 2022. Overview of the third workshop on scholarly document processing. In *Proceedings of the Third Workshop on Scholarly Document Processing*, pages 1–6, Gyeongju, Republic of Korea. Association for Computational Linguistics. Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, and Daniel Weld. 2020. SPECTER: Document-level representation learning using citation-informed transformers. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2270–2282, Online. Association for Computational Linguistics. Jay DeYoung, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl, and Lucy Lu Wang. 2021. MSˆ2: Multidocument summarization of medical studies. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7494– 7513, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Günes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. *Journal of artificial intelligence research*, 22:457–479. Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics. Kevin A Hallgren. 2012. Computing inter-rater reliability for observational data: an overview and tutorial. *Tutorials in quantitative methods for psychology*, 8(1):23–34. MD Zakir Hossain, Ferdous Sohel, Mohd Fairuz Shiratuddin, and Hamid Laga. 2019. A comprehensive survey of deep learning for image captioning. ACM Computing Surveys, 51(6):1–36. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics. Kokil Jaidka, Christopher Khoo, and Jin-Cheon Na. 2013a. Deconstructing human literature reviews – a framework for multi-document summarization. 
In Proceedings of the 14th European Workshop on Natural Language Generation, pages 125–135, Sofia, Bulgaria. Association for Computational Linguistics. Kokil Jaidka, Christopher SG Khoo, and Jin-Cheon Na. 2013b. Literature review writing: how information is selected and transformed. In *Aslib Proceedings*, volume 65, pages 303–325. Kokil Jaidka, Christopher SG Khoo, and Jin-Cheon Na. 2019. Characterizing human summarization strategies for text reuse and transformation in literature review writing. *Scientometrics*, 121(3):1563–1582. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. 2022. Survey of hallucination in natural language generation. *ACM Computing Surveys*. Just Accepted. Licheng Jiao and Jin Zhao. 2019. A survey on the new generation of deep learning in image processing. IEEE Access, 7:172231–172263. Khalid S Khan, Regina Kunz, Jos Kleijnen, and Gerd Antes. 2003. Five steps to conducting a systematic review. *Journal of the royal society of medicine*, 96(3):118–121. Hamid Laga. 2019. A survey on deep learning architectures for image-based depth reconstruction. arXiv preprint arXiv:1906.06113. Md Tahmid Rahman Laskar, Enamul Hoque, and Jimmy Xiangji Huang. 2022. Domain adaptation with pre-trained transformers for query-focused abstractive text summarization. *Computational Linguistics*, 48(2):279–320. Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. 1989. Backpropagation applied to handwritten zip code recognition. *Neural computation*, 1(4):541–551. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Ruijun Liu, Yuqian Shi, Changjiang Ji, and Ming Jia. 2019. A survey of sentiment analysis based on transfer learning. *IEEE Access*, 7:85401–85412. Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel Weld. 2020. S2ORC: The semantic scholar open research corpus. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4969–4983, Online. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Yao Lu, Yue Dong, and Laurent Charlin. 2020. MultiXScience: A large-scale dataset for extreme multidocument summarization of scientific articles. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8068–8074, Online. Association for Computational Linguistics. Saif Mohammad, Bonnie Dorr, Melissa Egan, Ahmed Hassan, Pradeep Muthukrishan, Vahed Qazvinian, Dragomir Radev, and David Zajic. 2009. Using citations to generate surveys of scientific paradigms. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 584–592, Boulder, Colorado. Association for Computational Linguistics. 
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çaglar Gulçehre, and Bing Xiang. 2016. ˘ Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics. L PAGE. 1998. The pagerank citation ranking: Bringing order to the web. In Proc. of the 7ˆ< th> WWW Conf., 1998. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing* Systems, volume 32, pages 8026–8037. Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022a. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085. Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022b. Galactica demo. https://galactica.org/. (Accessed on 12/26/2022). Chunwei Tian, Lunke Fei, Wenxian Zheng, Yong Xu, Wangmeng Zuo, and Chia-Wen Lin. 2020. Deep learning on image denoising: An overview. Neural Networks, 131:251–275. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30, pages 6000––6010. Jesse Vig, Alexander Fabbri, Wojciech Kryscinski, Chien-Sheng Wu, and Wenhao Liu. 2022. Exploring neural models for query-focused summarization. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1455–1468, Seattle, United States. Association for Computational Linguistics. Zhihao Wang, Jian Chen, and Steven CH Hoi. 2020. Deep learning for image super-resolution: A survey. IEEE transactions on pattern analysis and machine intelligence, 43(10):3365–3387. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R Fabbri, Irene Li, Dan Friedman, and Dragomir R Radev. 2019. Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks. In *Proceedings of the AAAI conference on artificial* intelligence, volume 33, pages 7386–7393. 
Weizhe Yuan, Pengfei Liu, and Graham Neubig. 2022. Can we automate scientific reviewing? *Journal of* Artificial Intelligence Research, 75:171–212. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences. In *Advances in Neural* Information Processing Systems, volume 33, pages 17283–17297. ## A Examples Of Generated Chapters Table 8 shows an example of a chapter describing stylized caption, one of the categories of image captioning methods. The generated chapter is rated competitive or superior to the ground truth w.r.t. relevance and *coherence* by all annotators. The first half describes the background of stylized captions, while the second half describes the details of BIB002 and BIB003, which are both methods of stylized captions. This example suggests that the generated chapter is a consistent and structured summary of stylized captions. Table 9 shows an example of a chapter describing *Convolutional neural networks (CNN)*. The generated chapter is rated inferior to the ground truth w.r.t. *informativeness* and *factuality* by more than two annotators. The generated chapter does not refer to BIB002 and BIB003 and contains only general descriptions of CNN. Moreover, it has a wrong description that BIB001 proposed CNN in 2012. In fact, CNN was first proposed by LeCun et al. (1989), and BIB001 proposed new pooling methods for CNN in 2014. On the other hand, it correctly states that Yann LeCun proposed CNN, although no input documents state it at all. This result indicates that the model learned knowledge about the computer vision domain during the training process and includes it in the generated text correctly. ## B Example Of A Generated Literature Review We show an example of a literature review generated based on Liu et al. (2019) at the end of the appendix. It has less than 20% overlap of cited papers with any literature review in the training set. Only chapters that have access to two or more cited papers are shown. ## C Details Of Annotation Procedures Details Of Annotation For Filtering Literature Reviews The full instructions to the participants are shown in Table 6. We recruited three graduate students from our graduate school with graduatelevel computer science backgrounds as annotators. The working hours averaged 50 hours for each annotator, and we paid 100,000 yen as rewards. The hourly wage is determined according to the university's rules and is higher than the minimum wage in our country. We informed the annotators that Referring to the titles and abstracts of 889 candidate papers, please annotate them per the criteria below. Please feel free to ask me if you have any questions. - Reviewing multiple scientific papers. - Not reviewing general tools or books. - Not explaining a specific project or shared task. - Only reviewing scientific papers. Not proposing new methods, re-testing previous studies, or conducting questionnaires (i.e., the paper does not contain content that cannot be generated only by the cited papers' information). Table 6: Full instructions to participants in the suitability annotation Please evaluate the chapters of literature reviews automatically generated by the model. A literature review is a scientific paper that summarizes existing scientific articles and provides an overview of the research field. 
We developed a model that takes the abstracts of the papers cited by the chapter and the titles of the paper and chapter as inputs and generates the chapter. Specifically, please evaluate which is better or comparable regarding the generated and human-written chapters following the five criteria below. We provide the abstracts and full text of the cited papers, the titles of the papers and chapters, the generated chapters, and the human-written chapters. - Relevance: Relevance to the title of the paper and chapter. - Coherence: How well the text is structured and coherent. - Informativeness: Whether the text mentions concrete information in the cited papers, not only general information. - Factuality: Whether the text does not contradict the content of the cited papers. - Overall: Which of the texts is preferable as a literature review? Table 7: Full instructions to participants in the human evaluation the data would be used to create the SciReviewGen dataset. Details of Human Evaluation The full instructions to the participants are shown in Table 7. We recruited three graduate students from our graduate school with graduate-level computer science backgrounds for the evaluation. The working hours averaged 30 hours for each annotator, and we paid 50,000 yen as rewards. The hourly wage is determined according to the university's rules and is higher than the minimum wage in our country. We informed the annotators that the data would be used to evaluate the performance of our model, and the evaluation results would be reported herein. Title: **A Comprehensive Survey of Deep Learning for Image** ![13_image_0.png](13_image_0.png) Captioning Chapter title: **Stylized Caption.** ![13_image_4.png](13_image_4.png) ![13_image_5.png](13_image_5.png) Ground truth chapter (Hossain et al., 2019) Existing image captioning systems generate captions based on only the image content, which can also be called factual descriptions. They do not consider the stylized part of the text separately from other linguistic patterns. However, the stylized captions can be more expressive and attractive than only the flat description of an image. ... Gan et al. BIB003 proposed a novel image captioning system called StyleNet. This method can generate attractive captions, adding various styles. The architecture of this method consists of a CNN and a factored LSTM that can separate factual and style factors from the captions. It uses multitask sequence-to-sequence training BIB001 for identifying the style factors and then adds these factors at runtime to generate attractive captions. ... In our day-to-day conversations, communications, interpersonal relationships, and decision making, we use various stylized and nonfactual expressions such as emotions, pride, and shame. However, Mathews et al. BIB002 claimed that automatic image descriptions are missing these nonfactual aspects. Therefore, they proposed a method called SentiCap. This method can generate image descriptions with positive or negative sentiments. It introduces a novel switching RNN model that combines two CNN+RNNs running in parallel. In each time step, this switching model generates the probability of switching between two RNNs. One generates captions considering the factual words and the other considers the words with sentiments. It then takes inputs from the hidden states of both RNNs for generating captions. This method can generate captions successfully given the appropriate sentiments. 
Generated chapter The stylized aspect of image description is missing in current deep learning based image captioning systems. The reason is that these systems focus on the non-factual aspects of the written description, which are missing from the current systems. One such style is descriptions with emotions, which is common in everyday communication, and influences decision-making and interpersonal relationships. Chen et al. BIB002 proposed a model that automatically generates captions with positive or negative sentiments. They use a novel switching recurrent neural network with wordlevel regularization to produce emotional image captions using only 2000+ training sentences containing sentiments. Their model is evaluated with different automatic and crowd-sourcing metrics. In 84.6% of cases, the generated positive captions were judged as being at least as descriptive as the factual captions. Another work BIB003 proposed a novel framework named StyleNet to address the task of generating attractive visual captions for images and videos with different styles. The framework consists of two sets of data: factual image/video-caption paired data and stylized monolingual text data (e.g., romantic and humorous sentences). They show experimentally that StyleNet outperforms existing approaches for generating visual captioning with different visual styles, measured in both automatic Table 8: Example of a generated chapter and corresponding ground truth. All annotators rated the generated chapter comparable or superior to the ground truth w.r.t. relevance and *coherence*. Title: **A Survey on the New Generation of Deep Learning in** ![13_image_1.png](13_image_1.png) ![13_image_2.png](13_image_2.png) Image Processing ![13_image_3.png](13_image_3.png) In recent years, CNN has also made great success in image processing and object recognition. The strength of CNN lies in their shared weights. Weight sharing dramatically reduces the number of free parameters learned, thus to lower the memory requirements for running the network and allowing the training of more extensive, more powerful networks BIB002. A CNN consists of convolutional layers, pooling layers, normalization layers, and fully connected layers. At each layer, the input image X ∈ Rn×m is convolved with a set of K kernels Wk ∈ Rv×v, k = 1, 2, · · · , K and subsequently biases {bk ∈ R, k = 1, 2, · · · , K} are added, each generating a new feature map Xk by an element-wise non-linear transform σ(·). The same process is repeated for convolutional layer l, Xlk = σ Wlk ⊗ Xl−1 + b l k where symbol ' ⊗ ' denotes the discrete convolution operator, and its specific type of operation has a variety of forms, such as 'valid' convolution, 'same' convolution, 'extra' convolution, strided convolution, fractional-strided convolution, etc. Another essential layer of CNN is pooling, which is a form of non-linear down-sampling. Convolutional layers are typically alternated with pooling layers where pixel values of neighborhoods are aggregated using some permutation invariant function, usually the max or average operations, which provides another form of translation invariance BIB003-BIB001. S (l) k = Pooling X (l) k Finally, after several convolutional and max-pooling layers, the high-level reasoning in the neural network is done via fully connected layers, where weights are no longer shared. CNN is typically trained end-to-end in an entirely supervised manner. 
The significant reduction in the number of weights parameters and the translational invariance of the learned features contributes to the ability of CNN to be trained end-to-end. Generated chapter The convolutional neural network (CNN) is a special type of feedforward neural network proposed by Yann LeCun et al. BIB001 in 2012. CNN is a feed forward neural network that consists of a stack of convolution layers, pooling layers, and non-linearity layers. The convolution and pooling operations are performed at the same time in each layer of the CNN. The basic idea of the convolution operation is to convolve the input image with a set of filters at the beginning of the network. Then, the filters are applied to the image in order to extract the feature maps. The feature maps are then passed to the next convolution layer for further processing. Finally, the output is fed to the fully connected layer for classification. The CNN architecture is shown in Fig. 1. Fig. 2 shows the basic architecture of CNN. Fig. 3 shows the general block diagram of CNN, where convolution, max-pooling, average pooling, and fully connected layers are the basic building blocks of the architecture. The architecture of a typical CNN consists of three parts: input, hidden, and output layer. The input layer is used to extract features from the image, and the hidden layers are used Table 9: Example of a generated chapter and corresponding ground truth. All annotators rated the generated chapter comparable or inferior to the ground truth w.r.t. informativeness and *factuality*. A Survey of Sentiment Analysis Based on Transfer Learning Generated based on Liu et al. **(2019)** ## I. Introduction4 Sentiment Analysis (SA) is the process of analyzing people's opinions, attitudes, emotions, and emotions expressed in social media data, such as reviews, blogs, forum discussions, and news articles BIB01 - BIB02. Sentiment analysis is an important part of Information Retrieval (IR) and Natural Language Processing (NLP) BIB03. However, it is difficult to obtain a large amount of human-labeled data for Sentiment Analysis because of the huge amount of user-generated content on social media. Therefore, transfer learning has been proposed to overcome this problem and improve the performance of sentiment analysis. Transfer learning is a learning method that transfers knowledge from a source domain to a target domain BIB04. In other words, the target domain is different from the source domain but similar enough in some aspects. For example, if we have a dataset of reviews about a product, we can transfer the reviews about this product to the review about another product. On the other hand, if the dataset of a news event is different but similar in many aspects, then transfer learning can be used to solve the review classification problem. In this paper, we mainly focus on the transfer learning methods for sentiment analysis based on machine learning. Transfer ## Ii. The Related Methods In this section, we will introduce several related methods that are used to solve the problems in the field of sentiment analysis, including opinion mining BIB05, transfer learning and multi-task learning. These methods can be divided into two main categories: unsupervised learning methods and supervised learning methods. In this paper, we mainly introduce these methods because they have been widely used in sentiment analysis research. 
According to the taxonomy in BIB06, these methods are divided into three categories: dictionary-based, rule-based and feature-based methods. ## A. Traditional Sentiment Analysis Sentiment analysis is the process of analyzing opinions and sentiments expressed in natural language text. Sentiment analysis can be divided into two main categories, which are sentiment polarity and sentiment orientation. The former is the evaluation of positive or negative sentiments, and the latter is the interpretation of the sentiment of a word. In general, the polarity of a sentiment word can be positive, negative, or neutral. For example, the word "amazing" can be interpreted as expressingconstructive or negative opinion, and "good" and "bad" are interpreted asconstructive and negative opinions, respectively. On the other hand, the sentiment orientation can be used to indicate negative or positive sentiments. The polarity can be either positive (e.g., great, excellent, excellent) or negative BIB07, BIB08. The sentiment orientation of a sentence can be expressed by a single word or a set of words. For instance, the sentence "I like this camera, but it is not free of bugs" is negative because it expresses strong negative sentiment. The sentiment orientations of the words in a sentence are usually determined by the word co-occurrences in the sentence. Therefore, it is necessary to determine the semantic orientation of each word in ## B. Sentiment Analysis Based On Deep Learning In recent years, deep learning has achieved great success in many fields such as computer vision, natural language processing (NLP), speech recognition, and computer vision. The deep learning-based sentiment analysis based on deep neural networks (DNN) BIB09 - BIB10 has been proposed to solve the problems of sentiment classification and sentiment analysis. In this section, we will introduce the deep transfer learning based sentiment analysis methods in the field of NLP and deep learning. Deep transfer learning is a kind of deep neural network-based transfer learning method, which can transfer the knowledge learned from the source domain to the target domain through a set of labeled data. In general, transfer learning can be divided into two categories, i.e., inductive transfer learning and unsupervised transfer learning. - Inductive transfer learning: In this type of transfer learning, a neural network model is first trained with a large amount of unlabeled data, and then the neural network is fine-tuned with a small amount of data for a specific domain. After that, it is used to classify or predict the sentiment of a new target domain. The target domain contains a large number of different types of data, such as text, images, videos, and documents. 1) CNN-BASED MODELS Recently, CNNbased models have been widely used in NLP tasks, such as text classification BIB11 - BIB12, speech recognition , and optical image de-scattering. Compared with the traditional shallow neural network models, CNNs are able to capture the global structure of the input data. CNNs contain multiple convolutional layers, pooling layers, and fully connected layers, which can capture the local features in an end-to-end manner. Convolution and pooling operations in CNNs can be regarded as a kind of unsupervised feature learning method. In the field of NLP, CNN is one of the most successful models due to its ability to learn high-level abstractions from low-level image features. 
In recent years, CNN has achieved great success in computer vision and natural language processing (NLP) tasks, and has been successfully applied to many fields such as image classification, speech recognition, machine translation, and computer vision. However, due to the lack of transferability of CNNs to transfer learning tasks, most existing deep learningbased sentiment analysis models based on CNNs cannot be directly applied to sentiment analysis problems. ## 2) Rnn-Based Models Rnn-Based Models are a type of neural networks that are used for processing sequential data. RNNs are a special type of deep neural networks, in which the hidden units of the neural network are connected to each other in a manner similar to the way that neurons in the brain memorize information. In other words, the hidden unit of a neural network is a vector or a tensor, and the output of the hidden layer is a binary value indicating the strength of the relation between the input and output. To make use of RNN in sentiment analysis, some researchers have applied RNNbased models to sentiment analysis tasks, such as BIB13 - BIB14, and BIB15. In general, the architecture of these models is shown in Fig. 4. ## 3) Hybrid Neural Network Models The traditional neural network models, such as SVM, CNN, RNN, RBM, and RBM-NN, all have their pros and cons. Each model has its advantages and limitations. For instance, CNN has many layers of neurons, while RNN has only one layer of neurons. However, CNN and RNN have different levels of nonlinearity and therefore have different strengths and weaknesses. Combining these two kinds of neural networks can improve the performance of sentiment transfer learning models. In BIB16, the authors proposed a hierarchical attention network for text classification. The proposed model has a hierarchical structure that mirrors the hierarchical structure of documents and it has two levels of attention mechanisms applied at the word and sentence levels. The attention mechanism is applied to differentially attend differentially to more and less important content during the construction of the document representation. Experiments conducted on six large scale text classification tasks demonstrate that the proposed model outperforms previous methods by a substantial margin. BIB17 proposed a deep memory network for aspect level sentiment classification. This model explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. In this model, the importance degree and text representation are calculated with multiple computational layers, each of which is a ## C. Sentiment Analysis Based On Transfer Learning Transfer learning BIB04, BIB18 is a new branch of machine learning methods that aims to solve the problems where the training data and testing data are taken from the same or different domains. The difference between the source domain and the target domain is that the feature space and the data distribution characteristics of the source and target domains are the same. However, in some realworld situations, this assumption may not hold. Therefore, there are cases where training data are expensive or difficult to collect. There is a need to create high-performance learners trained with more easily obtained data from transfer learning. This methodology is referred to as transfer learning, which transfers knowledge from a source domain to a target domain BIB19. 
Transfer learning can be divided into two categories: inductive transfer learning and unsupervised transfer learning . ## 1) Parameter-Transfer Methods These methods are based on the assumption that the source domain and target domain are drawn from the same distribution, which means that the probability distributions of the source and target data are similar. The parameter-transformation based transfer learning methods can be divided into two categories: Parameter-based and model-based. In the parameter-based method, the parameters of the target domain and source domain are learned simultaneously. In other words, the target and source domains are treated as two different domains, and the model parameters are jointly optimized to improve the performance of transfer learning. In this method, a deep neural network model is first trained on the source data, and then the parameters are used to initialize the target model. After that, the model is fine-tuned on the target data to achieve the best transfer learning performance. This method is also known as stacked denoising autoencoder (SDA) BIB20, BIB21, marginalized SDA BIB22, and Universal Language Model Fine-Tune (ULMFiT) method BIB23. 1) STacked Denoising Autoencoders (SDAs): In the SDA model, the encoder and decoder are trained simultaneously. The encoder learns the ## 2) Instance-Transfer Methods The instance-transfer methods aim to transfer knowledge from a source domain to a target domain by using a small amount of labeled training data and large amount of unlabeled data in the target domain. The advantage of these methods is that the source domain data can be used to improve the performance of the target-side classifier. However, these methods may face the problem that the distribution of training data is different between the source and target domains. Therefore, it is necessary to find a balance between the training data distribution and the target distribution in order to achieve good transfer learning performance. The Instance-Transfer methods can be divided into two categories: instance-based methods and instancetracing methods. In the following, we will introduce these methods in detail. The first method is the instancebased method. In this method, the target data is firstly transformed into the source data, and then the target samples are used to train the target classifier by using the labeled target data. The second method is to use the labeled source data to initialize the training process of the new target domain, which is called the transfer learning method. For instance, in BIB24, a novel transfer learning framework called TrAdaBoost is proposed, which extends boosting-based ## 3) Feature-Representationtransfer Methods In Order To Bridge The gap between domains, feature-representation-based transfer methods can be used to transfer knowledge from the source domain to the target domain. In this kind of methods, the feature transformation matrix of the source and target domain can be obtained by mapping the feature space into a common latent space. Then, the classifiers trained on the source data can be easily applied to solve the target problem by using the common space BIB25, BIB26. Feature representation-based methods mainly include co-clustering methods, semi-supervised methods, and inductive transfer learning methods. The co- Clustering method is to find the co-occurrence patterns of words in the common latent representation space, and then the domain-independent words are used as the bridge between domains. 
In this method, the similarity between the feature spaces of the two domains is measured by computing the distance between the two feature spaces. The similarity measure can be based on the cosine similarity, the Kullback-Leibler divergence, or the Jaccard similarity. The Co-Clustering-based Transfer Learning (CCL) method is a semisupervised method that uses unlabeled target data and labeled source data 4) SUMMARY In this paper, we summarize the current state-of-the-art of transfer-based sentiment classification methods for sentiment transfer learning in the following three aspects: (1) classification methods, (2) clustering methods, and (3) finegrained transfer learning methods. Classification methods are mainly divided into two categories: supervised learning and unsupervised learning. Clustering methods are divided into co-clustering and hierarchical clustering. Hierarchical clustering is mainly used for dimensionality reduction and feature representation learning. Feature representation learning is used to bridge the distribution gap between different feature spaces. Transfer learning is divided into inductive and inductive transfer learning. Inductive transfer learning focuses on transferring knowledge from a source domain to a target domain. On the other hand, transfer learning can be divided into semi-supervised and supervised learning. Supervised learning is based on labeled data in the source domain and unlabeled data only in the target domain, and transfer learning is to transfer knowledge between domains. In addition, the methods based on co-occurrence matrix and spectral feature alignment (SFA) BIB27, TCT BIB28, ULMFiT, LSTM-CNN ## A. Cross-Domain Sentiment Analysis In order to solve the problem of cross-domain sentiment analysis, transfer learning has been widely used in recent years BIB26, BIB28, . The main idea of transfer learning is to build a bridge between the source domain and the target domain by transferring knowledge from a source domain to a target domain BIB15 - BIB12. Cross-domain transfer learning based sentiment analysis can be divided into two categories: (1) unsupervised transfer learning and (2) supervised transfer learning. In the following, we will introduce the two sub-tasks in more detail. ## References BIB01 Mohit Mertiya and Ashima Singh. (2016). Combining naive bayes and adjective analysis for sentiment detection on Twitter. BIB02 Erik Cambria. (2016). Affective Computing and Sentiment Analysis. BIB03 María del Pilar Salas-Zárate, José Medina-Moreira, Paul Javier Álvarez-Sagubay, Katty Lagos-Ortiz, Mario Andrés ParedesValverde, and Rafael Valencia-García. (2016). Sentiment Analysis and Trend Detection in Twitter. BIB04 Sinno Jialin Pan and Qiang Yang. (2010). A Survey on Transfer Learning. BIB05 E. Cambria, B. Schuller, Yunqing Xia, and C. Havasi. (2013). New Avenues in Opinion Mining and Sentiment Analysis. BIB06 Rui Xia, Feng Xu, Chengqing Zong, Qianmu Li, Yong Qi, and Tao Li. (2015). Dual Sentiment Analysis: Considering Two Sides of One Review. BIB07 Peter D. Turney and Michael L. Littman. (2003). Measuring praise and criticism: Inference of semantic orientation from association. BIB08 Xiaoxu Fei, Huizhen Wang, and Jingbo Zhu. (2010). Sentiment word identification using the maximum entropy model. BIB09 Huimin Lu, Yujie Li, Min Chen, Hyoungseop Kim, and Seiichi Serikawa. (2017). Brain Intelligence: Go Beyond Artificial Intelligence. BIB10 Huimin Lu, Yujie Li, Shenglin Mu, Dong Wang, Hyoungseop Kim, and Seiichi Serikawa. (2018). 
Motor Anomaly Detection for Unmanned Aerial Vehicles Using Reinforcement Learning. BIB11 Rie Johnson and Tong Zhang. (2014). Effective Use of Word Order for Text Categorization with Convolutional Neural Networks. BIB12 Alexis Conneau, Holger Schwenk, Loïc Barrault, and Yann LeCun. (2017). Very Deep Convolutional Networks for Text Classification. BIB13 Duyu Tang, Bing Qin, and Ting Liu. (2015). Document Modeling with Gated Recurrent Neural Network for Sentiment Classification. BIB14 Min-Yuh Day and Yue-Da Lin. (2017). Deep Learning for Sentiment Analysis on Google Play Consumer Review. BIB15 Rie Johnson and Tong Zhang. (2016). Supervised and Semi-Supervised Text Categorization using LSTM for Region Embeddings. BIB16 Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. (2016). Hierarchical Attention Networks for Document Classification. BIB17 Duyu Tang, Bing Qin, and Ting Liu. (2016). Aspect Level Sentiment Classification with Deep Memory Network. BIB18 Karl Weiss, Taghi M. Khoshgoftaar, and DingDing Wang. (2016). A survey of transfer learning. BIB19 Diane Cook, Kyle D. Feuz, and Narayanan C. Krishnan. (2013). Transfer learning for activity recognition: a survey. BIB20 Xavier Glorot, Antoine Bordes, and Yoshua Bengio. (2011). Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach. BIB21 Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. (2012). Marginalized Denoising Autoencoders for Domain Adaptation. BIB22 Miao Sun, Qi Tan, Runwei Ding, and Hong Liu. (2014). Cross-domain sentiment classification using deep learning approach. BIB23 Jeremy Howard and Sebastian Ruder. (2018). Universal Language Model Fine-tuning for Text Classification. BIB24 Wenyuan Dai, Qiang Yang, Gui-Rong Xue, and Yong Yu. (2007). Boosting for transfer learning. BIB25 John Blitzer, Ryan McDonald, and Fernando Pereira. (2006). Domain Adaptation With Structural Correspondence Learning. BIB26 Sinno Jialin Pan, Xiaochuan Ni, JianTao Sun, Qiang Yang, and Zheng Chen. (2010). Cross-domain sentiment classification via spectral feature alignment. BIB27 Joey Tianyi Zhou, Ivor W. Tsang, Sinno Jialin Pan, and Mingkui Tan. (2014). Heterogeneous Domain Adaptation for Multiple Classes. BIB28 Guangyou Zhou, Yin Zhou, Xiyue Guo, Xinhui Tu, and Tingting He. (2015). Crossdomain sentiment classification via topical correspondence transfer. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. Did you discuss any potential risks of your work? Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And 4 ✓ B1. Did you cite the creators of artifacts you used? Section 3, 4, and Ethics Statement ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Ethics Statement ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Ethics Statement ✗ B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? In the dataset construction, as we used the dataset of academic papers, it did not contain any personally identifiable information or offensive content. In the human evaluation, as we received only the evaluation scores from the participants, we did not handle any personally identifiable information or offensive content. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 3 And 5 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix C ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix C ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix C ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Ethics Statement ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 3, 5, and Appendix C
chang-etal-2023-revisiting
Revisiting Sample Size Determination in Natural Language Understanding
https://aclanthology.org/2023.findings-acl.419
Knowing exactly how many data points need to be labeled to achieve a certain model performance is a hugely beneficial step towards reducing the overall budgets for annotation. It pertains to both active learning and traditional data annotation, and is particularly beneficial for low-resource scenarios. Nevertheless, it remains a largely under-explored area of research in NLP. We therefore explored various techniques for estimating the training sample size necessary to achieve a targeted performance value. We derived a simple yet effective approach to predict the maximum achievable model performance based on a small amount of training samples, which serves as an early indicator during data annotation for data quality and sample size determination. We performed ablation studies on four language understanding tasks, and showed that the proposed approach allows us to forecast model performance within a small margin of mean absolute error (~0.9%) with only 10% of the data.
# Revisiting Sample Size Determination In Natural Language Understanding Ernie Chang†∗, Muhammad Hassan Rashid‡∗**, Pin-Jie Lin**‡∗, Changsheng Zhao†, Vera Demberg‡, **Yangyang Shi**†and **Vikas Chandra**† †Reality Labs, Meta Inc. ‡Saarland Informatics Campus, Saarland University, Germany {erniecyc, cszhao, yyshi, vchandra}@meta.com hassanrashid725@gmail.com pinjie@lst.uni-saarland.de vera@coli.uni-saarland.de ## Abstract Knowing exactly how many data points need to be labeled to achieve a certain model performance is a hugely beneficial step towards reducing the overall budgets for annotation. It pertains to both active learning and traditional data annotation, and is particularly beneficial for low resource scenarios. Nevertheless, it remains a largely under-explored area of research in NLP. We therefore explored various techniques for estimating the training sample size necessary to achieve a targeted performance value. We derived a simple yet effective approach to predict the maximum achievable model performance based on small amount of training samples – which serves as an early indicator during data annotation for data quality and sample size determination. We performed ablation studies on four language understanding tasks, and showed that the proposed approach allows us to forecast model performance within a small margin of mean absolute error (∽ 0.9%) with only 10% data1. ## 1 Introduction Labeled data play an important role in creating performant machine learning models, which makes data annotation a fundamental process for any natural language application pipeline (Lewis and Catlett, 1994). Recent work has sought to reduce the annotation costs through the use of active learning (Ducoffe and Precioso, 2018; Margatina et al., 2021) and data sampling (Sener and Savarese, 2018; Coleman et al., 2019; Killamsetty et al., 2021a,b). Indeed, these approaches are shown to be effective in identifying or constructing data subsets needed to achieve a competitive model performance. For instance, the active learning paradigm adds new data iteratively to the existing set before ∗ These authors contributed equally to this work. 1Our code is available at: https://github.com/ pjlintw/sample-size. model retraining (Agarwal et al., 2020; Margatina et al., 2021), improving upon the traditional human annotation pipeline that obtains the entire labeled set all at once. Nevertheless, the data labeling process typically annotates as much data as the annotation budget permits, or by clearly defined stopping criteria to terminate the labeling process. Unfortunately, this is usually challenging as annotators do not have the knowledge of the effect of added labels to model performance nor how much more data is needed to arrive at the desired model generalizability (Killamsetty et al., 2020). The stopping condition is in fact tied to the quality of data samples w.r.t. model parameters (Hu et al., 2021), which influences the effective sample size2, and it is then beneficial to obtain an approximation of the expected performance (Vlachos, 2008; Olsson and Tomanek, 2009a; Zhu et al., 2010; Ishibashi and Hino, 2020). Therefore, knowing the approximate amount of training data needed for this particular performance would serve as an useful knowledge not only for deciding when to stop adding labeled data, but also as an early indication for the data quality. 
For instance, by having early label quality signals, we can decide between two different types of annotation, or even between two pools of annotators with different expertise. To this end, we explored the relationship between *data sample size* and *model performance* in the context of language understanding via learning curve modeling, which defines model performance as a function of dataset sizes. By modeling this relationship in low resource settings, we obtain useful early signals with approximated accuracies for any given the labeled set, which can provide an idea for the sample size and data quality (Olsson and Tomanek, 2009b; Figueroa et al., 2012). Previous studies have shown that nonlinear weighted 2It is the size of datasets which could have been achieved by an effective unweighted random sample (Guo et al., 2022). curve fitting methods such as inverse power laws or exponential functions can provide decent approximations of the empirical predictive performances (Frey and Fisher, 1999; Figueroa et al., 2012). We thus put forward an ensemble of these functions which we showed to display a consistently highly correlated behavior across four language understanding benchmarks and with as little as 10% of the entire training set. This work makes the following contributions: 1. We revisit the task of sample size determination in four natural language understanding benchmarks and empirically explore the correlation strengths of several successful techniques. 2. Based on our findings, we propose an ENSEM-BLE function and demonstrated across several benchmarks and low resource settings that the ensemble function is consistently providing a high correlation with the empirical learning curve plots. ## 2 Background Our method is a sample size determination technique that helps to design annotation projects by determining the necessary sample size. Previous methods have focused on identifying the sample size required to reach a specific target performance, such as a high correlation coefficient (Beal, 1989; Stalbovskaya et al., 2007; Beal, 1989), which often involves predicting the sample size necessary for a classifier to attain a specific accuracy level (Fukunaga and Hayes, 1989). There are two main approaches for predicting the sample size needed to achieve a particular classifier performance: (1) Dobbin et al. (2008) present a model-based method for predicting the number of samples required for classifying microarray data. (2) A more general approach involves fitting a classifier's learning curve to inverse power law models (Figueroa et al., 2012). Examples of this approach include algorithms proposed by Mukherjee et al. (2003); Boonyanunta and Zeephongsekul (2004); Last (2007). ## 3 The Approach Learning Curve Modeling. A learning curve is a graphical representation of how a classifier's performance changes as the size of the training set increases. The curve typically has three sections: an initial section where performance improves rapidly with increasing training set size, a middle section where the rate of improvement slows down, and a final section where the classifier reaches its maximum performance and further increases in training set size do not lead to significant improvements. This relationship can be quantified using a set of data points, each of which represents the expected performance of the classifier Eacc on a particular training set size Dk. 
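As an illustration of how such (Dk, Eacc) points are typically collected, the sketch below trains one classifier per nested random subset of growing size and records its held-out accuracy. The scikit-learn classifier, the synthetic data, and the 1% to 10% size grid are stand-ins chosen only to keep the example runnable; they are not the Transformer models or NLU benchmarks used in this paper.

```python
# A minimal sketch (not the paper's code): collect learning-curve points
# (D_k, E_acc) by training the same classifier on nested subsets of
# increasing size and recording held-out accuracy for each size.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

order = np.random.RandomState(0).permutation(len(X_tr))  # fixed order -> nested subsets

curve_points = []
for frac in np.arange(0.01, 0.11, 0.01):                 # 1% ... 10% of the training data
    k = int(frac * len(X_tr))
    idx = order[:k]
    clf = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    curve_points.append((k, clf.score(X_te, y_te)))      # one (D_k, E_acc) pair
```

Each (Dk, Eacc) pair collected this way is one point on the empirical learning curve that the extrapolating functions introduced below are fit to.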
These data points can be plotted to create the learning curve, which can help to understand the behavior of the classifier and inform decision-making about how much training data is needed to achieve a desired performance level. Task Description. Given a downstream classification task with N*total* data points, a learning curve model F predicts the expected performance Eacc when a classifier trained on the an observed range of training set size (Dk; k >= N). The empirical learning curve is assessed by the parametric models for the learning algorithm performance extrapolation. In our settings, we set k << N*total* to simulate practical settings, where few data points consisting of (Eacc, DK) are to be obtained. Types of Extrapolations. Here, we study different forms of learning curve models with few learnable parameters that have been proven as simple yet effective. The simplest type of learning curve model *exponential function* (EXP) only introduces two parameters a and b to fit the exponent behavior of learning curve (Frey and Fisher, 1999). The second form, *Inverse Power Law function* (INVERSE), fits the inverse power law (Figueroa et al., 2012) and has three parameters. The third form uses a function from the power law family - Power4 function (POW4) (Kolachina et al., 2012) with four parameters. Lastly, we propose to combine all functions into one (ENSEMBLE) so that it has all their characteristics in order to make it more robust across benchmarks. Table 1 shows the formulae of our investigated extrapolating functions. | EXTRAPOLATING FUNCTIONS | FORMULA | |---------------------------|--------------------| | EXP (A) | a · Nb | | INVERSE (B) | (1 − a) − b · Nc | | POW4 (C) | a − (b · N + c) −d | | ENSEMBLE (A+B+C) | − | Table 1: Overview of extrapolating functions ## 4 Experimental Settings We study four NLU tasks: (1) IMDB (Maas et al., 2011) is a binary classification dataset (25K/– /25K)3 where model predicts the sentiment (positive/negative) for movie reviews from IMDB; (2) SST2 (Socher et al., 2013) is also a sentiment classification datatset (67K/0.8K/1.8K) containing reviews of different movies and since the model predicts if the review is positive or negative, it also falls in the category of binary classification; (3) AG NEWS is a multi-class classification dataset (120K/–/7.6K) containing texts from different news where the model predicts whether the news text is about sports, science/technology, world or business from the four different classes. We also consider one other multi-class classification task, (4) DBPEDIA dataset (560K/–/70K) , since it could help us in testing the robustness of the methods used in our experiments. Configs. To investigate how changes in data size affect the predictiveness of the learning curves, under the assumption that the model structure and settings remain unchanged, we perform all experiments using a transformer model (Vaswani et al., 2017) and average the results over 3 initialization runs. The embedding and hidden layer dimensions are 1000 and 1820; and we use a 6-layer encoder with 4 multi-heads, and the dropout is 0.2. To find the parameters of learning curve models, we consider unweighted and for the gradient descent and non-linear least squares optimizers. The Adam algorithm (Kingma and Ba, 2014) was used as the optimizer with learning rate of 1e-5 and ReLU was used as the activation function. The crossentropy objective was used for all classification benchmarks, and we select the models using loss values. 
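Before the remaining training details, a minimal sketch of this curve-fitting step is shown below: the EXP, INVERSE, and POW4 forms of Table 1 are fit to a handful of (Dk, Eacc) points with (optionally weighted) non-linear least squares and then extrapolated to a larger sample size. The placeholder data points, initial guesses, the sigma-based weighting, and the numerical guard in POW4 are illustrative assumptions, and the authors' ENSEMBLE combination is not reproduced because its exact form is not given in this excerpt.

```python
# A sketch (under stated assumptions) of fitting the Table 1 extrapolating
# functions with non-linear least squares and extrapolating them.
import numpy as np
from scipy.optimize import curve_fit

def exp_fn(n, a, b):                      # EXP:     a * N^b
    return a * np.power(n, b)

def inverse_fn(n, a, b, c):               # INVERSE: (1 - a) - b * N^c
    return (1.0 - a) - b * np.power(n, c)

def pow4_fn(n, a, b, c, d):               # POW4:    a - (b * N + c)^(-d)
    base = np.maximum(b * n + c, 1e-8)    # guard against a non-positive base
    return a - np.power(base, -d)

# Placeholder (D_k, E_acc) points; in practice they come from training runs
# such as the sketch above (these particular values are illustrative only).
sizes = np.array([40, 80, 120, 160, 200, 240, 280, 320, 360, 400], dtype=float)
accs = np.array([0.62, 0.68, 0.72, 0.75, 0.77, 0.79, 0.80, 0.81, 0.82, 0.83])

# Weight later (larger) subsets more heavily: curve_fit treats `sigma` as a
# per-point uncertainty, so a smaller sigma means a larger weight.
sigma = sizes.max() / sizes

for name, fn, p0 in [("EXP", exp_fn, (0.5, 0.1)),
                     ("INVERSE", inverse_fn, (0.1, 1.0, -0.5)),
                     ("POW4", pow4_fn, (0.9, 0.01, 1.0, 0.5))]:
    params, _ = curve_fit(fn, sizes, accs, p0=p0, sigma=sigma, maxfev=20000)
    print(name, "-> predicted accuracy at N=4000:", round(float(fn(4000.0, *params)), 3))
```

In practice the fit quality is then summarized as the mean absolute error against held-out larger training-set sizes, as described in the Evaluation paragraph that follows.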
Finally, we chose a batch size of 8 with 200 number of epochs. Evaluation. We use the aforementioned functions: EXP, INVERSE, POW4 and ENSEMBLE for fitting the empirical learning curve. For each dataset, we select training set sizes ranging from 1% to 10% data sizes at an interval of 1%. The learning curve testsets were created with the data splits in the range [55, 100] at 5% interval by training the classifier, and obtaining the testset4 performance for each corresponding data split. Therefore, we collect the accuracies against different sample sizes and report the mean absolute error (MAE) as the evaluation metric for learning curve modeling. ## 5 Results And Analysis We present results of ensemble method for learning curve modeling on the NLU benchmarks. ## 5.1 Main Results Figure 1 demonstrates that by using only 10% of the data for learning curve modeling, ENSEMBLE is able to effectively predict model performance within a 0.9% margin of the actual model performance. Moreover, we observe the same trend across all four benchmarks consisting of different training set sizes (i.e. ranging from 25K to 250K) and varying number of classification classes (i.e. ranging from 2 to 14), see the appendix A for remaining figures. Our result shows that the proposed approach is not confined by the classification types and sample sizes. Table 2 shows the saturated points of the learning curve when the performance improvement is less than a threshold α = 0.2 - we found that the predicted performance with only 19% data is within 2.44 accuracy points from the trained model performance for IMDB. Another key observation is that the size (%) needed to predict a low L1 distance increases as the number of classification classes goes up, which indicates that task difficulty does influence the ease of extrapolation. An example is that AG NEWS requires up to 51% to predict a low L1 distance. Next, we perform further ablation studies to investigate the effect of sample size, types of non-linear functions used, or the effect of data weighting. | BENCHMARK | CLS (#N) | SIZE (%) | SIZE (#N) | L1↓ | 100% | |-------------|------------|------------|-------------|-------|--------| | α = 0.2 | | | | | | | IMDB | 2 | 36% | 6, 300 | 2.44 | 17K | | SST2 | 2 | 19% | 8, 958 | 5.57 | 47K | | AG NEWS | 4 | 51% | 42, 840 | 2.6 | 84K | | DBPEDIA | 14 | 51% | 199, 920 | 2.39 | 392K | ## 5.2 Ablation Study Effect of sample size. In Figure 1, we study the correlation between sample sizes and the absolute ![3_image_0.png](3_image_0.png) mean error between the learning curve model and empirical model performance trend. Surprisingly, we discovered by having more samples does not necessarily help with modeling a better learning curve5, and that with only 10% data to build the (Dk, Eacc) data points is sufficient to obtain rather small errors across all four benchmarks. Types of learning curve functions. We are also interested in seeing how each of the non-linear learning curve function fare against each other in simpler settings. To this end, we used up to 10% data to model the learning curves and obtained their respective mean absolute error values. In Figure 1, we present this comparison where we showed that on IMDB and SST2, the ENSEMBLE function consistently fit best against the empirical data. We observed a similar trend across other benchmark DBPEDIA with the exception of AG NEWS. We placed the plot for AG NEWS in appendix A.3. Influence of data weighting. 
Previous work (Paul et al., 2021; Guo et al., 2022) has found that not all data points are equally important in terms of curve fitting. In fact, data points at a later phase corresponding to more samples are to be given more weight compared to earlier points. We thus investigate this phenomenon in the context of our benchmark, and we observed this to be true anecdotally. The detailed result can be found in Appendix A.2. The reason for this is that the more data samples there are, the more closely they resemble the entire training set, and this makes their signals a better estimation of a point on the actual learning curve. Another perspective is that the more data samples are used, the less the effect of random sampling on the performance, which affects model performance in extremely low resource scenarios. | FUNCTION TYPE | NON-LINEAR LEAST SQUARES UNWEIGHTED WEIGHTED | | |-----------------|------------------------------------------------|---------| | EXP | 0.0417 | 0.0244 | | INV | 0.00777 | 0.00442 | | POW4 | 0.00795 | 0.00795 | ## 6 Conclusions And Future Works In this work, we investigated techniques for estimating the amount of training data needed to achieve a target performance in four natural language understanding benchmarks. We demonstrated that our approach allows for accurate prediction of model performance using only a small portion of the data, which can be useful in scenarios with limited resources. Nevertheless, we also recognize the limitation in our current study. For instance, we did not explore sampling techniques other than random sampling; while recent works (Yuan et al., 2020; Paul et al., 2021; Guo et al., 2022) have shown promising directions in data sampling that outperforms random sampling. Another interesting 5We showed this result in the Appendix A.5. direction is to explore the model architecture's influence on generalizability, and thus the learning curve, which we left for future works. ## Limitations While the effectiveness of the expressive learning curve in settings with limited data has been demonstrated, it is uncertain if this success can be replicated in more complex natural language understanding tasks, such as question answering or tasks that involve a large amount of data. Furthermore, it is assumed that all data samples have the same impact on the model's performance. However, the actual performance of the model may vary based on the method used to select the data or the specific set of tasks being performed, e.g., coreset selection. Similarly, the quality of the labels used for the data can also play a significant role in predicting the performance of the model. Overall, we plan to further investigate these questions and explore them in future studies. ## Ethics Statement We address the efficiency of data annotation by investigating learning curves to estimate the necessary training sample size to reach a desired model performance. However, it is imperative to take into consideration the potential biases that may exist in the model predictions when utilizing a reduced amount of labeled data in the system construction process. Furthermore, when addressing complex tasks such as machine translation and text summarization, it is essential to guarantee the factuality of output generated by the system trained with the suggested data sample size. ## References Sharat Agarwal, Himanshu Arora, Saket Anand, and Chetan Arora. 2020. Contextual diversity for active learning. In *ECCV*, pages 137–153. Springer. S L Beal. 1989. 
Sample size determination for confidence intervals on the population mean and on the difference between two population means. *Biometrics*, 45(3):969–977. Natthaphan Boonyanunta and Panlop Zeephongsekul. 2004. Predicting the relationship between the size of training sample and the predictive power of classifiers. *Knowledge-Based Intelligent Information and* Engineering Systems, 3215:529–535. Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, and Matei Zaharia. 2019. Selection via proxy: Efficient data selection for deep learning. arXiv preprint arXiv:1906.11829. Kevin K Dobbin, Yingdong Zhao, and Richard M Simon. 2008. How large a training set is needed to develop a classifier for microarray data? *Clin. Cancer Res.*, 14(1):108–114. Melanie Ducoffe and Frederic Precioso. 2018. Adversarial active learning for deep networks: a margin based approach. *arXiv preprint arXiv:1802.09841*. Rosa L Figueroa, Qing Zeng-Treitler, Sasikiran Kandula, and Long H Ngo. 2012. Predicting sample size required for classification performance. *BMC Med* Inform Decis Mak, 12. Lewis J. Frey and Douglas H. Fisher. 1999. Modeling decision tree performance with the power law. In Proceedings of the Seventh International Workshop on Artificial Intelligence and Statistics, volume R2 of Proceedings of Machine Learning Research. PMLR. Reissued by PMLR on 20 August 2020. K Fukunaga and R R Hayes. 1989. Effects of sample size in classifier design. *IEEE Trans. Pattern Anal.* Mach. Intell., 11(8):873–885. Chengcheng Guo, Bo Zhao, and Yanbing Bai. 2022. Deepcore: A comprehensive library for coreset selection in deep learning. Xia Hu, Lingyang Chu, Jian Pei, Weiqing Liu, and Jiang Bian. 2021. Model complexity of deep learning: A survey. *Knowledge and Information Systems*, 63(10):2585–2619. Hideaki Ishibashi and Hideitsu Hino. 2020. Stopping criterion for active learning based on deterministic generalization bounds. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 386–397. PMLR. Krishnateja Killamsetty, S Durga, Ganesh Ramakrishnan, Abir De, and Rishabh Iyer. 2021a. Grad-match: Gradient matching based data subset selection for efficient deep model training. In *ICML*, pages 5464– 5474. KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, and Rishabh K. Iyer. 2020. GLISTER: generalization based data subset selection for efficient and robust learning. *CoRR*, abs/2012.10630. Krishnateja Killamsetty, Xujiang Zhao, Feng Chen, and Rishabh Iyer. 2021b. Retrieve: Coreset selection for efficient and robust semi-supervised learning. arXiv preprint arXiv:2106.07760. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Prasanth Kolachina, Nicola Cancedda, Marc Dymetman, and Sriram Venkatapathy. 2012. Prediction of learning curves in machine translation. In *Proceedings of the 50th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 22–30, Jeju Island, Korea. Association for Computational Linguistics. Mark Last. 2007. Predicting and optimizing classifier utility with the power law. *Seventh IEEE International Conference on Data Mining Workshops*, pages 219–224. David D Lewis and Jason Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In *Machine learning proceedings 1994*, pages 148–156. Elsevier. 
Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pages 142–150. Katerina Margatina, Giorgos Vernikos, Loïc Barrault, and Nikolaos Aletras. 2021. Active learning by acquiring contrastive examples. *arXiv preprint* arXiv:2109.03764. Sayan Mukherjee, Pablo Tamayo, Simon Rogers, Ryan Rifkin, Anna Engle, Colin Campbell, Todd R Golub, and Jill P Mesirov. 2003. Estimating dataset size requirements for classifying dna microarray data. *Comput Biol*, 10:119–142. Fredrik Olsson and Katrin Tomanek. 2009a. An intrinsic stopping criterion for committee-based active learning. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 138–146, Boulder, Colorado. Association for Computational Linguistics. Fredrik Olsson and Katrin Tomanek. 2009b. An intrinsic stopping criterion for committee-based active learning. *Proceedings of the Thirteenth Conference on Computational Natural Language Learning*, pages 138–146. Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. 2021. Deep learning on a data diet: Finding important examples early in training. *Advances* in Neural Information Processing Systems, 34:20596– 20607. Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In *ICLR*. Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on* Empirical Methods in Natural Language Processing, pages 1631–1642. Viktoriya Stalbovskaya, Brahim Hamadicharef, and Emmanuel C Ifeachor. 2007. Sample size determination using ROC analysis. In 3rd International Conference on Computational Intelligence in Medicine and Healthcare (CIMED2007). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *CoRR*, abs/1706.03762. Andreas Vlachos. 2008. A stopping criterion for active learning. *Computer Speech and Language*, 22(3):295–312. Michelle Yuan, Hsuan-Tien Lin, and Jordan BoydGraber. 2020. Cold-start active learning through selfsupervised language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7935–7948, Online. Association for Computational Linguistics. Jingbo Zhu, Huizhen Wang, Eduard Hovy, and Matthew Ma. 2010. Confidence-based stopping criteria for active learning for data annotation. *ACM Trans. Speech* Lang. Process., 6(3). ## A Detailed Results A.1 Predicting The Required Data Size Table 4 presents the results of required data size prediction using threshold α = 0.1 and α = 0.3. 
| BENCHMARK | CLS (#) | SIZE (%) | SIZE (#N) | L1↓ | 100% | |-------------|-----------|------------|-------------|-------|--------| | α = 0.1 | | | | | | | IMDB | 2 | 19% | 16, 800 | 6.56 | 17K | | SST2 | 2 | 8% | 25, 458 | 8.27 | 47K | | AG NEWS | 4 | 28% | 82, 320, | 2.96 | 84K | | DBPEDIA | 14 | 27% | 384, 160 | 3.44 | 392K | | α = 0.3 | | | | | | | IMDB | 2 | 96% | 3, 325 | 5.84 | 17K | | SST2 | 2 | 54% | 3, 772 | 0.704 | 47K | | AG NEWS | 4 | 98% | 23, 521 | 9.9 | 84K | | DBPEDIA | 14 | 98% | 105, 840 | 9.68 | 392K | ## A.2 Data Weighting We apply data weighting on three extrapolating functions using gradient decent methods in 5. | EXTRAPOLATING | GRADIENT DESCENT | | |-----------------|--------------------|--------| | UNWEIGHTED | WEIGHTED | | | EXP | 0.0417 | 0.0342 | | INV | 0.0706 | 0.0519 | | POW4 | 0.0979 | 0.0652 | Table 5: Better curve fitting when weighting data points at latter phase. We examine the effectiveness of weighting data size on the exponential (EXP), inverse power law (INV), power4 (POW4) function using gradient decent method. The learning curves fit on 5%, 10%, 25% and 50% data sizes of IMDB and is evaluated on testing sample with mean absolute error (MAE). ## A.3 Learning Curve On 10% Data Sizes Of Ag News Figure 2 shows the learning curves fitting on 10% data sizes of AG NEWS dataset. ## A.4 Learning Curve On 10% Data Sizes Of Dbpedia Figure 3 shows the learning curves fitting on 10% data sizes of DBPEDIA dataset. ## A.5 Effect Of Sample Sizes For Learning Curve Fitting We examined the relationship between sample sizes and the difference in mean absolute error ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) (MAE) between the predicted and actual performance trends across four benchmarks. Table 6 showed MAEs when ENSEMBLE fitting on 50% and 10% of data respectively. We observed that having more samples does not necessarily lead to a better model and that using only 10% resulted in smaller MAEs on all four benchmarks. Therefore, we select 10% of data points for learning curve modeling. BENCHMARKSAMPLE SIZES 50% 10% IMDB 0.0458 **0.00961** SST2 0.0299 **0.0132** AG NEWS 0.0704 **0.0209** DBPEDIA 0.0734 **0.0158** Table 6: Learning Curve Fitting on 50% and 10% data size respectively. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 (after conclusions) ✓ A2. Did you discuss any potential risks of your work? 8 (ethics statement) ✓ A3. Do the abstract and introduction summarize the paper's main claims? 0 and 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 4 ✓ B1. Did you cite the creators of artifacts you used? 4 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We adopted widely-used datasets for our investigation. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. 
We do not collect data and we adopted widely-used datasets for our investigation. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. described in section 3 ## C ✓ **Did You Run Computational Experiments?** 4 And 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 and 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhao-etal-2023-transesc
TransESC: Smoothing Emotional Support Conversation via Turn-Level State Transition
https://aclanthology.org/2023.findings-acl.420
Emotional Support Conversation (ESC) is an emerging and challenging task with the goal of reducing the emotional distress of people. Previous attempts fail to maintain smooth transitions between utterances in ESC because they do not grasp the fine-grained transition information at each dialogue turn. To solve this problem, we propose to take into account turn-level state Transitions of ESC (TransESC) from three perspectives, including semantics transition, strategy transition and emotion transition, to drive the conversation in a smooth and natural way. Specifically, we construct a state transition graph in a two-step manner, named transit-then-interact, to grasp these three types of turn-level transition information. Finally, the transition information is injected into a transition-aware decoder to generate more engaging responses. Both automatic and human evaluations on the benchmark dataset demonstrate the superiority of TransESC in generating smoother and more effective supportive responses. Our source code will be publicly available.
# Transesc: Smoothing Emotional Support Conversation Via Turn-Level State Transition Weixiang Zhao, Yanyan Zhao∗**, Shilong Wang, Bing Qin** Research Center for Social Computing and Information Retrieval Harbin Institute of Technology, China {wxzhao, yyzhao, shilongwang, qinb}@ir.hit.edu.cn ## Abstract Emotion Support Conversation (ESC) is an emerging and challenging task with the goal of reducing the emotional distress of people. Previous attempts fail to maintain smooth transitions between utterances in ESC because they ignore to grasp the fine-grained transition information at each dialogue turn. To solve this problem, we propose to take into account turn-level state **Trans**itions of ESC (**TransESC**) from three perspectives, including semantics transition, strategy transition and emotion transition, to drive the conversation in a smooth and natural way. Specifically, we construct the state transition graph with a two-step way, named transit-then-interact, to grasp such three types of turn-level transition information. Finally, they are injected into the transitionaware decoder to generate more engaging responses. Both automatic and human evaluations on the benchmark dataset demonstrate the superiority of TransESC to generate more smooth and effective supportive responses. Our source code is available at https:// github.com/circle-hit/TransESC. ## 1 Introduction Emotional Support Conversation (ESC) is a goaldirected task which aims at reducing individuals' emotional distress and bringing about modifications in the psychological states of them. It is a desirable and critical capacity that an engaging chatbot is expected to have and has potential applications in several areas such as mental health support, customer service platform, etc. Different from the emotional (Zhou et al., 2018) and empathetic (Rashkin et al., 2019) conversation, ESC is always of long turns, which requires skillful conversation procedures and support strategies to achieve the goal. For example, as shown in Figure 1, the supporter should firstly explore the situation to identify the problems faced by the seeker, and ∗Corresponding author ![0_image_0.png](0_image_0.png) then try to comfort him. In the end, helpful suggestions are provided to help the seeker get rid of the tough. Intuitively, for such a complex and challenging task, a question is left: *how to maintain smooth transitions between utterances from* different procedures and drive the conversation in a natural way? Previous works (Liu et al., 2021; Peng et al., 2022; Tu et al., 2022) fail to deal with this issue because they treat the dialogue history as a long sequence, ignoring to grasp the fine-grained transition information at each dialogue turn. We argue that considering such turn-level transition information plays the crucial role in achieving effective ESC, navigating the conversation towards the expected goal to reduce the seeker's distress in a smooth way. To achieve this, we model the transition information in ESC from three perspectives and refer to each one of them as a state. First, it is a common phenomena that, even focusing on the same topic, the help seeker may tell different aspects or meanings as the conversation goes. We refer to it as **semantics transition** and take the example in Figure 1. To begin with, the help seeker feels sad to break up with the partner and does not know the reason (e.g. sad, walked out, *struck me*). 
After receiving the warm and skillful emotional support from the supporter, he is relieved and encouraged to move forward (e.g. *agree*, worth, *move on*). Thus, to fully comprehend the dialogue content with the goal of achieving effective emotional support, it is crucial to grasp such finegrained semantic changes at each dialogue turn. Second, the timing to adopt proper support strategies constitutes another important aspect to achieve effective emotional support. In Figure 1, the supporter attempts to understand the seeker's problem via a *Question* and comfort him by *Reflection of feelings*. And the emotional support ends with the strategy *Providing Suggestion* to help the seeker get through the tough. Such flexible combination and dependencies of different strategies forms the **strategy transition** in ESC, driving the conversation in the more natural and smooth way to solve the dilemma faced by the seeker. Finally, it is also of vital importance to track the emotional state of the seeker as conversation develops. The seeker in Figure 1 comes with a bad mood and suffers from the *tough* that his partner chooses to leave. As the ESC goes, his emotional state is changed and becomes *calm down* to move on. Grasping such **emotion transition** can provide the supporter clear signals to apply proper strategies and offer immediate feedbacks to be aware of the effectiveness of the emotional support.. In this paper, in order to maintain smooth transitions between utterances in ESC and drive the conversation in a natural way, we propose to take into account turn-level state **Trans**itions of ESC (**TransESC**), including semantics transition, strategy transition and emotion transition. To be more specific, we construct the state transition graph for the process of emotional support. Each node consists of three types of states, representing semantics state, strategy state and emotion state of the seeker or the supporter at each dialogue turn. And seven types of edges form the path for information flow. Then we devise a two-step way, called transit-then-interact, to explicitly perform state transitions and update each node representation. During this process, ESC is smoothed through turn-level supervision signal that keywords of each utterance, adopted strategies by the support and immediate emotional states of the seeker are predicted by the corresponding state representations at each turn. Finally, we inject the obtained three transition information into the decoder to generate more engaging and effective supportive response. The main contributions of this work are summarized as follows: - We propose to smooth emotional support conversation via turn-level state transitions, including semantics transition, strategy transition and emotion transition. - We devise a novel model TransESC to explicitly transit, interact and inject the state transition information into the process of emotional support generation. - Results of extensive experiments on the benchmark dataset demonstrate the effectiveness of TransESC to select the exact strategy and generate more natural and smooth responses. ## 2 Related Works 2.1 Emotional Support Conversation Liu et al. (2021) propose the task of emotional support conversation and release the benchmark dataset ESCONV. They append the support strategy as a special token into the beginning of each supportive response and the following generation process is conditioned on the predicted strategy token. Peng et al. 
(2022) propose a hierarchical graph network to utilize both the global emotion cause and the local user intention. Instead of using the single strategy to generate responses, Tu et al. (2022) incorporate commonsense knowledge and mixed response strategy into emotional support conversation. More recently, Cheng et al. (2022) propose look-ahead strategy planning to select strategies that can lead to the best long-term effects and Peng et al. (2023) attempt to select an appropriate strategy with the feedback of the seeker. However, all existing methods treat the dialogue history as a lengthy sequence and ignore the turn-level transition information that plays critical roles in driving the emotional support conversation in a more smooth and natural way. ## 2.2 Emotional & Empathetic Conversation Endowing emotion and empathy to the dialogue systems has gained more and more attentions recently. To achieve the former goal, both generationbased methods (Zhou et al., 2018; Zhou and Wang, 2018; Shen and Feng, 2020) and retrieval-based (Qiu et al., 2020; Lu et al., 2021) methods attempt ![2_image_0.png](2_image_0.png) to incorporate emotion into dialogue generation. However, it merely meets the basic quality of dialog systems. And to generate empathetic response, previous works incorporate affection (Alam et al., 2018; Rashkin et al., 2019; Lin et al., 2019; Majumder et al., 2020; Li et al., 2020, 2022), cognition (Sabour et al., 2022; Zhao et al., 2022) or persona (Zhong et al., 2020) aspects of empathy. Intuitively, expressing empathy is only one of the necessary steps to achieve effective emotional support. By contrast, emotional support is a more high-level ability that dialogue systems are expected to have. ## 3 Preliminaries 3.1 Esconv Dataset Our research is carried out on the Emotional Support **Conv**ersation dataset, ESCONV (Liu et al., 2021). In each conversation, the seeker with a bad emotional state seeks help to go through the tough. And the supporter is supposed to identify the problem that the seeker is facing, console the seeker, and then provide some suggestions to help the seeker to overcome their problems. The support strategies adopted by the supporter are annotated in the dataset and there are eight types of strategies (e.g., question, *reflection of feelings* and *providing* suggestions). However, ESCONV dataset does not contain keyword sets of each utterance and emotion labels 1for the seeker's turn, we leverage external tools to automatically annotate them. More details about annotation are provided in Appendix A. ## 3.2 Task Definition Formally, let D = [X1, X2, · · · , XN ] denotes a dialogue history with N utterances between the seeker and the supporter, where the i-th utterance Xi = [w i1 , wi2· · · , wim] is a sequence of m words. And each utterance is provided with the extracted set of top k keywords Ki = [k i1 , ki2· · · , kik ]. Besides, the adopted support strategy Si of the supporter and the emotional state label Ei of the seeker are also available for the turn-level supervision. The goal is to generate the next utterance Y from the stand of the supporter that is coherent to the dialogue history D and supportive to reduce the seeker's distress. ## 4 Methodology The overall architecture of our proposed TransESC is shown in Figure 2. The dialogue representations are first obtained through context encoder. 
Then we grasp and propagate the fine-grained transition information, including semantics transition, strategy transition and emotion transition, in the Turn-Level State Transition Module. Finally, to generate more natural and smooth emotional support responses, such transition information is clearly injected into the Transition-Aware Decoder. ## 4.1 Context Encoder We adopt Transformer encoder (Vaswani et al., 2017) to obtain the contextual representations of the dialogue history. Following previous works (Tu et al., 2022), the dialogue is flattened into a word sequence. Then we append the special token [CLS] to the beginning of each utterance and another one for the upcoming response. And the context encoder produces the contextual embeddings Hc ∈ R N×dh . ## 4.2 Turn-Level State Transition In this section, we propose to grasp the turn-level transition information, including semantics transition, strategy transition and emotion transition, to explicitly smooth the emotional support and drive the conversation in a natural way. Specifically, we construct the state transition graph, with three types of state for each node and seven types of edges, to propagate and update the transition information. And all the three states are supervised at each dialogue turn to predict the keyword set of each utterance, the adopted strategy of the supporter and the emotional state of the seeker. State Transition Graph. We construct the state transition graph to grasp and propagate transition information at each dialogue turn. To alleviate the impact of lengthy and redundant dialogue history, we perform the state transition within a fixed window size w. Specifically, we regard the current turn of supporter's response ue as the end and the w-th latest utterance us spoken by the supporter as the start. All the utterances between us and ue constitute the transition window. Nodes: There are three types of states in total, making up each node in the transition graph. Since the adopted strategy and the emotional state are specified for the supporter and the seeker respectively, for the nodes from the supporter's turn, they include the semantics state and the strategy state, while the semantics state and the emotion state constitute the nodes for the seeker's turn. Edges: We build edges to connect each node with all previous ones. Since there are two roles in ESC, it leads to four types of connection ways (e.g. Seeker-Seeker) between any two nodes. And seven types of edge are divided into two groups, the transition edges T and the interaction edges I. For the former ones, they function to transit previous influences and grasp dependencies between states of the same type (e.g. Strategy-Strategy), while the later ones are devised to perform the interaction between different state types (e.g. Strategy-Emotion). The idea behind the interaction types is that decisions of the supporter to choose a certain strategy should focus on what the seeker said and are largely determined by emotional states of him/her. Also, what the supporter expressed and the adopted strategy could directly have impact on the emotional state of the seeker, leading the seeker into the better mood. Graph Initialization. Here we introduce the way to initialize three states for each node. For the **semantics state** and the **strategy state** of each node, they are both initialized by the corresponding [CLSi] token of each utterance. 
And for the **emotion state**, in addition to initialized by the [CLSi] token, we also leverage commonsense knowledge from the external knowledge base ATOMIC (Sap et al., 2019) to imply the emotional knowledge of the seeker at each dialogue turn. Concretely, the generative commonsense transformer model COMET (Bosselut et al., 2019) is adopted to obtain the knowledge. We select relation type *xReact* to manifest the emotional feelings of the seeker. Then the hidden state representations from the last layer of COMET are obtained as the emotional knowledge cski. The final representation of the emotion state is the sum of [CLSi] and cski. Please refer to the Appendix B for the detailed implementation of COMET and definitions of the knowledge relation types in ATOMIC. Transit-Then-Interact. In order to explicitly grasp the turn-level transition information of the three states, we devise the two-step way TransitThen-Interact (TTI) to propagate and update state representations of each node. Specifically, inspired by Li et al. (2021a), the relation-enhanced multihead attention (MHA) (Vaswani et al., 2017) is applied to update node representations from the information of the connected neighbourhoods. The formulation of vanilla MHA could be written as: $${\hat{v}}_{i}=\operatorname*{MHA}_{j\in{\mathcal{N}}}(q_{i},k_{j},v_{j}),\qquad\qquad(1)$$ where MHA(*Q, K, V* ) follows the implementation of multi-head attention (Vaswani et al., 2017) And the key of relation-enhanced multi-head attention (R-MHA) is that we incorporate the embeddings of edge types into the query and the key. Thus, the two-step Transit-Then-Interact process operated on semantics states could be written as: $$s_{i}^{\prime}=\operatorname{R-MHA}(s_{i}+r_{i j},s_{j}+r_{i j},s_{j}),$$ $$s_{i}^{\prime\prime}=\operatorname{R-MHA}(s_{i}^{\prime}+r_{i j},s_{j}^{\prime}+r_{i j},s_{j}^{\prime}),$$ (2) $\frac{1}{2}$ (3) . where eij is the edge type between the semantics states at i-th turn and that of j-th turn. T and I are the transition edge types and the interaction edge types, respectively. rij is the embedding of eij . Then we dynamically fuse the results of transition s′i and interaction s′′ i to obtain the updated semantics state sˆi: $$\begin{array}{c}{{\hat{s}_{i}=g^{t t i}\odot s_{i}^{\prime}+(1-g^{t t i})\odot s_{i}^{\prime\prime}}}\\ {{g^{t t i}=\sigma([s_{i}^{\prime};s_{i}^{\prime\prime}]W^{t t i}+b^{t t i})}}\end{array}\qquad(4)$$ where Wtti ∈ R 2dh×dh and b tti ∈ R dh are trainable parameters. Similarly, the ways to obtain the updated strategy state stˆi and emotion state eˆi are identical to that of the above semantics state sˆi. ## 4.3 State Prediction We utilize the turn-level annotation to supervise the transition information, driving the emotional support conversation in a smooth and natural way. Semantic Keyword Prediction. In order to measure the semantics transition more concretely, inspired by Li et al. (2021b), we calculate the difference ∆i = ˆsi − si between the semantics state before and after the operation TTI. Then we devise a bag-of-words loss to force ∆ito predict the semantics keyword set Ki = [k i1 , ki2· · · , kik ] of the corresponding utterance. $$\begin{array}{l}{{\mathcal{L}_{SEM}=-\sum_{i=1}^{N}\sum_{j=1}^{k}\log p(k_{j}^{i}|\Delta_{i})}}\\ {{\qquad=-\sum_{i=1}^{N}\sum_{j=1}^{k}\log f_{k_{j}^{i}}}}\qquad\qquad(5)}\end{array}$$ where fk i j denotes the estimated probability of the j-th keyword k i j in the utterance ui. 
The function f serves to predict the keyword set of the utterance uiin a non-autoregressive way: $$f=\text{softmax}(W^{sem}\Delta_{i}+b^{sem})\tag{6}$$ where $W^{sem}\in\mathbb{R}^{d_{h}\times|V|}$, $b^{sem}\in\mathbb{R}^{|V|}$ and $V$ refers where Wsem ∈ R sem ∈ R|V |and V refers to the vocabulary size. Supporter Strategy Prediction. After the TTI module, we attempt to explicitly model the dependencies among the adopted supportive strategy during the ESC. Then we utilize the strategy label Si to specify the strategy state at each dialogue turn. $${\hat{y}}_{s t r}=\mathrm{softmax}(W^{s t r}{\hat{s}}t_{i}+b^{s t r})\qquad(7)$$ where yˆstr ∈ R ns, Wstr ∈ R dh×ns and b sem ∈ R ns. ns is the number of total available strategy. Cross entropy loss is utilized and the loss function is defined as: $${\mathcal{L}}_{S T R}=-{\frac{1}{N}}\sum_{i=1}^{N}\sum_{j=1}^{n_{s}}{\hat{y}}_{s t r,i}^{j}\cdot l o g(y_{s t r,i}^{j})\quad\quad(8)$$ where y j str,i stands for the ground-truth strategy label of the utterance i from the supporter. Seeker Emotion Prediction. Similarly, the emotion states ei of each seeker's dialogue turn are also fed into another linear transformation layer: $${\hat{y}}_{e m o}=\mathrm{softmax}(W^{e m o}{\hat{e}}_{i}+b^{e m o})\qquad(9)$$ where yˆemo ∈ R ne, Wemo ∈ R dh×ne and b emo ∈ R ne. ne is the number of total available emotion. Cross entropy loss is also utilized for training: $${\mathcal{L}}_{E M O}=-{\frac{1}{N}}\sum_{i=1}^{N}\sum_{j=1}^{n_{e}}{\hat{y}}_{e m o,i}^{j}\cdot l o g(y_{e m o,i}^{j})\ \ (10)$$ where y j emo,i is the ground-truth emotion label of the utterance i from the seeker. ## 4.4 Transition-Aware Decoder Finally, based on the vanilla Transformer decoder (Vaswani et al., 2017), we devise the transition aware decoder to inject the turn-level transition information into the process of response generation. To make the generation process grounded on the selected strategy, we dynamically fuse the last strategy state stˆ (the adopted strategy for the upcoming response) with the embeddings of the utterance sequence as the input of the decoder: $$\begin{array}{c}{{\hat{E_{i}}=g^{s t r}\odot E_{i}+(1-g^{s t r})\odot\hat{s t}}}\\ {{g^{s t r}=\sigma([E_{i};\hat{s t}]W^{1}+b^{1})}}\end{array}\tag{11}$$ where W1 ∈ R 2dh×dh and b 1 ∈ R dh are trainable parameters and Eiis the i-th embedding token of the response. And for the emotion transition information, we dynamically combine it with the output of the context encoder Hcto explicitly incorporate the emotional states of the seeker. Specifically, the emotion states ei of the seeker and commonsense knowledge e oR iof the supporter, which is generated by the COMET model under the relation type *oReact* to imply what the emotional effect would exert on the seeker after the i-th utterance of the supporter, constitutes the emotional state sequence Hemo. $$\begin{array}{c}{{\hat{H}=g^{e m o}\odot H^{c}+(1-g^{e m o})\odot\hat{H}^{e m o}}}\\ {{\hat{H}^{e m o}=\mathrm{Cross-Att}(H^{c},H^{e m o})}}\\ {{g^{e m o}=\sigma([H^{c};\hat{H}^{e m o}]W^{2}+b^{2})}}\end{array}\tag{12}$$ where $W^{2}\in\mathbb{R}^{2d_{h}\times d_{h}}$ and $b^{2}\in\mathbb{R}^{d_{h}}$ are trainable parameters. parameters. 
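The gated fusion in Eqs. (4), (11), (12) and (14) follows a single recurring pattern: a sigmoid gate computed from the concatenation of two representations blends them element-wise. A minimal PyTorch sketch of this building block, together with the cross-attention step of Eq. (12), is given below; the tensor shapes, the head count, and the use of nn.MultiheadAttention are illustrative assumptions rather than the authors' implementation.

```python
# A minimal PyTorch sketch of the gated fusion used in Eqs. (4), (11),
# (12) and (14): g = sigmoid([x; y] W + b), out = g * x + (1 - g) * y.
# Shapes and the attention module are illustrative assumptions only.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, d_h: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_h, d_h)   # W and b applied to the concatenation [x; y]

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([x, y], dim=-1)))
        return g * x + (1 - g) * y

# Eq. (12): attend from the dialogue encoding H^c to the emotion-state
# sequence H^emo, then gate the attended summary back into H^c.
d_h, n_ctx, n_emo = 300, 24, 6
cross_att = nn.MultiheadAttention(embed_dim=d_h, num_heads=4, batch_first=True)
fuse = GatedFusion(d_h)

H_c = torch.randn(1, n_ctx, d_h)       # contextual embeddings from the encoder
H_emo = torch.randn(1, n_emo, d_h)     # seeker emotion states (plus COMET knowledge)

H_emo_att, _ = cross_att(query=H_c, key=H_emo, value=H_emo)
H_hat = fuse(H_c, H_emo_att)           # emotion-aware encoder states passed to the decoder
```

The fused states Ĥ produced this way are what the decoder attends to in the next step.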
Thus, for the target response $Y=[y_{1},y_{2},\cdots,y_{M}]$, to generate the $t$-th token $y_t$, its hidden representation from the decoder can be obtained as:

$$h_{t}=\mathrm{Decoder}(\hat{E}_{y<t},\hat{H})\tag{13}$$

In the end, we dynamically inject the semantics transition information via the fusion of the last semantics difference representation $\Delta_{i}$ (the latent semantic information for the upcoming utterance) and the hidden representation $h_t$ of the $t$-th token:

$$\hat{h}=g^{sem}\odot h_{t}+(1-g^{sem})\odot\Delta_{i},\qquad g^{sem}=\sigma([h_{t};\Delta_{i}]W^{3}+b^{3})\tag{14}$$

where $W^{3}\in\mathbb{R}^{2d_{h}\times d_{h}}$ and $b^{3}\in\mathbb{R}^{d_{h}}$ are trainable parameters. The distribution over the vocabulary for the $t$-th token can be obtained by a softmax layer:

$$P(y_{t}\mid y_{<t},D)=\mathrm{softmax}(W\hat{h}+b)\tag{15}$$

where $D$ is the input dialogue history. We utilise the standard negative log-likelihood as the response generation loss function:

$${\mathcal{L}}_{GEN}=-\sum_{t=1}^{M}\log P\left(y_{t}\mid D,y_{<t}\right)\tag{16}$$

A multi-task learning framework is adopted to jointly minimize the response generation loss and the semantic keyword, strategy and emotion losses:

$$\mathcal{L}=\gamma_{1}\mathcal{L}_{GEN}+\gamma_{2}\mathcal{L}_{SEM}+\gamma_{3}\mathcal{L}_{STR}+\gamma_{4}\mathcal{L}_{EMO}\tag{17}$$

where $\gamma_{1}$, $\gamma_{2}$, $\gamma_{3}$ and $\gamma_{4}$ are hyper-parameters.

## 5 Experiments

## 5.1 Baselines

We compare our proposed TransESC with the following competitive baselines. They are four empathetic response generators: **Transformer** (Vaswani et al., 2017), **Multi-Task Transformer (Multi-TRS)** (Rashkin et al., 2019), **MoEL** (Lin et al., 2019) and **MIME** (Majumder et al., 2020); and three state-of-the-art models on the ESC task: **BlenderBot-Joint** (Liu et al., 2021), **GLHG** (Peng et al., 2022) and **MISC** (Tu et al., 2022). More details of them are described in Appendix C.

## 5.2 Implementation Details

To be comparable with the baselines, we implement our model based on BlenderBot-small (Roller et al., 2021) with 90M parameters. The window size $w$ of the turn-level transition is 2. The hidden dimension $d_h$ is set to 300, and the numbers of attention heads in the relation-enhanced multi-head attention and the emotion-aware attention graph are 16 and 4, respectively. Loss weights $\gamma_{1}$, $\gamma_{2}$, $\gamma_{3}$ and $\gamma_{4}$ are set to 1, 0.2, 1 and 1, respectively. The AdamW (Loshchilov and Hutter, 2017) optimizer with $\beta_{1}=0.9$ and $\beta_{2}=0.999$ is used for training. We vary the learning rate during the training process with an initial learning rate of 2e-5 and use a linear warmup with 120 warmup steps. The training process is performed on a single NVIDIA Tesla A100 GPU with a mini-batch size of 20. For inference, following Tu et al. (2022), we adopt Top-p and Top-k sampling with p=0.3, k=30, temperature $\tau=0.7$ and a repetition penalty of 1.03.

## 5.3 Evaluation Metrics

Automatic Evaluation. We apply four kinds of automatic metrics for evaluation: (1) Perplexity (PPL) measures the general quality of the generated responses; (2) BLEU-2 (B-2), BLEU-4 (B-4) (Papineni et al., 2002) and ROUGE-L (R-L) (Lin, 2004) evaluate the lexical and semantic aspects of the generated responses; (3) Distinct-n (**Dist**-n) (Li et al., 2016) evaluates the diversity of the generated responses by measuring the ratio of unique n-grams; (4) Accuracy (Acc) of the strategy prediction is utilised to evaluate the model's capability to choose the supportive strategy.
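As a reference for the diversity metric above, the following is a short sketch of a corpus-level Distinct-n computation (unique n-grams divided by total generated n-grams); the exact aggregation in the evaluation scripts used by prior work may differ slightly.

```python
from collections import Counter

def distinct_n(responses, n=2):
    """Corpus-level Distinct-n over a list of tokenized generations."""
    ngram_counts = Counter()
    total = 0
    for tokens in responses:
        for i in range(len(tokens) - n + 1):
            ngram_counts[tuple(tokens[i:i + n])] += 1
            total += 1
    return len(ngram_counts) / total if total > 0 else 0.0

# Toy example with two generated responses.
generations = [["i", "am", "here", "to", "help"],
               ["i", "am", "glad", "to", "help"]]
print(distinct_n(generations, n=1), distinct_n(generations, n=2))
```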
Human Evaluation. Following Liu et al. (2021), we recruit three professional annotators to interact with the models for human evaluation. Specifically, 100 dialogues from the test set of ESCONV are randomly sampled. Then we ask the annotators to act as seekers under these dialogue scenarios and chat with the models. Given TransESC and a compared model, the annotators are required to choose which one performs better (or tie) following five Model Acc PPL D-1 D-2 B-1 B-2 B-3 B-4 R-L Transformer - 89.61 1.29 6.91 - 6.53 - 1.37 15.17 Multi-TRS - 89.52 1.28 7.12 - 6.58 - 1.47 14.75 MoEL - 133.13 2.33 15.26 - 5.93 - 1.22 14.65 MIME - 47.51 2.11 10.94 - 5.23 - 1.17 14.74 BlenderBot-Joint 17.69 17.39 2.96 17.87 18.78 7.02 3.20 1.63 14.92 GLHG - **15.67** 3.50 **21.61 19.66** 7.57 3.74 2.13 16.37 MISC 31.67 16.27 4.62 20.17 16.31 6.57 3.26 1.83 17.24 TransESC (Ours) **34.71** 15.85 **4.73** 20.48 17.92 **7.64 4.01 2.43 17.51** Table 1: Comparison of our model against state-of-the-art baselines in terms of the automatic evaluation. The best results among all models are highlighted in bold. TransESC vs. **BlenderBot-Joint MISC** Win Lose Tie Win Lose Tie Fluency **54.7**‡18.0 27.3 **65.7**‡10.7 23.7 Identification **37.3**‡16.0 46.7 **32.0** 19.3 48.7 Empathy **39.3**‡7.0 53.7 **48.0**‡5.7 46.3 Suggestion **37.0** 27.7 35.3 **46.7**†17.3 36.0 Overall **51.7**‡26.0 22.3 **64.0**‡17.7 18.3 aspects: (1) **Fluency**: which model generates more coherent and smooth responses; (2) **Identification**: which model explores the seeker's problems more effectively; (3) **Empathy**: which model is more empathetic to understanding the seeker's feelings and situations; (4) **Suggestion**: which model offers more helpful suggestions; (5) **Overall**: which model provides more effective emotional support. ## 6 Results And Analysis 6.1 Overall Results Automatic Evaluation. As shown in Table 2, TransESC achieves the new state-of-the-art automatic evaluation results. Benefiting from the grasp of three types of transition information in ESC, TransESC is capable of generating more natural and smooth emotional support responses in terms of almost all the metrics compared to the baselines. Compared with the empathetic response generators, the significant performance gain of TransESC demonstrates that eliciting empathy is only one of the critical procedures of ESC, while identifying the problems faced by the seeker and offering helpful suggestions also constitute the important aspects in ESC. Moreover, although the process of strategy prediction is also explored in BlenderBotJoint and MISC, the prominent performance on strategy selection of TransESC can be ascribed to the explicit turn-level strategy transition modeling, which sufficiently capture the dependencies of different strategies adopted at each supporter's turn. As shown in Figure 3, TransESC also outperforms baselines in terms of all the top-n accuracy. Human Evaluation. For the evaluation setting, it is worth to mention that MISC takes the preconversation "situation" of the seeker as the input, which is not rational because the supporter can only comprehend what the seeker is facing as conversation goes. Thus, for the fair comparison, we do not input the "situation" for all three models. As shown in Table 2, TransESC outperforms them in terms of all evaluation aspects. 
Specifically, it generates more fluent and smooth responses in terms of higher Fluency score, which verifies the benefits of incorporating turn-level transition information to maintain smooth transition between utterances. Also, although all three models may be comparable to identify problems of the seeker, TransESC could elicit more empathetic responses to comfort the seeker and then offer more helpful suggestions. ## 6.2 Ablation Study To explore the impact of three types of transition information, we remove the corresponding state representation with edges in the transition graph, the | Model | Dist-1 | B-2 | B-4 | R-L | |-----------------|----------|-------|-------|-------| | TransESC | 4.73 | 7.64 | 2.43 | 17.51 | | w/o Sem. Trans | 4.55 | 7.04 | 2.13 | 17.37 | | w/o Stra. Trans | 4.29 | 6.68 | 2.01 | 17.15 | | w/o Emo. Trans | 4.82 | 7.14 | 2.22 | 17.45 | | w/o T-L. Trans | 4.19 | 6.35 | 1.94 | 16.88 | | Situation | There is no hope, I am struggling with the pandemic and loneliness Supporter: [Affirmation and Reassurance] I know that days can be really hard. I think ... Seeker: Yeah, I just kind of feel like a failure in life | |------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Context | Seeker: But I am trying, thanks. Supporter: [Affirmation and Reassurance] I understand that there are things in your life ... | | BlenderBot-Joint | [Self-disclosure] I can understand why you are feeling this way. It is very difficult to see people be put down for the things that are bothering you. | | MISC | [Others] I think you are doing the right thing! | | TransESC | [Providing Suggestions] I think that you should try to focus on what is important to you. I know it can be hard to do that when you are feeling down but I believe that you can do it! | | Ground-Truth | [Providing Suggestions] When you feel up to it, do a search for temp agencies near you and hopefully they can give you some leads about a job. | Table 4: Case study of the generated supportive responses by our proposed TransESC and the baselines. | Win. Size | Dist-1 | B-2 | B-4 | R-L | |-------------|----------|-------|-------|-------| | w = 1 | 4.68 | 7.49 | 2.27 | 17.25 | | w = 2 | 4.73 | 7.64 | 2.43 | 17.51 | | w = 3 | 4.49 | 6.52 | 2.26 | 17.29 | | w = 4 | 4.39 | 7.04 | 2.12 | 17.29 | | w = 5 | 4.71 | 6.98 | 2.17 | 17.24 | turn-level label prediction and the injection into the decoder. Besides, to explore the effect of turn-level transition process, we also discard it by predicting three states with the whole dialogue history. As shown in Table 3, the ablation of any types of transition information can lead to a drop in the automatic evaluation results, demonstrating the effectiveness of each one of them. To be more specific, the ablation of the strategy transition (w/o Stra.Trans) causes the most significant performance drop. The reason is that selecting the proper strategy to support the seeker plays the most pivotal role in ESC. And the impact of emotion transition (w/o Emo.Trans) is relatively small. It may be attributed to the noise of annotated emotion labels and the generated emotional knowledge. Moreover, when we remove the whole process of turn-level state transition, the significant performance drop verifies our contribution that grasping the fine-grained transition information can drive the ESC in a more smooth and natural way. 
## 6.3 Case Study In Table 4, we show a case with responses generated by TransESC and two baselines. With the emotion transition and strategy transition, after several turns of comforting, TransESC senses the emotion state joy of the seeker and it is time to offer help- ![7_image_0.png](7_image_0.png) ful suggestions with the correct predicted strategy. And through semantics transition, it grasp the determination of the seeker to suggest him to have a try and encourage him to face the failure. By contrast, MISC and BlenderBot-Joint drive the conversation improperly, leading to the ineffective responses. ## 6.4 Length Of Transition Window We adjust different lengths of transition window for a deeper analysis of the impact of transition information modeling. Results are shown in Table 5. The model with the transition window length of 2 achieves the best performance. On the one hand, capturing the transition information in the shorter window could not sufficiently comprehend dependencies of utterance transition in the dialogue history. On the other hand, much more redundant transition information may be incorporated by the model with longer transition window, which would weaken the performance of our model. ## 7 Conclusion And Future Work In this paper, we propose TransESC to generate emotional support via turn-level state transition information incorporated, including semantics transition, strategy transition and emotion transition. We construct the transition graph with the two-step way, transit-then-interact, to grasp and supervise the transition information at each dialogue turn. Experimental results on both automatic and human evaluation demonstrate the superiority of TransESC to generate more smooth responses. In the future, we will explore more characteristics in ESC such as persona to generate more natural responses. ## 8 Limitations Although our proposed method exhibits great performance to generate more smooth and natural emotional support than baseline models, we argue that the research on this field still has a long way to go. We conclude three aspects that may inspire further exploration. First, the automatically annotated emotion labels may be a little bit coarse and may not accurately manifest the emotional states of the seeker. Second, since various types of commonsense knowledge are not introduced, the current chatbots always generate general and safe responses, failing to provide specific and personalized suggestions to help the seeker get over the dilemma. Finally, current automatic evaluation metrics are still not rational and proper to measure the ability of chabots to provide emotional support. It is desirable to build better evaluation metrics for this. ## 9 Ethics Statement The open-source benchmark dataset ESCONV (Liu et al., 2021) used in our experiments is wellestablished and collected by employed crowdsourced workers, with user privacy protected and no personal information involved. And for our human evaluation, all participants are volunteered and transparently informed of our research intent, with reasonable wages paid. Moreover, our research only focuses on building emotional support systems in daily conversations, like the one to seek the emotional support from our friends or families. It is worth to mention that we do not claim to construct chatbots that can provide professional psycho-counseling or professional diagnosis. 
This requires particular caution and further efforts to construct a safer emotional support system, which is capable of detecting users who have tendencies of self-harming or suicide. ## Acknowledgements We thank the anonymous reviewers for their insightful comments and suggestions. This work was supported by the National Key RD Program of China via grant 2021YFF0901602 and the National Natural Science Foundation of China (NSFC) via grant 62176078. ## References Firoj Alam, Morena Danieli, and Giuseppe Riccardi. 2018. Annotating and modeling empathy in spoken conversations. *Computer Speech & Language*, 50:40– 61. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: commonsense transformers for automatic knowledge graph construction. In *Proceedings* of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4762–4779. Association for Computational Linguistics. Yi Cheng, Wenge Liu, Wenjie Li, Jiashuo Wang, Ruihui Zhao, Bang Liu, Xiaodan Liang, and Yefeng Zheng. 2022. Improving multi-turn emotional support dialogue generation with lookahead strategy planning. arXiv preprint arXiv:2210.04242. Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan S. Cowen, Gaurav Nemade, and Sujith Ravi. 2020. Goemotions: A dataset of fine-grained emotions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4040–4054. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. (comet-) atomic 2020: On symbolic and neural commonsense knowledge graphs. In *Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI* 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 6384–6392. AAAI Press. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,* ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 110–119. The Association for Computational Linguistics. Junyi Li, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan, and Ji-Rong Wen. 2021a. 
Knowledge-based review generation by coherence enhanced text planning. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 183–192. ACM. Qintong Li, Hongshen Chen, Zhaochun Ren, Pengjie Ren, Zhaopeng Tu, and Zhumin Chen. 2020. Empdg: Multi-resolution interactive empathetic dialogue generation. In *Proceedings of the 28th International* Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 4454–4466. International Committee on Computational Linguistics. Qintong Li, Piji Li, Zhaochun Ren, Pengjie Ren, and Zhumin Chen. 2022. Knowledge bridging for empathetic dialogue generation. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 36, pages 10993–11001. Zekang Li, Jinchao Zhang, Zhengcong Fei, Yang Feng, and Jie Zhou. 2021b. Conversations are not flat: Modeling the dynamic information flow across dialogue utterances. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 128–138. Association for Computational Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, and Pascale Fung. 2019. Moel: Mixture of empathetic listeners. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 121–132. Association for Computational Linguistics. Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 3469–3483. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. *CoRR*, abs/1711.05101. Xin Lu, Yijian Tian, Yanyan Zhao, and Bing Qin. 2021. Retrieve, discriminate and rewrite: A simple and effective framework for obtaining affective response in retrieval-based chatbots. In *Findings of the Association for Computational Linguistics: EMNLP 2021,* Virtual Event / Punta Cana, Dominican Republic, 1620 November, 2021, pages 1956–1969. Association for Computational Linguistics. Navonil Majumder, Pengfei Hong, Shanshan Peng, Jiankun Lu, Deepanway Ghosal, Alexander F. Gelbukh, Rada Mihalcea, and Soujanya Poria. 2020. MIME: mimicking emotions for empathetic response generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8968–8979. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311–318. ACL. Wei Peng, Yue Hu, Luxi Xing, Yuqiang Xie, Yajing Sun, and Yunpeng Li. 2022. 
Control globally, understand locally: A global-to-local hierarchical graph network for emotional support conversation. In *Proceedings* of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 4324–4330. ijcai.org. Wei Peng, Ziyuan Qin, Yue Hu, Yuqiang Xie, and Yunpeng Li. 2023. Fado: Feedback-aware double controlling network for emotional support conversation. Knowledge-Based Systems, 264:110340. Lisong Qiu, Yingwai Shiu, Pingping Lin, Ruihua Song, Yue Liu, Dongyan Zhao, and Rui Yan. 2020. What if bots feel moods? In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 1161–1170. ACM. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic opendomain conversation models: A new benchmark and dataset. In *Proceedings of the 57th Conference of* the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5370–5381. Association for Computational Linguistics. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 300–325. Association for Computational Linguistics. Sahand Sabour, Chujie Zheng, and Minlie Huang. 2022. CEM: commonsense-aware empathetic response generation. In *Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI* 2022 Virtual Event, February 22 - March 1, 2022, pages 11229–11237. AAAI Press. Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: an atlas of machine commonsense for if-then reasoning. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 3027–3035. AAAI Press. Lei Shen and Yang Feng. 2020. CDL: curriculum dual learning for emotion-controllable response generation. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 556–566. Association for Computational Linguistics. Quan Tu, Yanran Li, Jianwei Cui, Bin Wang, Ji-Rong Wen, and Rui Yan. 2022. MISC: A mixed strategyaware model integrating COMET for emotional support conversation. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 308–319. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Weixiang Zhao, Yanyan Zhao, Xin Lu, and Bing Qin. 2022. Don't lose yourself! empathetic response generation via explicit self-other awareness. arXiv preprint arXiv:2210.03884. Peixiang Zhong, Chen Zhang, Hao Wang, Yong Liu, and Chunyan Miao. 2020. Towards persona-based empathetic conversational models. *arXiv preprint* arXiv:2004.12316. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In *Proceedings of the* Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 730–739. AAAI Press. Xianda Zhou and William Yang Wang. 2018. Mojitalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1128–1137. Association for Computational Linguistics. | Category | Train | Dev | Test | |----------------------------|---------|--------|--------| | # dialogues | 14116 | 1763 | 1763 | | Avg. # words per utterance | 18.16 | 18.01 | 18.01 | | Avg. # turns per dialogue | 8.61 | 8.58 | 8.48 | | Avg. # words per dialogue | 156.29 | 154.58 | 152.79 | Table 6: The statistics of processed ESConv dataset. ## A Esconv **Dataset** A.1 Keyword And Emotion Annotation Since the original ESCONV dataset does not contain keyword sets of each utterance and emotion labels for the seeker's turn, we leverage external tools to annotate them. To obtain the keyword set of each utterance, we use TF-IDF method. The vocabulary and IDF term are learned from the training set of ESCONV. Then for each utterance, we apply TF-IDF to obtain the top k keywords. For the emotion labels, we fine-tune the BERT model (Devlin et al., 2019) on a fine-grained emotion classification dataset, GoEmotions (Demszky et al., 2020). The the finetuned BERT model achieve an accuracy of 71% on test set, indicating that it is reliable for emotion classification. Then it is used to annotate an emotion label for each utterance from the seeker's turn. ## A.2 Dataset Statistics We carry out the experiments on the dataset ESCONV (Liu et al., 2021) 2. For pre-processing, following (Tu et al., 2022) we truncate the conversation examples every 10 utterances, and randomly spilt the dataset into train/valid/test set with the ratio of 8:1:1. The statistics is given in Table 6. ## A.3 Definitions Of Strategies There are overall 8 types of support strategies that are originally annotated in the ESCONV dataset: - **Question**: ask for information related to the problem to help the help-seeker articulate the issues that they face. - **Restatement or Paraphrasing**: a simple, more concise rephrasing of the supportseeker's statements that could help them see their situation more clearly. - **Reflection of Feelings**: describe the helpseeker's feelings to show the understanding of the situation and empathy. 2https://github.com/thu-coai/Emotional-SupportConversation - **Self-disclosure**: share similar experiences or emotions that the supporter has also experienced to express your empathy. 
- **Affirmation and Reassurance**: affirm the help-seeker's ideas, motivations, and strengths to give reassurance and encouragement. - **Providing Suggestions**: provide suggestions about how to get over the tough and change the current situation. - **Information**: provide useful information to the help-seeker, for example with data, facts, opinions, resources, or by answering questions. - **Others**: other support strategies that do not fall into the above categories. ## B Commonsense Knowledge Acquisition B.1 Description Of Atomic Relations ATOMIC (Sap et al., 2019) is an atlas of everyday commonsense reasoning and organized through textual descriptions of inferential knowledge, where nine if-then relation types are proposed to distinguish causes vs. effects, agents vs. themes, voluntary vs. involuntary events, and actions vs. mental states. We give the brief definition of each relation. - **xIntent**: Why does PersonX cause the event? - **xNeed**: What does PersonX need to do before the event? - **xAttr**: How would PersonX be described? - **xEffect**: What effects does the event have on PersonX? - **xWant**: What would PersonX likely want to do after the event? - **xReact**: How does PersonX feel after the event? - **oReact** How does others' feel after the event? - **oWant** What would others likely want to do after the event? - **oEffect** What effects does the event have on others? ## B.2 Implementation Details Of Comet The generative commonsense transformer model COMET (Bosselut et al., 2019) is adopted to obtain the knowledge. We select relation types *xReact* to manifest the emotional feelings of the seeker at each dialogue turn. Specifically, we adopt the BART-based (Lewis et al., 2020) variation of COMET, which is trained on the ATOMIC-2020 dataset (Hwang et al., 2021). And given each utterance Xi belonging to the self to form the input format (Xi*, r,* [GEN]), COMET would generate descriptions of inferential content under the relation r. Then the hidden state representations from the last layer of COMET are obtained as knowledge representation. ## C Baselines - **Transformer** (Vaswani et al., 2017): The vanilla Transformer-based encoder-decoder generation model. - **Multi-Task Transformer (Multi-TRS)** (Rashkin et al., 2019): A variation of the vanilla Transformer with an auxiliary task to perform emotion perception of the user. - **MoEL** (Lin et al., 2019): A Transformerbased model that captures emotions of the other and generates an emotion distribution with multi decoders. Each decoder is optimized to deal with certain emotions and generate an empathetic response through softly combining the output emotion distribution. - **MIME** (Majumder et al., 2020): Another Transformer-based model with the notion of mimicing the emotion of the other to a varying degree by group emotions into two clusters. It also introduces stochasticity to yield emotionally more varied empathetic responses. - **BlenderBot-Joint** (Liu et al., 2021): A strong baseline model on the ESCONV dataset, which prepends the special strategy token at the beginning of responses and conditions the generation process on it. - **GLHG** (Peng et al., 2022): A hierarchical graph neural network to model the relationships between the global user's emotion causes and the local intentions for emotional support dialogue generation. 
- **MISC** (Tu et al., 2022): An encoder-decoder model that leverages external commonsense knowledge to infer the seeker's fine-grained emotional status and respond skillfully using a mixture of strategy. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 ✓ A2. Did you discuss any potential risks of your work? Section 9 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4,5,6 ✓ B1. Did you cite the creators of artifacts you used? Section 5 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 5 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 5 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We perform experiments on public datasets doing naive incremental modeling works. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3 and Appendix A ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section B D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 5 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 9 ✓ D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 9 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 6 ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Section 9 ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 9
razdaibiedina-etal-2023-residual
Residual Prompt Tuning: improving prompt tuning with residual reparameterization
https://aclanthology.org/2023.findings-acl.421
Prompt tuning is one of the successful approaches for parameter-efficient tuning of pre-trained language models. Despite being arguably the most parameter-efficient (tuned soft prompts constitute <0.1% of total parameters), it typically performs worse than other efficient tuning methods and is quite sensitive to hyper-parameters. In this work, we introduce Residual Prompt Tuning - a simple and efficient method that significantly improves the performance and stability of prompt tuning. We propose to reparameterize soft prompt embeddings using a shallow network with a residual connection. Our experiments show that Residual Prompt Tuning significantly outperforms prompt tuning across T5-Large, T5-Base and BERT-Base models. Notably, our method reaches +7 points improvement over prompt tuning on SuperGLUE benchmark with T5-Base model and allows to reduce the prompt length by 10 times without hurting performance. In addition, we show that our approach is robust to the choice of learning rate and prompt initialization, and is effective in few-shot settings.
# Residual Prompt Tuning**: Improving Prompt Tuning** With Residual Reparameterization Anastasia Razdaibiedina♢ Yuning Mao♠ **Madian Khabsa**♠ Mike Lewis♠ Rui Hou♠ Jimmy Ba♢ **Amjad Almahairi**♠ ♢University of Toronto & Vector Institute ♠Meta AI {sadalsuud, jba}@cs.toronto.edu {yuningm, rayhou, mkhabsa, mikelewis, aalmah}@meta.com ## Abstract ![0_Image_0.Png](0_Image_0.Png) Prompt tuning is one of the successful approaches for parameter-efficient tuning of pretrained language models. Despite being arguably the most parameter-efficient (tuned soft prompts constitute < 0.1% of total parameters), it typically performs worse than other efficient tuning methods and is quite sensitive to hyper-parameters. In this work, we introduce RESIDUAL PROMPT TUNING - a simple and efficient method that significantly improves the performance and stability of prompt tuning. We propose to reparameterize soft prompt embeddings using a shallow network with a residual connection. Our experiments show that RESIDUAL PROMPT TUNING significantly outperforms prompt tuning on SuperGLUE benchmark across T5-Large, T5-Base and BERTBase models. Notably, our method reaches +7 points improvement over prompt tuning with T5-Base and allows to reduce the prompt length by ×10 without hurting performance. In addition, we show that our approach is robust to the choice of learning rate and prompt initialization, and is effective in few-shot settings.1 ## 1 Introduction Pre-trained language models have achieved remarkable performance on a variety of natural language understanding tasks (Devlin et al., 2018; Liu et al., 2019; Raffel et al., 2020). Recent studies have shown that scaling up model size consistently leads to performance gains (Kaplan et al., 2020; Raffel et al., 2020; Zhang et al., 2022), and larger scale models are becoming increasingly more common, e.g. GPT-3, 175B parameters (Brown et al., 2020), MT-NLG, 530B parameters (Smith et al., 2022). Despite the significant performance improvement achieved with larger-scale models, their applicability is limited due to their size. The standard practice of *fine-tuning* becomes prohibitively expensive 1Our code is available at https://github.com/ arazd/ResidualPrompts. Figure 1: Illustration of RESIDUAL PROMPT TUN-ING and comparison with prompt tuning by Lester et al. (2021). a. RESIDUAL PROMPT TUNING reaches stronger performance than prompt tuning (performance with T5-Large model on WSC task is shown). b. Prompt Tuning tunes prompt embeddings P, which are concatenated with input embeddings X and fed into the frozen language model. c. RESIDUAL PROMPT TUN-ING passes the original prompt embeddings P through a shallow network (e.g. MLP) with a residual connection and then prepends them to the input. Embeddings P and MLP parameters are jointly tuned. since it requires storing gradients and optimizer states for all model parameters. Additionally, storing a separate copy of a fine-tuned model for each task is infeasible for billion-parameter models. To address the challenges associated with full model tuning, a line of research has focused on prompt design, where natural language prompts are used to query a frozen model (Brown et al., 2020). In this setup, all tasks are cast as language modeling tasks (e.g. 0/1 classes could be encoded as "True"/"False"), and manually selected prompts condition the frozen model to generate the desired output. 
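As a small illustration of this setup (not code from the paper), a classification task can be verbalized so that a frozen language model answers it by generating a label word; `frozen_lm_generate` below is a hypothetical stand-in for querying such a model.

```python
def verbalize(review: str) -> str:
    # Manually designed prompt; the 0/1 classes are encoded as the words "True"/"False".
    return (f"Review: {review}\n"
            "Question: Is the sentiment of this review positive? Answer True or False.\n"
            "Answer:")

def classify(review: str, frozen_lm_generate) -> int:
    # The pre-trained model is used as-is: no parameters are updated.
    answer = frozen_lm_generate(verbalize(review))
    return 1 if answer.strip().startswith("True") else 0
```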
Despite the fact that prompt design can achieve strong few-shot performance, manually finding optimal prompts remains challenging and time-consuming (Zhao et al., 2021). Additionally, different prompt choices often lead to large variances in the final performance (Zhao et al., 2021; Vu et al., 2021). Recently, Lester et al. (2021) proposed *prompt* tuning - a method of learning *soft prompts* through gradient descent instead of designing the prompts manually. Soft prompts are a series of continuous embeddings prepended to the input, which are updated throughout training, and typically constitute < 0.1% of the total parameters. Notably, prompt tuning has been shown to perform close to full model tuning when model size increases, closing the performance gap when the model contains over 11B parameters (Lester et al., 2021). Nevertheless, prompt tuning still underperforms with smaller models, and its performance can vary significantly depending on the choice of hyperparameters, such as prompt initialization and learning rate (Vu et al., 2021). Furthermore, prompt tuning generally requires long training and a large number of prompt tokens (over 100) to achieve stable performance (Lester et al., 2021). In this work, we present RESIDUAL PROMPT TUNING, a method that can significantly improve and stabilize prompt tuning performance through residual reparameterization of prompt embeddings (Figure 1). RESIDUAL PROMPT TUNING passes soft prompt embeddings through a shallow network with a residual connection, and subsequently prepends reparameterized prompt to the input and feeds to the language model. This reparameterization gives the model more flexibility to decide between using a separate embedding for each prompt token versus the representation obtained from the shared reparameterization network. After training is completed, the reparameterization network can be discarded and original prompt embeddings can be replaced with their projections. We conduct extensive experiments on SuperGLUE tasks with T5-Large, T5-Base and BERTBase models (Raffel et al., 2020; Devlin et al., 2018) and demonstrate that RESIDUAL PROMPT TUNING outperforms previous prompt tuningbased methods by a large margin, achieving +7 points improvement over prompt tuning on SuperGLUE with T5-Base. We also show that RESID-UAL PROMPT TUNING reduces performance variance under different learning rates or prompt initializations, and achieves strong performance with fewer training iterations. Finally, we show that RESIDUAL PROMPT TUNING significantly improves over prompt tuning in few-shot settings. ## 2 Background Fine-tuning. The predominant approach for adapting a pre-trained language model to a downstream task is to fine-tune all its parameters Θ (Devlin et al., 2018; Raffel et al., 2020). Consider a classification task T with input text x, and output scalar label y, where pΘ is a probability distribution of output classes parameterized by the full model weights Θ. The training objective is simply: $$\operatorname*{max}_{\Theta}\sum_{x,y\in T}\log p_{\Theta}(y|x).\qquad\qquad(1)$$ Despite its effectiveness, fine-tuning updates all model parameters, which can be prohibitively expensive for large language models. Prompt Tuning. Lester et al. (2021) proposed prompt tuning as a lightweight alternative to fine-tuning. The main idea is to prepend a sequence of virtual token embeddings, or a *soft* prompt P, to the input text x, and learn only them on the downstream task while keeping other model parameters fixed. 
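A minimal PyTorch-style sketch of this idea is given below: a matrix of n trainable virtual-token embeddings is prepended to the input embeddings, and only these prompt parameters receive gradients while the backbone stays frozen. Class and variable names are ours, not from the original implementation.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """n trainable virtual-token embeddings prepended to the frozen model's input embeddings."""
    def __init__(self, n_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.5)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Only the soft prompt is optimized; the pre-trained backbone is kept frozen,
# e.g. via backbone.requires_grad_(False).
soft_prompt = SoftPrompt(n_tokens=10, embed_dim=768)
optimizer = torch.optim.Adam(soft_prompt.parameters(), lr=0.3)
```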
The model parameters Θ are now composed of the frozen pre-trained language model parameters, and the additional soft prompt parameters θP , which are tuned on the downstream task. The training objective becomes: $$\operatorname*{max}_{\theta_{P}}\sum_{x,y\in T}\log p_{\Theta}(y|[P;x]).\qquad\quad(2)$$ Prompt tuning offers an attractive parameterefficient solution to repurpose pre-trained models for many real-world applications. However, training soft prompts often requires extensive hyperparameter tuning and longer training time to achieve the desired performance (Lester et al., 2021). ## 3 Method 3.1 Residual Prompt T**Uning** We propose to use a more flexible parameterization of soft prompts using a shallow network with a skip connection (Figure 1). Specifically, we project the sequence of prompt embeddings P consisting of n virtual tokens [P1*, ..., P*n] into a reparameterized sequence P′as follows: $$P^{\prime}=[P_{1}^{\prime},...,P_{n}^{\prime}]=[\Phi(P_{1}),...,\Phi(P_{n})],$$ where Φ(·) is a reparameterization function composed of a shallow network ϕ(·) with a residual connection. Φ(·) is applied independently to each prompt token: $$\Phi(P_{i})=\phi(P_{i})+P_{i},\;i\in\{1...n\}$$ Our ϕ(·) network is a multi-layer perceptron (MLP) that follows a "bottleneck" design, as in commonly used ResNet building blocks (He et al., 2016) and adapter modules (Houlsby et al., 2019). It consists of down-projection Wdown ∈ R d×m and upprojection Wup ∈ R m×dlayers (as shown in Figure 2), a combination of which has been thoroughly explored in literature (He et al., 2016; Houlsby et al., 2019). Here, d is the dimensionality of model embeddings and m is the bottleneck size of the MLP (hyperparameter of our approach). We train only the prompt embeddings θP and the repremeterization parameters θϕ on the downstream task, while keeping all other parameters frozen. The training objective is to maximize the log-likelihood of correct output y given the input text x concatenated with the reparameterized soft prompt P′: $$\operatorname*{max}_{\theta_{P},\theta_{\phi}}\sum_{x,y\in T}\log p_{\Theta}(y|[P^{\prime};x]).$$ ## 3.2 Design Choices We discuss here several important design choices for the reparameterization network Φ. Residual connection. We find that residual connection plays a key role in boosting performance and speeding up the convergence in RESIDUAL PROMPT TUNING (Section 5.1, Appendix B.2). Similar to ResNets (He et al., 2016), we hypothesize that residual learning gives the model more flexibility to decide between using a separate embedding for each prompt token versus the representation obtained from the shared network. We discuss further benefits of residual connection in Appendix B.2. Depth and width of MLP. We use two-layer MLP, whose up- and down-projection matrices Wup and Wdown constitute the additional trainable parameters. Increasing the dimensionality m of the hidden layer results in higher performance (see $$({\mathfrak{I}})$$ ![2_image_0.png](2_image_0.png) $$(4)$$ Section 5.6), suggesting that the overparameterization (Allen-Zhu et al., 2019) of prompt tokens is important for the performance improvement. More details on parameter-efficiency are in Appendix A.6. Non-linearity and normalization. We select LayerNorm (Ba et al., 2016) as our normalization layer and ReLU as our non-linearity. We find that LayerNorm helps to stabilize the performance, while the effect of the specific choice of the nonlinear layer is of lesser importance. Parameter sharing. 
In our setup, we apply a shared reparameterization network Φ to each virtual token embedding. Another design choice is to apply a separate network to each prompt embedding. We compare both variants in Section 5.6. Overall, a shared MLP is significantly more parameter-efficient and offers the benefit of knowledge sharing in limited data settings. $$(S)$$ ## 3.3 Training And Inference During training, we jointly optimize prompt embeddings P and parameters of the reparameterization network Φ(·), while keeping the backbone model frozen. The reparameterized prompt is inserted before the input text embeddings and fed into the language model (see details in Section 4.2). Importantly, we use task-specific prompts, meaning that reparameterized prompt embeddings are not dependent on the input. After training is complete, we project prompt embeddings through the learned reparameterization network Φ(·), and replace the original prompt embeddings with their corresponding projections P′ = Φ(P). **During inference, we discard the** reparameterization network and solely use the projected prompt embeddings P′. Specifically, we insert P′in front of the input text embeddings, and feed them together to the frozen pre-trained model. ## 4 Experiments 4.1 Datasets Following previous works on prompt tuning (Lester et al., 2021; Vu et al., 2021), we use NLU tasks from the SuperGLUE benchmark to assess the performance of the language model (Wang et al., 2019). Specifically, we use the following 8 datasets: BoolQ (Clark et al., 2019), CB (De Marneffe et al., 2019), COPA (Roemmele et al., 2011), MultiRC (Khashabi et al., 2018), ReCoRD (Zhang et al., 2018), RTE (Giampiccolo et al., 2007), WiC (Pilehvar and Camacho-Collados, 2018) and WSC (Levesque et al., 2012). More details on are discussed in Appendix A.1, A.2. ## 4.2 Architectures RESIDUAL PROMPT TUNING is a model-agnostic approach that can be used with any transformer architecture - similarly to the original prompt tuning (Lester et al., 2021). In our experiments, we explore the performance of our method with encoder-decoder T5 model2(Raffel et al., 2020) and encoder-only BERT model (Devlin et al., 2018). Specifically, we focus on BERT-Base (110M parameters), T5-Base (220M parameters) and T5-Large (770M parameters) model variants. BERT. For BERT experiments, we insert the trainable prompt in front of the input sequence, but before the [CLS] token, resulting in the following input xˆ to the language model: xˆ = concat[E([CLS]), P′, E(S[EOS])], where P′ is the embeddings matrix of the reparameterized soft prompt, S is the input sentence, [CLS] and [EOS] denote special tokens (for sentence classification and marking end-of-sentence), and E denotes tokenization and extraction of embeddings. To predict the class of input text xˆ, we follow the original (Devlin et al., 2018) setup and use encoder representation of the [CLS] token, h[CLS], and add a linear transformation parameterized by w and a softmax layer to predict the class of xˆ: $$p(y=c|h)={\frac{e^{\mathbf{w_{c}}h_{[\mathrm{cas}]}}}{\sum_{k\in{\mathcal{C}}}e^{\mathbf{w_{k}}h_{[\mathrm{cas}]}}}}$$ After that, we apply cross-entropy loss to perform gradient updates on the prompt embeddings, linear head, and reparameterization network. T5. For T5 experiments we cast all tasks as language modeling tasks, following Raffel et al. (2020); Lester et al. (2021). In this setup, we model the classification task as conditional generation, where output is a sequence of tokens that represent a class label. 
We prepend reparameterized prompt embeddings P′in front of the input text embeddings, hence total input xˆ = concat[P′, E(S)] is passed into the pre-trained language model. T5 model applies a multi-headed self-attention over the input tokens followed by position-wise feed-forward layers to output a distribution over target tokens. We train prompt embeddings and parameters of the reparameterization network with cross-entropy loss. More details on input preprocessing and prompt initialization are in Appendix A.3, A.4. ## 4.3 Baselines We compare RESIDUAL PROMPT TUNING (Res PT) with approaches from two different categories: methods for *prompt reparameterization* and parameter-efficient tuning (PEFT) methods. In our first set of experiments, we study how much residual reparameterization can improve prompt tuning performance and evaluate it against other reparameterization techniques. In sum, we compare our approach with the original prompt tuning (PT; no reparameterization Lester et al. 2021), prompt tuning with MLP reparameterization (PT w/ MLP; Li and Liang 2021), prompt tuning with LSTM reparameterization (PT w/ LSTM; Liu et al. 2021b) and fine-tuning. In our second set of experiments, we assess the benefits of RESIDUAL PROMPT TUNING method versus existing PEFT approaches. In addition to prompt tuning, we include a set of PEFT baselines: Adapter (Houlsby et al., 2019), AdapterDrop (Rücklé et al., 2020), SPoT (Vu et al., 2021), ATTEMPT (Asai et al., 2022). Adapter and AdapterDrop approaches are based on adapters by Houlsby et al. (2019), whereas SPoT and ATTEMPT are | Task → | BoolQ | CB | COPA | MultiRC | ReCoRD | RTE | WiC | WSC | Avg. | |---------------|---------|---------|--------|-----------|----------|-------|-------|-------|--------| | Method ↓ | Acc. | F1/Acc. | Acc. | F1/EM | F1/EM | Acc. | Acc. | Acc. | - | | T5-Large | | | | | | | | | | | Prompt Tuning | 83.4 | 86.4 | 54.0 | 67.9 | 73.3 | 86.4 | 67.5 | 31.0 | 68.7 | | PT w/ MLP | 83.4 | 82.1 | 37.0 | 67.9 | 68.8 | 77.4 | 66.2 | 7.0 | 61.2 | | PT w/ LSTM | 53.8 | 78.9 | 0.0 | 66.4 | 82.1 | 49.5 | 15.2 | 0.0 | 43.2 | | Residual PT | 83.5 | 86.9 | 56.3 | 68.6 | 68.1 | 86.2 | 70.8 | 50.4 | 71.4 | | Fine-tuning† | 85.4 | 93.2 | 83.4 | 67 | 86.3 | 87.8 | 69.3 | 86.3 | 82.3 | | T5-Base | | | | | | | | | | | Prompt Tuning | 78.0 | 77.4 | 58.3 | 59.2 | 59.5 | 63.7 | 66.2 | 37.7 | 62.5 | | PT w/ MLP | 77.5 | 74.8 | 57.7 | 59.5 | 60.8 | 56.0 | 65.2 | 39.5 | 61.4 | | PT w/ LSTM | 51.1 | 5.0 | 3.5 | 12.5 | 32.3 | 43.3 | 54.9 | 43.1 | 30.7 | | Residual PT | 77.9 | 79.2 | 58.3 | 59.3 | 60.2 | 70.4 | 66.8 | 49.1 | 65.2 | | Fine-tuning† | 81.4 | 86.2 | 94.0 | 71.2 | 61.4 | 74.6 | 68.3 | 80.8 | 76.2 | | BERT-Base | | | | | | | | | | | Prompt Tuning | 62.2 | 60.7 | 51.6 | 57.5 | 60.0 | 53.1 | 54.3 | 61.9 | 57.7 | | PT w/ MLP | 62.0 | 61.3 | 53.2 | 58.3 | 62.8 | 48.0 | 54.6 | 64.1 | 58.0 | | PT w/ LSTM | 62.2 | 65.2 | 52.0 | 53.1 | 62.7 | 44.6 | 59.9 | 63.5 | 57.9 | | Residual PT | 62.7 | 67.9 | 63.5 | 59.0 | 61.1 | 54.9 | 57.1 | 63.5 | 61.2 | | Fine-tuning | 73.2 | 89.9 | 65.7 | 66.9 | 62.8 | 65.1 | 67.8 | 63.8 | 69.4 | tranfer learning-based methods for prompt tuning, which find optimal prompt initializations by pretraining prompts on informative source tasks. ## 4.4 Experimental Setup For all experiments with prompt tuning-based methods, we follow standard protocol by Lester et al. (2021) and report results on the validation set. 
Unless otherwise specified, we use standard metrics associated with each task to report final performance (see Table 7). For experiments where we compare RESIDUAL PROMPT TUNING with PEFT methods (Section 5.1.2), we follow PEFT training protocol (Asai et al., 2022; Karimi Mahabadi et al., 2021). More experimental details are in Appendix A.5. | Prompt len. → | 10 tokens | 100 tokens | | | | | |-----------------|-------------|--------------|-------------|-------------|------|------| | Method ↓ | T5L | T5B | BERT T5L | T5B | BERT | | | Prompt Tuning | 68.7 | 62.5 | 57.7 | 74.5‡ 63.1‡ | 59.2 | | | PT w/ MLP | 61.2 | 61.4 | 58.0 | 67.8 | 62.4 | 60.8 | | PT w/ LSTM | 43.2 | 30.7 | 57.9 | 60.1 | 55.2 | 58.8 | | Residual PT | 71.4 | 65.2 | 61.2 | 74.5 | 70.5 | 61.6 | | Fine-tuning | 82.3† 76.2† | 69.4 | 82.3† 76.2† | 69.4 | | | ## 5 Results We describe our main results showing the effectiveness of RESIDUAL PROMPT TUNING compared to other prompt tuning-based methods and parameterefficient methods in Section 5.1. We study the robustness of our method to the choice of hyperparameters in Sections 5.2 and 5.3. Then, we explore the performance of RESIDUAL PROMPT TUNING in more extreme settings, including smaller prompt sizes (Section 5.4) and few-shot data regime (Section 5.5). ## 5.1 Main Results 5.1.1 Comparison With Prompt Tuning We compare RESIDUAL PROMPT TUNING with the original prompt tuning, as well as two different reparameterization methods (via MLP and LSTM). Table 1 shows results for each task with 10-token prompts, and results for 100-token prompts are presented in Appendix B.1. We perform experiments with T5-Large, T5-Base, and BERT-Base model architectures, and with two different prompt sizes: 10 and 100 tokens. Additionally, we include full model tuning results as an upper-bound performance. Table 2 summarizes the average performance on SuperGLUE with 10-token and 100-token prompts across three model variants. RESIDUAL PROMPT TUNING outperforms other methods, gaining +3 points improvement with 10-token prompts on both ![5_image_0.png](5_image_0.png) T5B and T5L models, and over +7 points improvement with 100-token prompts on T5B. Table 1 dissects the performance with 10-token prompts, showing per-task results for all SuperGLUE tasks across three model variants. RESID-UAL PROMPT TUNING leads to consistent improvement over prompt tuning across different tasks. LSTM-based reparameterization shows worse performance compared to our approach. Prompt tuning with MLP reparameterization experiences significant fluctuations depending on the task - with stronger performance on ReCoRD (+0.6 points), but substantially lower score on WiC (−9.6 points) compared to our approach. Overall, RESIDUAL PROMPT TUNING shows strong improvement over prompt tuning and other reparameterization methods across all model architectures. ![5_image_1.png](5_image_1.png) As shown in Figure 4, RESIDUAL PROMPT TUN-ING leads to faster convergence compared to other methods. Notably, the residual connection in the reparameterization network plays a key role in boosting performance - MLP-based reparameterization without skip connection leads to slower converge than vanilla prompt tuning. We discuss convergence in more detail in Appendix B.2. ## 5.1.2 Other Parameter-Efficient Methods We compare the performance of different PEFT methods on SuperGLUE benchmark. Here, for all the experiments, we follow Asai et al. 
(2022) setup and train T5-Base model with a 100-token prompt on a selection of 5 SuperGLUE tasks (details in Appendix A.5). Our results are shown in Table 3. Notably, RESIDUAL PROMPT TUNING achieves significant performance gains over prompt tuning, achieving over +10 points improvement in average score. A major benefit of our method is that it does not require transfer learning on source tasks to achieve strong results, contrary to two other prompt tuning-based methods: SPoT and ATTEMPT. RESIDUAL PROMPT TUNING substantially outperforms SPoT (+6.1 points), and reaches close performance to ATTEMPT (1.5 points difference) without being pre-trained on any source tasks. Further comparison is in Appendix B.3. | Task → | CB | Bool Multi WiC WSC Avg. | | | | | |-----------------|------|---------------------------|------|------|------|------| | Method ↓ | F1 | Acc. | F1 | Acc. | Acc. | Avg. | | Fine-tune∗ | 85.7 | 81.1 | 72.8 | 70.2 | 59.6 | 73.9 | | Adapter∗ | 85.7 | 82.5 | 75.9 | 67.1 | 67.3 | 75.7 | | AdaptDrop∗ 85.7 | 82.3 | 72.9 | 68.3 | 67.3 | 75.3 | | | ATTEMPT∗ 78.6 | 78.8 | 74.4 | 66.8 | 78.6 | 70.5 | | | SPoT∗ | 46.4 | 77.2 | 74.0 | 67.0 | 50.0 | 62.9 | | PT∗ | 67.9 | 61.7 | 58.7 | 48.9 | 51.9 | 57.8 | | Res-PT | 86.0 | 79.0 | 58.9 | 68.4 | 52.6 | 69.0 | ## 5.2 Robustness To The Choice Of Learning Rate We study the performance of RESIDUAL PROMPT TUNING across a wide range of learning rates. Previous works report that prompt tuning is very sensitive to the learning rate and requires extensive hyperparameter search to reach optimal performance (Lester et al., 2021; Vu et al., 2021). We evaluate the performance of our proposed approach and prompt tuning (Lester et al., 2021) with learning rates from {0.001, 0.01, 0.03, 0.3, 10} on SuperGLUE benchmark. For fair comparison, we use the most stable model variant: T5-Large model with 100-token prompt. Our results are shown in Figure 3. Notably, residual reparameterization allows stabilizing prompt tuning performance across a wide range of learning rates. Original prompt tuning often experiences fluctuations in its performance, with some tasks favoring lower learning rates (e.g. MultiRC), other tasks performing better with higher learning rates (e.g. CB), and yet other tasks achieving peak performance at a specific learning rate (e.g. WiC). In contrast to prompt tuning, RESIDUAL PROMPT TUNING is robust to the choice of learning rate - it achieves strong performance with minimal fluctuations (less than 2 points on average SuperGLUE score) with learning rates between 0.01 and 10 (over 100-fold variation). | Task → | CB | WiC | Multi | RTE Avg. | | | |--------------------|----------------|-------|---------|------------|-----------|-----| | Method ↓ | Init. ↓ F1/Acc | Acc | F1/Acc | Acc | - | | | Prompt tune | Rand. | 72.9 | 65.0 | 59.1 | 63.7 65.2 | | | Prompt tune Vocab. | 77.4 | 66.2 | 59.2 | 63.7 66.6 | | | | delta | - | 4.5 | 1.2 | 0.1 | 0.0 | 1.5 | | Res-PT | Rand. | 78.9 | 66.8 | 59.4 | 67.3 68.1 | | | Res-PT | Vocab. | 79.2 | 66.8 | 59.3 | 70.4 68.9 | | | delta | - | 0.3 | 0.0 | -0.1 | 3.1 | 0.8 | ## 5.3 Robustness To The Prompt Initialization Lester et al. (2021) finds that initialization of prompt parameters plays a major role in the final performance. Specifically, initializing prompt embeddings from sampled vocabulary embeddings can boost average SuperGLUE performance by up to +10 points compared to random uniform initialization (Lester et al., 2021). 
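The two initialization schemes compared in this section can be sketched as follows; the sketch assumes access to the frozen backbone's word-embedding matrix, and the function name is illustrative.

```python
import torch

def init_prompt(word_embeddings, prompt_len, mode="vocab"):
    # word_embeddings: (V, d) frozen embedding matrix of the backbone model
    d = word_embeddings.size(1)
    if mode == "vocab":
        # sample prompt tokens uniformly from the whole vocabulary (Appendix A.4)
        idx = torch.randint(0, word_embeddings.size(0), (prompt_len,))
        init = word_embeddings[idx].clone().detach()
    else:
        # random uniform initialization in [-0.5, 0.5]
        init = torch.empty(prompt_len, d).uniform_(-0.5, 0.5)
    return torch.nn.Parameter(init)
```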
Here, we asked if RESIDUAL PROMPT TUNING performance would depend on the choice of initialization. Table 4 shows our results (initialization details are in Appendix A.4; we use T5B model with 10-token prompt). We can see that RESIDUAL PROMPT TUNING is robust to the prompt initialization method, reaching comparable results with both initialization choices: 0.8 points average performance difference between random uniform initialization and sampled vocabulary initialization. Of note, the initialization effect is more pronounced for smaller-scale dataset CB (250 samples) - random initialization attributes to −0.3 performance drop for RESIDUAL PROMPT TUNING versus −4.5 score difference for the original prompt tuning. ## 5.4 Performance And Prompt Length We evaluate the RESIDUAL PROMPT TUNING performance with smaller prompt sizes, and compare it to the original prompt tuning by Lester et al. (2021). Specifically, we explore the performance with prompts of lengths 2, 10, and 100 tokens with T5-Large model. Our results are shown in Table 5. In sum, RESIDUAL PROMPT TUNING improves performance across all prompt lengths over prompt tuning, achieving average improvement of +2.6, +1.1 and +0.8 points with 2, 10, and 100-token prompts correspondingly. | Prompt | CB | WiC | Multi | RTE Avg. | | | |----------|----------|-------|---------|------------|------|------| | Len. ↓ | Method ↓ | Acc | Acc | F1/Acc | Acc | - | | 2 | PT | 91.7 | 67.4 | 84.8 | 81.0 | 81.2 | | 2 | Res-PT | 94.0 | 70.7 | 84.9 | 85.6 | 83.8 | | 10 | PT | 92.9 | 67.7 | 85.0 | 86.4 | 83.0 | | 10 | Res-PT | 94.0 | 71.0 | 85.1 | 86.2 | 84.1 | | 100 | PT | 92.9 | 70.2 | 83.8 | 87.5 | 83.6 | | 100 | Res-PT | 95.2 | 71.3 | 83.8 | 87.0 | 84.4 | Table 5: Comparison of RESIDUAL PROMPT TUNING and prompt tuning by Lester et al. (2021) across different prompt lengths (2, 10, 100 tokens) with T5L model. ## 5.5 Prompt Tuning In Few-Shot Setting We perform further experiments in few-shot settings (Figure 5). Specifically, we sample 5, 20, and 100 samples per class. To avoid variance due to selected samples, we fix the same training subset across all runs for each task; we use T5- Large model and 100-token prompt (as it reaches strongest performance for prompt tuning baseline). RESIDUAL PROMPT TUNING is very effective ![7_image_0.png](7_image_0.png) in few-shot setup, boosting prompt tuning performance by +7 and +2 points on SuperGLUE benchmark with 5 and 20 samples per class. ## 5.6 Ablation Studies Parameter sharing. We ablate the effect of a shared reparameterization network by assessing the performance when each prompt is reparameterized through a separate MLP with a skip connection (Table 6). We select four SuperGLUE tasks of different sizes: small-scale CB and COPA (250 and 400 training examples), and larger-scale WiC and RTE (6,000 and 2,500 training examples). Interestingly, shared reparameterization network is beneficial in the low data regime, outperforming separate networks by +2 points on CB dataset. However, on larger datasets separate networks achieve slightly better performance at the expense of more trained parameters. We show more detailed results in Appendix C.1. | CB | COPA | WiC | RTE Avg. | | | |---------------|--------|-------|------------|------|------| | Acc. | Acc. | Acc. | Acc. 
| - | | | shared MLP | 83.1 | 58.7 | 66.7 | 71.6 | 70.0 | | separate MLPs | 81.1 | 60.3 | 67.8 | 74.5 | 70.9 | Table 6: Performance of RESIDUAL PROMPT TUNING with *shared* and *separate* embedding reparameterization networks on four SuperGLUE tasks with T5B. Overparameterization. To study the effect of overparameterization on the final performance, we ablate MLP width by varying the dimension of MLP hidden layer in the following range: {5, 10, 50, 100, 400, 1500} (Figure 6). Overall, we find that increase in dimensionality leads to performance gains, with performance saturating when the dimension reaches over 50 units. ## 6 Related Work Parameter-efficient tuning methods. Recent approaches have explored parameter-efficient tuning ![7_image_1.png](7_image_1.png) (PEFT) of language models, where only a subset of parameters is trained while the rest of the model is kept frozen. Houlsby et al. (2019) proposed adapters - small modules injected between each transformer layer. To improve over original adapter tuning, several works proposed to remove adapter modules from lower transformer layers (Rücklé et al., 2020), or use a composition of adapter modules (Pfeiffer et al., 2020). Other works focused on low-rank adaptations (LoRA) (Hu et al., 2021), and *prefix tuning* (Li and Liang, 2021). Similarly to adapters, LoRA injects additional trainable weight matrices into each transformer layer, requiring changes to the intrinsic model structure and adding a high number of extra parameters. Prompt tuning methods. To overcome the drawbacks of traditional PEFT methods, Lester et al. (2021) introduced *prompt tuning* as a highly parameter-efficient approach, where tuned soft prompts constitute < 0.1% of the total parameters and can be easily appended to the input without modifying the model. Several methods were recently introduced to improve over prompt tuning. Liu et al. (2021a) proposed adding soft prompts at every transformer layer. While their method improves performance, it requires much more trainable parameters (10x in some cases). Other works explored transfer learning-based methods to find better prompt initialization through pre-training (Vu et al., 2021; Asai et al., 2022). These methods pre-train soft prompts on a collection of source tasks and subsequently use the learned prompt embeddings to initialize prompts for target tasks. Reparameterization methods. Although reparameterization has not been traditionally used with prompt tuning, Li and Liang (2021) explored *reparameterization of embeddings* as a way to improve the performance of prefix tuning, and Liu et al. (2021b) explored reparameterizing injectable embeddings together full model tuning. With these approaches, prefix embeddings are passed through a shallow neural network, such as MLP (in prefix tuning) or LSTM (in GPT2 tuning by Liu et al. (2021b)), before being concatenated to the input embeddings (or representations) and passed into a subsequent layer of the language model. Liu et al. (2021a) explores MLP-based reparameterization for *P-tuning v2*. Despite improvements on some tasks, Liu et al. (2021a) finds that the reparameterization effect is not consistent across datasets and can hinder the performance of certain tasks. ## 7 Conclusion We propose RESIDUAL PROMPT TUNING, a new method for learning soft prompts under a frozen language model using residual reparameterization of prompt embeddings. 
Our method enables efficient learning of soft prompts, without the need for extensive hyperparameter search, long training times, or pre-training on source tasks. The experiments show that RESIDUAL PROMPT TUN-ING significantly outperforms prompt tuning by Lester et al. (2021) and its two variations across three model architectures (T5-Large, T5-Base and BERT-Base) on SuperGLUE benchmark. Furthermore, our method is robust to the hyperparameter choice (learning rate and prompt initialization), speeds up convergence and is highly effective in few-shot settings. ## Limitations Despite the simplicity and strong empirical results, RESIDUAL PROMPT TUNING still has few limitations. First, its performance is still not on par with fine-tuning on (e.g. 7.8 points difference with T5L model and 100-token prompt on SuperGLUE average score). Also, our method uses slightly more parameters than prompt tuning to train the reparameterization network. However, this is not a significant limitation given the full language model size. We have tried to cover several model architectures, but so far we have focused on encoder-decoder (T5) and encoder-only (BERT) models. In future work, we would like to investigate decoder-only methods (e.g. GPT). Another limitation is that our method (similarly to other prompt tuning-based methods) strives to reduce the number of trainable parameters, but uses a longer sequence than the original input text (due to the injected prompt). ## Ethics Statement The main objective of RESIDUAL PROMPT TUN-ING is to improve parameter-efficient tuning of large language models, which makes state-of-theart models more accessible to groups with limited computational and data-labeling resources. We do not believe there is any potential risk in the published code or models in this work, as all of our experiments are based on public data that is widely used in the research community. ## Acknowledgement We would like to thank Victoria Lin and extended FAIR team for their helpful feedback and comments. We also thank Akari Asai and Xiang Lisa Li for our discussions on the prompt tuning methodologies. ## References Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. 2019. A convergence theory for deep learning via overparameterization. In International Conference on Machine Learning, pages 242–252. PMLR. Akari Asai, Mohammadreza Salehi, Matthew E Peters, and Hannaneh Hajishirzi. 2022. Attentional mixtures of soft prompt tuning for parameter-efficient multi-task knowledge sharing. *arXiv preprint* arXiv:2205.11961. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. *arXiv preprint* arXiv:1905.10044. Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The commitmentbank: Investigating projection in naturally occurring discourse. In *proceedings of Sinn und Bedeutung*, volume 23, pages 107–124. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. 
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In *Proceedings of the* ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on* computer vision and pattern recognition, pages 770– 778. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pages 2790–2799. PMLR. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. *arXiv* preprint arXiv:2001.08361. Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient low-rank hypercomplex adapter layers. *Advances in Neural* Information Processing Systems, 34:1022–1035. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In *Proceedings* of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 252–262. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. *arXiv preprint arXiv:2104.08691*. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In *Thirteenth international conference on the principles of* knowledge representation and reasoning. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. *arXiv* preprint arXiv:2101.00190. Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021a. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021b. Gpt understands, too. *arXiv preprint arXiv:2103.10385*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32. Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2020. Adapterfusion: Non-destructive task composition for transfer learning. *arXiv preprint arXiv:2005.00247*. Mohammad Taher Pilehvar and Jose Camacho-Collados. 2018. Wic: the word-in-context dataset for evaluating context-sensitive meaning representations. arXiv preprint arXiv:1808.09121. 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *AAAI spring symposium: logical formalizations of commonsense reasoning*, pages 90–95. Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2020. Adapterdrop: On the efficiency of adapters in transformers. *arXiv preprint* arXiv:2010.11918. Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. 2022. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990. Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2021. Spot: Better frozen model adaptation through soft prompt transfer. *arXiv preprint* arXiv:2110.07904. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. arXiv preprint arXiv:1910.03771. Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. Record: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In *International Conference on Machine Learning*, pages 12697–12706. PMLR. ## Appendix A Implementation And Training A.1 Implementation Details We use PyTorch (Paszke et al., 2019) and HuggingFace Transformers library (Wolf et al., 2019) for our implementation. To download data for SuperGLUE tasks, we use HuggingFace datasets (https: //github.com/huggingface/datasets) (Wang et al., 2019). In our prompt tuning and reparameterization experiments, we follow setup from the previous works on prompt tuning (Lester et al., 2021; Vu et al., 2021), and use the available validation set for each task to report the highest performance. ## A.2 Datasets Table 7 shows details of the eight datasets from SuperGLUE benchmark (Wang et al., 2019) that we used for our experiments, along with their training sizes and evaluation metrics. Following Raffel et al. (2020) and Lester et al. (2021), for tasks that have two evaluation metrics we use the average of both scores as the final performance metric. | Dataset name Train | Task | Domain | Metric | | |----------------------|-------------------------------|----------------------------|------------------|-----------| | 1. BoolQ | 9,427 | QA | Wikipedia | acc. | | 2. 
CB | 250 | NLI | various | F1 & acc. | | 3. COPA | 400 | QA | blogs, encyclop. | acc. | | 4. MultiRC | 5,100 | QA | various | F1 & EM | | 5. ReCoRD | 101K | QA | various | F1 & EM | | 6. RTE | 2,500 | NLI | news, Wiki | acc. | | 7. WiC | 6,000 | WSD lexical databases acc. | | | | 8. WSC | 554/259* coref. fiction books | acc. | | | ## A.3 Tokenization And Preprocessing Following common practice (Lester et al., 2021; Vu et al., 2021; Asai et al., 2022), for all our experiments, we set the maximum input length (including the prepended prompt) to 512 tokens. We use padding to maximum length and mask out the padded tokens. In case of input exceeding 512 tokens, we truncate the input. We do not perform any specific text preprocessing (e.g. removing punctuation) but instead directly tokenize the raw text from SuperGLUE datasets using the corresponding model tokenizer from HuggingFace (Wolf et al., 2019). For **BERT** experiments, we follow Devlin et al. (2018) formatting - the input sequence begins with [CLS] token, and ends with [EOS] token. For tasks with sentence pairs (e.g. RTE), we only insert our soft prompt before the first sentence, and concatenate both sentences with [SEP] token in between. For T5 experiments, we follow Raffel et al. (2020) formatting. We feed input examples along with their descriptors (e.g. "sentence1" and "sentence2"), and cast all classification tasks into textto-text format (e.g. 0 and 1 classes in BoolQ task are cast into "True" and "False") replicating guidelines from Raffel et al. (2020). ## A.4 Prompt Initialization In all our experiments, unless otherwise specified, we initialize prompt virtual tokens using randomly sampled vocabulary embeddings (Lester et al., 2021). We sample uniformly across the whole vocabulary, without limiting to top-k most common tokens. For our studies on performance robustness to the prompt initialization (Section 5.3), we also explore random initialization, where embedding values are sampled uniformly from [−0.5, 0.5] following Lester et al. (2021). ## A.5 Training Details A.5.1 Infrastucture All of our experiments were conducted with 12 GPUs, with 32 GB memory each. On each task, training took between 20 minutes and 26 hours. ## A.5.2 Hyperparameters Following Lester et al. (2021); Vu et al. (2021), we tune each method with a flat learning rate (LR) determined by hyperparameter search. Hyperparameter search was done via manual tuning and settings were selected based on the best SuperGLUE score (we use a subset of 5 tasks as in Asai et al. (2022)). For T5 models, we search LRs from {0.01, 0.1, 0.3, 0.7, 1.0}; based on the search use the following LRs: 0.7 for RESIDUAL PROMPT TUNING, MLP and LSTM-reparameterized prompt tunings, 0.3 for the original prompt tuning (this also agrees with Vu et al. (2021)). For BERT model, we search LRs from {10−6, 5×10−6, 10−5, 2×10−5, 5×10−5, 10−4}; we find LR of 2 × 10−5to achieve the best performance with RESIDUAL PROMPT TUNING and all prompt tuning variations, and use LR of 10−6for fine-tuning according to Wang et al. (2019). In all our experiments, we use batch size of 8 and AdamW optimizer (Loshchilov and Hutter, 2018) with the following hyperparameters: β1 of 0.9, β2 of 0.999, weight decay of 0.01, ϵ of 10−8and bias correction turned on. ## A.5.3 Mlp And Lstm Design For RESIDUAL PROMPT TUNING and prompt tuning w/ MLP we use two-layer MLP as shown in Figure 2. The only design difference between RESID-UAL PROMPT TUNING and prompt tuning w/ MLP is the residual connection. 
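A minimal PyTorch sketch of this design is given below; the exact placement of LayerNorm is one possible choice, and all names are illustrative.

```python
import torch.nn as nn

class ResidualReparamMLP(nn.Module):
    """Shared two-layer MLP with a skip connection, applied to each prompt token.

    Sketch only: a down-projection to a bottleneck of size m, a non-linearity,
    an up-projection back to the embedding size d, LayerNorm, and a residual
    connection so the network learns a correction on top of the raw prompt.
    """
    def __init__(self, d, m):
        super().__init__()
        self.down = nn.Linear(d, m)
        self.up = nn.Linear(m, d)
        self.act = nn.ReLU()
        self.norm = nn.LayerNorm(d)

    def forward(self, prompt):            # prompt: (N, d)
        projected = self.norm(self.up(self.act(self.down(prompt))))
        return prompt + projected         # the residual connection
```

Dropping the final `prompt +` term recovers the prompt tuning w/ MLP baseline.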
We set the hidden layer dimension of MLP to 250 in parameter-efficient experiments (Section 5.1.2), and to 400 in all other experiments. We use ReLU non-linearity and apply LayerNorm normalization. For prompt tuning w/ LSTM we use one-layer bidirectional LSTM with embedding dimension of 300, and dropout of 0.05, following Liu et al. (2021b). ## A.5.4 Training And Evaluation We train all prompt tuning-based methods for 15 epochs in case of 10-token prompts and for 20 epochs in case of 100-token prompts. We run finetuning experiments for 30 epochs. In Section 5.1.2, where we compare parameterefficient methods, we replicate training setup from Asai et al. (2022), and trained our method for 20 epochs (since explored datasets are small-sized and contained less than 10k examples) Since SuperGLUE tasks that we used in our study do not have a test set, we used validation set performance as a final performance metric, following previously used prompt tuning protocols by Lester et al. (2021) and Vu et al. (2021). We checkpoint the models every epoch, and report the highest validation performance. Similarly to Lester et al. (2021), for each task we used its recommended metric by Wang et al. (2019) (see Table 7); for tasks with two corresponding metrics we report the average of both scores. ## A.6 Parameter-Efficiency Of R**Esidual** Prompt T**Uning** The total number of trainable parameters in RESID-UAL PROMPT TUNING consists of 1) trainable prompt embeddings, and 2) reparameterization network, which tunes down-projection Wdown ∈ R d×m and up-projection Wup ∈ R m×dlayers, as well as LayerNorm parameters (as shown in Figure 2). We assume that d is the dimensionality of model embeddings, m is MLP bottleneck size and N is the number of prompt tokens. Hence, we have d×N soft prompt parameters, and m × d + d × m + 2d = 2dm + 2d parameters in the reparameterization network. Thus, RESIDUAL PROMPT TUNING has 2dm + 2d + dN trainable parameters. Importantly, the reparameterization network can be discarded after training, hence we only have dN task-specific parameters. ## B Performance On Superglue B.1 Performance With 100-Token Prompts Table 9 shows the performance of different approaches for prompt tuning (w/ and w/o reparameterization) with 100-token prompts, presenting pertask results for all SuperGLUE tasks across three model variants (T5-Large, T5-Base, BERT-Base). We see that our method, RESIDUAL PROMPT TUN-ING, leads to consistent performance improvement over prompt tuning and two reparameterization methods across different tasks. ## B.2 Convergence Of Different Prompt Tuning Approaches Here, we study the convergence of RESIDUAL PROMPT TUNING, prompt tuning, and prompt tuning with MLP reparameterization. We show the evolution of accuracy and loss over the course of training on several SuperGLUE tasks in Figure 7. We observe that RESIDUAL PROMPT TUNING substantially speeds up convergence over the original prompt tuning by Lester et al. (2021). Notably, the residual connection in the reparameterization network plays a key role in boosting performance - MLP-based reparameterization without skip connection is actually slower to converge than the standard prompt tuning (Figure 7). We hypothesize that this is explained by skip connection making it easier to optimize prompt embeddings. Specifically, skip connection allows to bypass learning the identity function, and learns projections "on top" of the original embeddings instead of learning them from scratch (similar observations by (He et al., 2016)). 
Thus, residual prompt repameterization allows to flexibly combine the original prompt embeddings with embeddings projections, resulting in faster convergence and improved performance. ## B.3 Comparison Of Different Parameter-Efficient Methods Section 5.1.2 compares RESIDUAL PROMPT TUN-ING performance to other PEFT approaches, following Asai et al. (2022). In addition to performance reported in Table 3, here we include specific details of the explored PEFT methods (see Table 8). | Method | Train. params Add. params Pre-train. | | | |-----------|----------------------------------------|------|-----| | Fine-tune | 220M | 0 | No | | Adapter | 1.9M | 1.9M | No | | AdaptDrop | 1.1M | 1.1M | No | | ATTEMPT | 223K | 223K | Yes | | SPoT | 77K | 77K | Yes | | PT | 77K | 77K | No | | Res-PT | 462K | 77K | No | ## C Extended Ablation Studies C.1 Effect Of Shared Reparameterization Network Figure 8 shows performance of RESIDUAL PROMPT TUNING with shared and separate MLP for reparameterization. Interestingly, shared MLP offers better performance on small datasets (e.g. CB) due to knowledge sharing between prompt tokens. At the same time, separate MLPs offer more flexibility and perform better on larger-scale datasets (e.g. WiC). Overall, their performance is similar and shared MLP is a significantly more parameter-efficient variant. Hence, we choose to use shared MLP in our work. | Task → | BoolQ | CB | COPA | MultiRC | ReCoRD | |-----------------|---------|----------|---------|-----------|-------------| | Method | Fl/Acc | F1/EM | F1/EM | | | | Ac. | Ac. | T5-Large | | | | | Prompt Tuning 3 | | | | | | | PT w/ MLP | 83.7 | 87.1 | 52552.7 | 65.65.65. | 7.4 | | Residual PT | 84.2 | 93.3 | 54.3 | 83.9 | 65.9 | | Fine-tuning 1 | 85.4 | 93.2 | 83.4 | 67 | 86.3 | | T5-Base | | | | | | | Prompt Tuning 3 | | | | | | | PT w/ MLP | 72772.7 | 78.7 | 56.3 | 58.1 | 63.0 | | Residual PT | 79.0 | 86.0 | 60.600 | 79.6 | 56.7 | | Fine-tuning ' | 81.4 | 86.2 | 94.0 | 71.2 | 61.4 | | BERT-Base | | | | | | | Prompt Tuning | 62.62 | 62662.5 | 54.6 | 57.4 | 64.6 | | PT w/ MLP | 62.62.3 | 63.7 | 64.0 | 58.3 | 65.65.65.65 | | Residual PT | 62.62.3 | 72.6 | 64.2 | 57.8 | 65.65.65. | | Fine-tuning | 73.2 | 89.9 | 65.7 | 66.9 | 62.8 | | RTE | |-----------| | Acc | | 85.7 | | 87.7 | | 87.8 | | 61.4 | | 81.5 | | 74.6 | | 555555555 | | 5155555 | | 5255255 | | 65.1 | | WiC | |----------| | Ac. | | 68.5 | | 71.1 | | 69.3 | | 66.1 | | 68.68.68 | | 68.3 | | 55.4 | | 57.1 | | 54.2 | | 67.8 | | WSC | Avg. | |-----------|--------| | Ac. | 74.5 | | 21.9 | 67.8 | | 55.3 | 74.5 | | 86.3 | 82.3 | | 63.1 | | | 43.0 | 62.4 | | 525555555 | 70.5 | | 80.8 | 76.2 | | 64.1 | 59.2 | | 64.64 | 60.8 | | 63.8 | 61.6 | | 63.8 | 69.4 | ![14_image_0.png](14_image_0.png) ![15_image_0.png](15_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section "Limitations" ✓ A2. Did you discuss any potential risks of your work? Section "Ethical Considerations" ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 ✓ B1. Did you cite the creators of artifacts you used? Section 6 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 6 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 6 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Appendix A2 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendix A2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A2 ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A5.2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
liang-etal-2023-attend
Attend, Select and Eliminate: Accelerating Multi-turn Response Selection with Dual-attention-based Content Elimination
https://aclanthology.org/2023.findings-acl.422
Although the incorporation of pre-trained language models (PLMs) significantly pushes the research frontier of multi-turn response selection, it brings a new issue of heavy computation costs. To alleviate this problem and make PLM-based response selection models both effective and efficient, we propose an inference framework together with a post-training strategy that builds upon any pre-trained transformer-based response selection model to accelerate inference by progressively selecting and eliminating unimportant content under the guidance of context-response dual-attention. Specifically, at each transformer layer, we first identify the importance of each word based on context-to-response and response-to-context attention, then select a number of unimportant words to be eliminated following a retention configuration derived from evolutionary search, while passing the rest of the representations into deeper layers. To mitigate the training-inference gap posed by content elimination, we introduce a post-training strategy that uses knowledge distillation to force the model with progressively eliminated content to mimic the predictions of the original model with no content elimination. Experiments on three benchmarks indicate that our method effectively speeds up SOTA models without much performance degradation and achieves a better trade-off between speed and performance than previous methods.
## Attend, Select And Eliminate: Accelerating Multi-Turn Response Selection With Dual-Attention-Based Content Elimination Jianxin Liang1,2**, Chang Liu**1,3, Chongyang Tao4**, Jiazhan Feng**1,2**, Dongyan Zhao**1,3,5,6∗ 1 Wangxuan Institute of Computer Technology, Peking University 2 School of Intelligence Science and Technology, Peking University 3 Center for Data Science, Peking University 4 Microsoft Corporation 5Institute for Artificial Intelligence, Peking University 6 National Key Laboratory of General Artificial Intelligence, Peking University {liangjx,liuchang97,fengjiazhan,zhaody}@pku.edu.cn {chotao}@microsoft.com ## Abstract Although the incorporation of pre-trained language models (PLMs) significantly pushes the research frontier of multi-turn response selection, it brings a new issue of heavy computation costs. To alleviate this problem and make the PLM-based response selection model both effective and efficient, we propose an inference framework together with a post-training strategy that builds upon any pre-trained transformer-based response selection models to accelerate inference by progressively selecting and eliminating unimportant content under the guidance of context-response dual-attention. Specifically, at each transformer layer, we first identify the importance of each word based on context-to-response and response-to-context attention, then select a number of unimportant words to be eliminated following a retention configuration derived from evolutionary search while passing the rest of the representations into deeper layers. To mitigate the training-inference gap posed by content elimination, we introduce a posttraining strategy where we use knowledge distillation to force the model with progressively eliminated content to mimic the predictions of the original model with no content elimination. Experiments on three benchmarks indicate that our method can effectively speeds-up SOTA models without much performance degradation and shows a better trade-off between speed and performance than previous methods. ## 1 Introduction Constructing intelligent dialogue systems has attracted wide attention in the field of natural language processing (NLP) in recent years. There are two approaches widely used for the dialogue ∗ Corresponding author: Dongyan Zhao. ## Context A: can someone help me with installing drivers? this is the output file. B: What drivers are you installing A: I try to install the video card drivers, and it says to check out the log file of it. B: Give more detail. How do you try to install those drivers? which log file is that. A: The ones that ship with Ubuntu. Response B: This might be heavily connected, so maybe you have another driver manager running other open windows synaptic. Table 1: A dialogue example from Ubuntu Corpus. The light gray words are eliminated in shadow layers, the light red words are eliminated in mediate layers, and the black words are retained all the time and sent to the deeper layer for the context and response matching. system, generation-based and retrieval-based methods. The former views conversation as a generation problem (Vinyals and Le, 2015; Serban et al., 2016; Zhang et al., 2020b), while the latter aims to select the optimal response from candidates given a dialog context (Wu et al., 2017; Tao et al., 2019b; Xu et al., 2021; Han et al., 2021; Feng et al., 2022). 
Since retrieval-based methods can usually provide fluent and informative responses, they are widely adopted in a variety of industrial applications such as XiaoIce (Shum et al., 2018) from Microsoft and AliMe Assist (Li et al., 2017) from Alibaba. We focus on multi-turn response selection in retrieval-based dialogue systems in this paper. Recently advances of pre-trained language models (Devlin et al., 2019) further push the research frontier of this field by providing a much powerful backbone for representation learning (Whang et al., 2020; Gu et al., 2020) and dialogue-oriented selfsupervised learning (Xu et al., 2021; Zhang and Zhao, 2021; Han et al., 2021). Although significant performance improvement has been made by these PLM-based response selection models, they usually suffer from substantial computational cost and high inference latency due to the growing model size, presenting challenges for their development in resource-limited real-world applications. Therefore, there is an urgent need to accelerate PLMbased response selection models while maintaining their satisfactory performance. To accelerate PLM-based multi-turn response selection, one direct idea is to avoid *unnecessary* calculation when joint modeling dialogue context and response. Through empirical observation, we find that there are many unimportant contents that are either redundant (i.e., repeated by many context turns) or less relevant to the topic, especially in the lengthy dialogue context (Zhang et al., 2018). If accurately identified and appropriately eliminated, the removal of the unnecessary calculation on them can bring minimum performance degradation. Drawing inspiration from Goyal et al. (2020), we propose an inference framework together with a post-training strategy customized for PLM-based multi-turn response selection, where unimportant contents are progressively identified and dropped as the calculation goes from shallow layers to deep. In our framework, we seek to answer three research questions (RQs): (1) how to accurately identify these unimportant contents, (2) how to properly decide the intensity of elimination for these unimportant contents under various computation demands, and (3) how to eliminate unnecessary calculations on those contents at the minimum cost of performance degradation. As the answer to the above questions, we propose an inference framework together with a post-training strategy customized for PLM-based multi-turn response selection as illustrated in Table 1. For RQ1, we propose a dualattention-based method to measure the relative importance of tokens in context and response as we find this method is in accordance with our empirical observation. For RQ2, we adopt evolutionary search (Cai et al., 2019) to build the Pareto Frontier of performance-efficiency map and choose proper retention configurations (i.e., which defines how many tokens are passed to the next layer for each layer) from the frontier. For RQ3, we notice the gap between the proposed efficient inference framework and training and employ knowledge distillation (Hinton et al., 2015) to mitigate this gap by forcing the model with progressively eliminated contents to mimic the predictions of the original model with no content elimination. We evaluate our proposed method on three benchmarks for multi-turn response selection: Ubuntu (Lowe et al., 2015), Douban (Wu et al., 2017) and E-commerce (Zhang et al., 2018). 
Experimental results show that our proposed method can accelerate the inference of PLM-based multiresponse selection models with acceptable performance degradation under various computation constraints, while significantly outperforming previous acceleration methods. We also conduct comprehensive analyses to thoroughly investigate the effectiveness of proposed components. We summarize the contributions of this paper as follows: (1) We propose Attend, Select and Eliminate (ASE), an efficient inference framework customized for PLM-based multi-turn response selection models that identify and progressively eliminate unimportant contents. (2) We propose a knowledge-distillation-based post-training strategy to mitigate the training-inference gap and decrease the performance degradation caused by content elimination. (3) We conduct comprehensive experiments on three benchmarks to verify the effectiveness of our proposed method and prove its superiority over other acceleration methods. ## 2 Related Work Recently, methods based on pre-trained models are relatively popular, Whang et al. (2020) introduced the next sentence prediction and mask language model tasks in the PLMs into the conversation corpus, conducted post-domain training, and finally treated the context as a long sequence, and adjusted the model directly by fine-tuning the model. Compute context-response match scores. Xu et al. (2021) tries to introduce self-supervised learning tasks to increase the difficulty of model training, and the results show the effectiveness of these works. From the perspective of data augmentation, BERT-FP (Han et al., 2021) splits the context into multiple sets of short context-response pairs and introduces a conversational relevance task, which achieves state-of-the-art performance. Although the performance of the pre-training model is powerful, it also brings some problems. ![2_image_0.png](2_image_0.png) The expensive computational cost and high inference latency hinder the further implementation of the PLMs to a certain extent. Some works try to alleviate this problem, one of the branches is to reduce the model size, such as distillation (Jiao et al., 2020; Wang et al., 2021; Liu et al., 2022a,b), structural pruning (Michel et al., 2019; Fan et al., 2019; Gordon et al., 2020; Hou et al., 2020) and quantization (Zafrir et al., 2019; Shen et al., 2020; Zhang et al., 2020a; Bai et al., 2021), etc. Goyal et al. (2020) adopts the Attention Strategy to select the important tokens with a fixed length configuration, but its speed ratio cannot be selected as needed and once full training can only get a model with a fixed speedup. Since existing method Goyal et al. (2020) is mainly evaluated on single-sentence or sentencepair tasks, it not fully suitable for response selection where the model needs to understand the relationship between all the utterances in a dialogue session and learn the interaction of the utterances closely related to the response. Therefore, we propose to select and eliminate the token representation based on context-to-response and responseto-context attention (i.e., dual-attention, **DualA**), which make good use of the relationship between context-response. ## 3 Task Formulation Considering a dialogue system given a dialogue dataset D = {(ci, ri, yi)} n i=1. Each sample in the dataset is a triple that consists of context ci, response ri, and ground truth label yi. ci = {u1, u2*, ..., u*l} is dialogue context with l utterances and {uj} l j=1 are arranged in a temporal order. 
riis a response candidate and yi = 1 represents riis a proper response for the context ci, otherwise yi = 0. The core problem of this research is to learn a matching model M(·, ·) which can measure the matching degree between context and response. ## 4 Methodology We aim to accelerate the inference of PLM-based multi-turn response selection models by proposing Attend, Select and Eliminate (ASE) that progressively identifies and eliminates unimportant contents to avoid unnecessary calculations. The overall framework is illustrated in Figure 1. There are three crucial questions that need to be answered: (1) how to accurately identify the unimportant contents, (2) how to properly decide the intensity of content elimination, and (3) how to effectively mitigate the training-inference gap in our framework and decrease the performance degradation. In the following part of this section, we elaborate on our method by answering the above three research questions. ## 4.1 Content Selection In the specific scenario of multi-turn dialogue, there is a lengthy context with multiple turns and a single sentence of candidate response and the model aims to measure their semantic similarity. To achieve this goal, existing PLM-based methods calculate the interaction of all contents without distinction, regardless of the various importance of contents where many of them are redundant or topic-irrelevant. In order to eliminate them for inference acceleration, we need to accurately identify them first during encoder flow as in Figure 2(b). ![3_image_0.png](3_image_0.png) ## 4.1.1 Empirical Methods The multi-turn context accounts for a large proportion of the input pair (ci, ri), making it a good choice to start our content selection. For multiturn context, the easiest way is to conduct content selection in sentence-level. Empirically, the last few utterances in the dialogue context are more close to the response in the dialogue flow, so they might be more important than the utterances in the beginning. Hereby, we can also simply select the last k utterances in the original context as the new context (i.e., ci = {uj} n j=n+1−k ) and concatenate them with the candidate response, resulting in the setting that we denote as Lastk. Similarly, we can select other context utterances, such as the first k utterances and randomly selected k utterances which are denoted as Firstk and Randk, respectively. ## 4.1.2 Dual-Attention-Based Content Selection Although simply adopting empirical methods (i.e., Lastk) yields plausible results as will be shown in our experiments later, this approach takes all the last k utterances without distinction, regardless of the various importance of utterances and tokens. A reasonable way is to conduct content selection in a more fine-grained manner (i.e., token-level). Recent works have shown that the importance of a token can be measured by the total attention weights it receives from other tokens (Goyal et al., 2020; Kim and Cho, 2021), denoted as AM. However, AM treats all tokens in the input sequence equally without distinction, neglecting the imbalanced relationships between tokens in context and response. Intuitively, for a token in the context, the attention it receives from other context tokens reflects its importance in the context, which we call self-importance, and the attention obtained from response tokens reflects its importance for semantic matching, which we call mutual-importance. 
Therefore, we propose to disentangle the attention received by a token into two parts: (1) the selfattention within a context or response and (2) the mutual-attention between a context and a response, and jointly consider them when measuring the importance of a token, and we call it **DualA**. Specifically, take a token w in the context for example in Figure 2(a), we use the averaged attention weights posed by the response tokens on it as its mutualimportance score, formulated as: $$g_{\mathrm{c,mutual}}(w)={\frac{1}{H\cdot|T_{r e s}|}}\cdot\sum_{h=1}^{H}\sum_{w^{\prime}\in T_{r e s}}A_{h}[w^{\prime},w],\tag{1}$$ where Tres means the set of tokens belonging to the response, Ah represents the attention received by token w from w′ on head h, and H denotes the number of attention heads. While for the selfimportance of w, we adopt the averaged attention weights posed by other context tokens on it: $$g_{\text{c,self}}(w)=\frac{1}{H\cdot|T_{con}|}\cdot\sum_{h=1}^{H}\sum_{w^{\prime}\in T_{con}\atop w^{\prime}\neq w}A_{h}[w^{\prime},w],$$ where $T_{res}$ means the set of context tokens. We $\beta_{0,1}\leq\ 1$. then jointly consider the self-importance and the mutual-importance of w by a weighted sum of gc,self(w) and gc,mutual(w): $$g_{\rm c}(w)=\alpha_{c}\cdot g_{\rm c,self}(w)+\beta_{c}\cdot g_{\rm c,mutual}(w),\tag{3}$$ where αc, βc that satisfy 0 ≤ αc, βc ≤ 1 and αc + βc = 1 are weights for calculating the overall importance score for context tokens. Similarly, we can calculate the overall importance score for the tokens in the response with the only difference lying in the weights for response tokens αr, βr: $$g_{\rm r}(w)=\alpha_{r}\cdot g_{\rm r,self}(w)+\beta_{r}\cdot g_{\rm r,mutual}(w).\tag{4}$$ It should be noted that our method can be viewed as a generalization of typical attention-based importance measurement (Goyal et al., 2020), and can flexibly balance the influence of self-attention and dual-attention parts. ## 4.2 Retention Configuration Search After having the basis for evaluating the importance of the token, the model needs to determine retention configuration, i.e., how to properly decide the intensity of content elimination and how many tokens to keep and pass to deeper encoder layers. Given a PLM-based model M(θ) with m encoder layers, and θ is the parameter of model M. S = {s1, s2, · · · , sn} is a set called *retention configurations* where si = [l1, l2, l3, · · · , lm] is a monotonically non-increasing sequence and lj indicates that lj tokens are kept from the output of the lj−1-th encoder layer and passed to the lj -th encoder layer. According to s, the model M(θ) keeps and eliminates the corresponding number of tokens in each encoder, M(θ) can get faster inference, but the performance may degrade. In theory, there can be l0 l1 × l1 l2 *× · · · ×*lm−1 lm possible combinations for each s. By using evolutionary algorithms (Cai et al., 2019), we search for the Pareto Frontier to make the optimal tradeoffs between performance and efficiency which can satisfy various given computation constraints. ## 4.3 Training Framework In the aforementioned sections, we have introduced our accelerated inference framework for PLMbased multi-turn response selection models. Here, we present our training framework. Given a pre-trained language model such as BERT (Devlin et al., 2019), we first adapt it to the task of multi-turn response selection by using the SOTA method (i.e., BERT-FP (Han et al., 2021)) on some multi-turn response selection dataset, obtaining the model M(θ). 
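A minimal PyTorch sketch of the dual-attention scoring in Eqs. (1)–(4), on which the retention search and training below rely, is given next; it assumes access to one layer's attention tensor and boolean masks over context and response positions, all names are illustrative, and for brevity it omits the w′ ≠ w exclusion in the self-attention terms.

```python
import torch

def dual_attention_scores(attn, ctx_mask, res_mask,
                          alpha_c, beta_c, alpha_r, beta_r):
    # attn: (H, L, L) attention weights of one layer; attn[h, i, j] is the
    #       attention that token j receives from token i on head h
    # ctx_mask / res_mask: (L,) boolean masks for context / response positions
    H = attn.size(0)
    # average attention each token receives from context / response tokens
    from_ctx = attn[:, ctx_mask, :].sum(dim=(0, 1)) / (H * ctx_mask.sum())  # (L,)
    from_res = attn[:, res_mask, :].sum(dim=(0, 1)) / (H * res_mask.sum())  # (L,)

    scores = torch.zeros_like(from_ctx)
    # context tokens: self-importance from the context plus
    # mutual-importance from the response (Eq. 3)
    scores[ctx_mask] = alpha_c * from_ctx[ctx_mask] + beta_c * from_res[ctx_mask]
    # response tokens: the analogous weighting with (alpha_r, beta_r) (Eq. 4)
    scores[res_mask] = alpha_r * from_res[res_mask] + beta_r * from_ctx[res_mask]
    return scores  # higher means more important; keep the top-l_j tokens per layer
```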
Then we conduct retention configuration search (described in Sec. 4.2) based on our proposed method DualA to obtain a set of optimal retention configurations S∗. Now with the trained model M(θ) and S∗ with n retention configurations, we can get n acceleration settings for model inference with various speedup ratio, denoted as G = {M(θ, s1), · · · , M(*θ, s*n)}. Although one can directly utilize M(*θ, s*j ) for faster inference, we argue that there is a gap between the training and our proposed accelerated inference framework. The previously trained model M(θ) didn't encounter the situation where the input sequence of tokens is progressively eliminated from shallow layers to deep layers. Therefore, we propose to mitigate this training-inference gap with once-for-all self-distillation. Specifically, we fix M(θ) as the teacher and make a copy of it as the student. During self-distillation, the teacher receives the complete inputs without content elim- ## Algorithm 1: Model Training Steps Input: PLM (i.e.,BERT*base*) ; Datasets D*train* and Ddev; 1 Initialize retention set S; 2 Training BERTbase on D*train* to get M(θ) using BERT-FP (Han et al., 2021); 3 **repeat** 4 Sort the tokens based on the importance through Eq.(3) and Eq.(4) ; 5 Generate new s′ by evolutionary algorithms (Cai et al., 2019); 6 Update S based on the efficiency and performance on Ddev of M(*θ, s*′); 7 **until** S *converges to get* S∗; 8 **repeat** 9 Randomly sample a configuration sj from S∗; 10 Optimize M(*θ, s*j ) by minimizing K-L divergence through Eq.(5); 11 **until** *convergence*; Output: M(θ∗) and S∗ ination and produces a probability distribution pM(θ)(ci, ri) of whether the response is appropriate to the context or not. While for the student, in order to ensure it can be customized to all retention configurations S∗simultaneously with the same parameters θ∗, we randomly sample the configuration sj and compute its output distribution under content elimination setting as pM(θ′,sj )(ci, ri), which is used to compute the KL-divergence with the teacher's outputs following Hinton et al. (2015): $${\cal L}_{\theta^{\prime}}=D_{\rm KL}(p_{M(\theta)}(c_{i},r_{i})\|p_{M(\theta^{\prime},s_{j})}(c_{i},r_{i})).\tag{5}$$ After self-distillation, we obtain the adapted model M(θ∗) customized for all the searched optimal retention configurations S∗, making our final inference acceleration settings G∗ = {M(θ∗, s1), · · · , M(θ∗, sn)} efficient at the minimum cost of performance degradation. ## 5 Experiments 5.1 Dataset We evaluate our framework on three widely used multi-turn response selection benchmarks: the Ubuntu Corpus (Lowe et al., 2015), the Douban Corpus (Wu et al., 2017)and the E-commerce Corpus (Zhang et al., 2018). 
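For reference, a minimal sketch of one once-for-all self-distillation step from Section 4.3 (cf. Eq. (5) and Algorithm 1) is shown below; the model interfaces, in particular the `retention` argument, are illustrative assumptions rather than an actual API.

```python
import random
import torch
import torch.nn.functional as F

def self_distillation_step(teacher, student, configs, batch, optimizer):
    """One training step: the fixed teacher sees the full input, the student
    runs with a randomly sampled retention configuration s_j and mimics the
    teacher's output distribution via KL divergence (Eq. 5)."""
    s_j = random.choice(configs)                      # sample a retention configuration
    with torch.no_grad():
        teacher_logits = teacher(**batch)             # no content elimination
    student_logits = student(**batch, retention=s_j)  # progressive elimination

    loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean")
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

In practice this KL term is combined with the cross-entropy loss through a weighted sum (Section 5.2); the sketch keeps only the distillation part.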
Model Ubuntu Douban E-commerce R10@1 R10@2 R10@5 Speed MAP MRR P@1 R10@1 R10@2 R10@5 Speed R10@1 R10@2 R10@5 Speed SMN 0.726 0.847 0.961 - 0.529 0.569 0.397 0.233 0.396 0.724 - 0.453 0.654 0.886 - DAM 0.767 0.874 0.969 - 0.550 0.601 0.427 0.254 0.410 0.757 - 0.526 0.727 0.933 - MRFN 0.786 0.886 0.976 - 0.571 0.617 0.448 0.276 0.435 0.783 - - - - - IOI 0.796 0.894 0.974 - 0.573 0.621 0.444 0.269 0.451 0.786 - 0.563 0.768 0.950 - MSN 0.800 0.899 0.978 - 0.587 0.632 0.470 0.295 0.452 0.788 - 0.606 0.770 0.937 - BERT 0.808 0.897 0.975 1x 0.591 0.633 0.454 0.280 0.470 0.828 1x 0.610 0.814 0.973 1x BERT-DPT 0.851 0.924 0.984 1x - - - - - - - - - - - BERT-SL 0.884 0.946 0.990 1x - - - - - - - 0.776 0.919 0.991 1x BERT-FP 0.911 0.962 0.994 1x 0.644 0.680 0.512 0.324 **0.542 0.870** 1x 0.870 **0.956** 0.993 1x BERT+ASE∗ 0.813 0.902 0.976 2.0x 0.591 0.639 0.462 0.283 0.475 0.814 2x 0.664 0.837 0.973 2.3x BERT+ASE† 0.828 0.910 0.979 1.1x 0.602 0.646 0.469 0.290 0.489 0.837 1.3x 0.700 0.852 0.971 1.4x BERT-FP+ASE∗ 0.897 0.955 0.991 1.5x 0.633 0.678 0.511 0.323 0.525 0.844 2x 0.843 0.941 0.993 1.4x BERT-FP+ASE† **0.914 0.964 0.994** 1.1x **0.650 0.691 0.532 0.343** 0.536 0.856 1.4x **0.872** 0.954 **0.996** 1.1x ![5_image_0.png](5_image_0.png) ## 5.2 Experimental Settings We use BERT-FP's trained model to search on the validation set and get k (k<20) different length configurations. We adopt the weighted sum of the distillation loss and the cross-entropy loss, as the training objective function running 5 to 8 epochs. We employ recall rate Rn@k as the evaluation metric. Especially for some samples in the Douban corpus having more than one true candidate response, we use MAP, MRR, and P@1 same as Tao et al. (2019b) and Yuan et al. (2019). For inference efficiency, we employ FLOPs (floating-point operations) speedup ratio compared to the BERT model as the measure, as it is agnostic to the choice of the underlying hardware. To avoid the pseudo improvement by pruning padding, we evaluate all models with input sequences without padding to the maximum length such as to pad length to 256. ## 5.3 Comparison Methods We compare our method with these baselines: (1)Interaction-based Models where the context and response candidate interact with each other at the beginning stage. SMN (Wu et al., 2017), DAM (Zhou et al., 2018), IOI (Tao et al., 2019b), MSN (Yuan et al., 2019), MRFN (Tao et al., 2019a). (2)BERT-based Models where the context and response are concatenated together and feed into BERT-based models to BERT (Devlin et al., 2019), BERT-DPT (Whang et al., 2020), BERT-SL (Xu et al., 2021), BERT-FP (Han et al., 2021). **(3)Inference Accelerated Models** PoWER-BERT (Goyal et al., 2020), L-Adaptive (Kim and Cho, 2021). ## 5.4 Overall Performance Table 2 and Figure 3 shows the overall comparison results with baselines. We can see that with ASE, the performance and efficiency of the BERT and BERT-FP are greatly improved. Specifically, BERT-FP+ASE† performs slightly better than the model BERT-FP on Ubuntu and E-commerce and achieves a significant improvement by 2.0% in P@1 and by 1.9% in R10@1 on Douban. BERTFP+ASE∗achieves comparable performance with a double speed on Douban. The ASE also gives the vanilla BERT significant performance improvement: 9.0% in R10@1 at 1.4x speed, 5.4% in R10@1 at 2.3x on E-commerce, and slightly better performance with a double speed on Ubuntu and Douban. The detail of the BERT with ASE is shown in Appendix. 
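For clarity, a minimal sketch of the Rn@k metric used above is given below; it assumes candidate scores and labels arrive as flat lists grouped in consecutive blocks of n candidates per context, which is an assumption of the sketch rather than a description of the benchmark files.

```python
def recall_at_k(scores, labels, n=10, k=1):
    """R_n@k: fraction of contexts whose positive response is ranked within the
    top k of its n candidates."""
    hits, total = 0, 0
    for start in range(0, len(scores), n):
        candidates = list(zip(scores[start:start + n], labels[start:start + n]))
        candidates.sort(key=lambda x: x[0], reverse=True)
        if any(label == 1 for _, label in candidates[:k]):
            hits += 1
        total += 1
    return hits / total
```

For Douban, where a context can have more than one true candidate response, MAP, MRR and P@1 are reported instead of relying on Rn@k alone.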
Figure 3 compares the effect of combining BERT-FP with three different accelerating methods: ASE, PoWER-BERT, and L-adaptive. It can be seen that with ASE, BERT-FP achieves better results than with other method by a large margin, which demonstrates that extracting important tokens based on dual attention is feasible for accelerating the inference of multi-turn response selection. In contrast, both baselines have shown a large decline due to the incomplete adaptation of the task. ![6_image_0.png](6_image_0.png) ## 5.5 Discussions Comparison between different content selection strategies. Intuitively, the latter utterances may be helpful for the multi-turn response selection. We compare several different strategies, including empirical methods (i.e., Lastk, Firstk, and Randk), the attention-based method AM and dualattention-based method DualA. Figure 4 shows the results of these strategies with k=3, 4, and 5 on Ubuntu. It can be seen that based on the three simple empirical strategies, Lastk, Firstk, and Randk, the model can also achieve good performance with a certain inference speed. Strategy Lastk performs much better than strategy Firstk and Randk, which validates our hypothesis that latter utterances in context may be more helpful and more important for selecting appropriate responses. Most importantly, the performance-efficiency tradeoffs of our proposed strategy based on dual attention are completely better than the other strategies. This result shows that to achieve the effect of faster inference, DualA, a fine-grained strategy of selecting token, is more effective than the utterance-level selection method for the response selection. The effects of using only the k-th utterance from last as the context. To understand the effect of utterances in different positions on the task of response selection, we test the performance using only the k-th from last utterance as context. From the validation set, we first filter out examples where the context is too short and keep the examples where the context consists of more than 6, 8, 10, and 12 utterances on Ubuntu. Then, the k-th utterance from last of the context and the candidate response are concatenated, being fed to a trained model for classification. As experimental results in Figure 5(a) show, the overall performance of the model is relatively low. Even for the last utterance of the context, also the previous turn of the response, the performance is still not high. However, model performance increases rapidly as the utterance position moves forward under these four settings, which means that the closer the utterance to the candidate response, the better the performance for the response selection. This is also in line with the actual chat scene of human beings, where both parties usually respond to each other's current utterance. The distribution of the selected token representations. Under the same retention configuration, the token selected by different strategies will be different. To better observe which tokens are selected by strategies, we divide the dialogue context into three parts, the first third, middle third, and last third of the context. On the Ubuntu IRC V1 corpus, we set the same retention configuration for both strategies, then as the encoder layer deepens, we count the distribution of token in the context part that is selected using AM and DualA. 
In Figure 5(b), under the same retention configuration, it can be seen that under the method AM which uses the total attention weights it receives from other tokens to evaluate the token's importance, as the encoder layer deepens, the proportion of token selected in the last third part is slightly higher, while the first third and the middle third are basically the same. However, there is almost no difference in the distribution of the three parts. While in Figure 5(c), under the method DualA based on the dual-attention of the context and response, it can be seen that as the encoder layer deepens, the percentage of token selected in the first third of the context drops sharply. The middle and last third parts still retain a large part. Until after the ninth encoder layer, the middle and last parts begin to ![7_image_0.png](7_image_0.png) ![7_image_2.png](7_image_2.png) ![7_image_1.png](7_image_1.png) decrease drastically but are still more than the first third part of the context. This is consistent with the results in Figure 5(a). To a certain extent, this result shows that when the attention of response-tocontext is used as the query, the response prefers to focus on the middle and last parts of the context, that is, the tokens that are closer to the response will provide more help in response selection, but are never the same. Hyper-parameter tuning. According to Equation 4, the self-importance gr,self and the mutualimportance gr,mutual have different contributions to selecting tokens. We experiment with the effects on the performance with different gr,self and gr,mutual weights. As shown in Figure 6, the horizontal axis is α/β, which represents the weight coefficient of the gr,self to gr,mutual during the model selecting tokens belonging to the context. It can be seen that as the α/β increases, the tokens selected in the context change, and the performance also gradually improves, reaching the maximum at α/β = 0.25. Consistent with our finds in Figure 4, method DualA is consistently performant than AM by a large margin. These results under different speedup ratios show consistent trends, i.e., the method of selecting tokens based on dual-attention is more effective for the response selection task. The effects of the once-for-all self-distillation. After token selection, we compare model performance on Ubuntu with or without self-distillation. Different from the traditional distillation method, we adopt the once-for-all self-distillation method to distill the teacher's knowledge to the student by sampling different retention configurations during the training. Figure 7 is a comparison of the performance with and without self-distillation. It can be seen that with self-distillation, the performance is significantly improved for the model under all retention configurations, especially at large speedup ratio. As the speedup ratio of the model increases, that is, more tokens are eliminated during ![8_image_0.png](8_image_0.png) inference, and the performance of the model starts to degrade, but the performance improvement of self-distillation is also enhanced. This way of optimizing all the retention in the training once avoids the problem of re-distilling if configuration various during the actual deployment process. The flexibility of ASE. We demonstrate the flexibility of ASE by applying it on top of vanilla BERT. ASE can be easily integrated with any BERT-like model. We use the bert-base model from Huggingface1and finetune it on three benchmarks: Ubuntu, Douban and E-commerce. 
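The dual-attention scoring used throughout these analyses can be illustrated with a minimal sketch, assuming the averaged attention tensor of one encoder layer is available: a context token's self-importance (attention received from other context tokens) and mutual-importance (attention received from response tokens) are combined with the weights α and β discussed above. The exact form of g_{r,self} and g_{r,mutual} in Eqs. (3)–(4) may differ from this simplified reading.

```python
import torch

def dual_attention_importance(attn, ctx_mask, resp_mask, alpha=0.25, beta=1.0):
    """Sketch of a dual-attention importance score for context tokens.

    attn:      [batch, heads, seq, seq] self-attention weights of one encoder layer
               (attn[b, h, i, j] = how much token i attends to token j).
    ctx_mask:  [batch, seq] bool mask marking context tokens.
    resp_mask: [batch, seq] bool mask marking response tokens.
    """
    attn = attn.mean(dim=1)                                         # average over heads
    g_self = (attn * ctx_mask.unsqueeze(-1).float()).sum(dim=1)     # received from context tokens
    g_mutual = (attn * resp_mask.unsqueeze(-1).float()).sum(dim=1)  # received from response tokens
    score = alpha * g_self + beta * g_mutual                        # alpha/beta = 0.25 works best (Fig. 6)
    return score.masked_fill(~ctx_mask, float("-inf"))              # only context tokens are candidates

def retained_context_tokens(score, keep):
    """Indices of the `keep` highest-scoring context tokens in each sequence."""
    return score.topk(keep, dim=-1).indices
```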
Then we apply the Dualattention-based Content Selection method in Section 4.1.2 to search for the optimal retention and perform self-distillation. Figure 8 shows that ASE can boost BERT performance by 2.0% at 1.1x on Ubuntu and 9.0% at 1.4x on E-commerce. ## 6 Conclusion In this paper, we propose a new framework of progressively extracting important tokens and eliminating redundant tokens to accelerate inference for multi-turn response selection, which identifies important tokens based on dual-attention of the context and response. The experimental results empirically verify the effectiveness of this method. In the future, we plan to accelerate inference further by combining it with the layer-wise reduction. ## Limitations During the configuration search stage, because this is a multi-objective optimization problem involving performance and efficiency, we use the evolutionary algorithm to search here. Designing a robust and efficient optimization objective is not simple and it will affect the convergence of search results. 1https://huggingface.co/bert-base-uncased, https://huggingface.co/bert-base-chinese Limited by hardware, and in order to speed up the search, we use a small subset of the validation set to search retention configuration, which is bound to have a certain impact on the overall search results. ## Ethical Statement In this paper, we propose ASE, an algorithm to accelerate multi-turn response selection by prograssively selecting and eliminating unimportant tokens. The training corpora including the Ubuntu Corpus, the Douban Corpus and the E-commerce Corpus used for evaluating our framework are publicly available and don't pose privacy issues. The algorithm that we propose does not introduce ethical or social bias. ## Acknowledgements We would like to thank the anonymous reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106600). ## References Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jin Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King. 2021. BinaryBERT: Pushing the limit of BERT quantization. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4334–4348, Online. Association for Computational Linguistics. Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. 2019. Once-for-all: Train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing transformer depth on demand with structured dropout. *arXiv preprint arXiv:1909.11556*. Jiazhan Feng, Chongyang Tao, Chang Liu, Rui Yan, and Dongyan Zhao. 2022. How to represent context better? an empirical study on context modeling for multi-turn response selection. In *Findings of the* Association for Computational Linguistics: EMNLP 2022, pages 7285–7298, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Mitchell Gordon, Kevin Duh, and Nicholas Andrews. 2020. 
Compressing BERT: Studying the effects of weight pruning on transfer learning. In *Proceedings* of the 5th Workshop on Representation Learning for NLP, pages 143–155, Online. Association for Computational Linguistics. Saurabh Goyal, Anamitra Roy Choudhury, Saurabh Raje, Venkatesan Chakaravarthy, Yogish Sabharwal, and Ashish Verma. 2020. Power-bert: Accelerating bert inference via progressive word-vector elimination. In *International Conference on Machine Learning*, pages 3690–3699. PMLR. Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, and Xiaodan Zhu. 2020. Speaker-aware bert for multi-turn response selection in retrieval-based chatbots. In *Proceedings of the* 29th ACM International Conference on Information & Knowledge Management, pages 2041–2044. Janghoon Han, Taesuk Hong, Byoungjae Kim, Youngjoong Ko, and Jungyun Seo. 2021. Finegrained post-training for improving retrieval-based dialogue systems. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 1549–1558, Online. Association for Computational Linguistics. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531. Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. 2020. Dynabert: Dynamic bert with adaptive width and depth. *Advances in Neural* Information Processing Systems, 33:9782–9793. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4163– 4174, Online. Association for Computational Linguistics. Gyuwan Kim and Kyunghyun Cho. 2021. Lengthadaptive transformer: Train once with length drop, use anytime with search. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6501–6511, Online. Association for Computational Linguistics. Feng-Lin Li, Minghui Qiu, Haiqing Chen, Xiongwei Wang, Xing Gao, Jun Huang, Juwei Ren, Zhongzhou Zhao, Weipeng Zhao, Lei Wang, et al. 2017. Alime assist: An intelligent assistant for creating an innovative e-commerce experience. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 2495–2498. Chang Liu, Chongyang Tao, Jiazhan Feng, and Dongyan Zhao. 2022a. Multi-granularity structural knowledge distillation for language model compression. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1001–1011, Dublin, Ireland. Association for Computational Linguistics. Chang Liu, Chongyang Tao, Jianxin Liang, Tao Shen, Jiazhan Feng, Quzhe Huang, and Dongyan Zhao. 2022b. Rethinking task-specific knowledge distillation: Contextualized corpus as better textbook. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10652–10658, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. *arXiv preprint arXiv:1506.08909*. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? *arXiv* preprint arXiv:1905.10650. 
Iulian Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 30. Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020. Q-bert: Hessian based ultra low precision quantization of bert. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 34, pages 8815–8821. Heung-Yeung Shum, Xiaodong He, and Di Li. 2018. From eliza to xiaoice: challenges and opportunities with social chatbots. arXiv preprint arXiv:1801.01957. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019a. Multirepresentation fusion network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the twelfth ACM international conference on web search and data mining, pages 267–275. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019b. One time of interaction may not be enough: Go deep with an interaction-over-interaction network for response selection in dialogues. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pages 1–11, Florence, Italy. Association for Computational Linguistics. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. *arXiv preprint arXiv:1506.05869*. Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, and Furu Wei. 2021. MiniLMv2: Multi-head selfattention relation distillation for compressing pretrained transformers. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 2140–2151, Online. Association for Computational Linguistics. Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and Heuiseok Lim. 2020. An effective domain adaptive post-training method for bert in response selection. In *INTERSPEECH*, pages 1585– 1589. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrievalbased chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496–505, Vancouver, Canada. Association for Computational Linguistics. Ruijian Xu, Chongyang Tao, Daxin Jiang, Xueliang Zhao, Dongyan Zhao, and Rui Yan. 2021. Learning an effective context-response matching model with self-supervised tasks for retrieval-based dialogues. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14158–14166. Chunyuan Yuan, Wei Zhou, Mingming Li, Shangwen Lv, Fuqing Zhu, Jizhong Han, and Songlin Hu. 2019. Multi-hop selector network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 111–120, Hong Kong, China. Association for Computational Linguistics. Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. Q8bert: Quantized 8bit bert. In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition (EMC2-NIPS), pages 36–39. IEEE. Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, and Qun Liu. 2020a. TernaryBERT: Distillation-aware ultra-low bit BERT. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 509–521, Online. Association for Computational Linguistics. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020b. DIALOGPT : Largescale generative pre-training for conversational response generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics: System Demonstrations, pages 270–278, Online. Association for Computational Linguistics. Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018. Modeling multi-turn conversation with deep utterance aggregation. In *Proceedings of the 27th International Conference on* Computational Linguistics, pages 3740–3752, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Zhuosheng Zhang and Hai Zhao. 2021. Structural pretraining for dialogue comprehension. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5134–5145, Online. Association for Computational Linguistics. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018. Multi-turn response selection for chatbots with deep attention matching network. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1118–1127, Melbourne, Australia. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitiations,7 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract ,1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. 
## C ✓ **Did You Run Computational Experiments?** 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 5 D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
xu-etal-2023-medical
Medical Dialogue Generation via Dual Flow Modeling
https://aclanthology.org/2023.findings-acl.423
Medical dialogue systems (MDS) aim to provide patients with medical services, such as diagnosis and prescription. Since most patients cannot precisely describe their symptoms, dialogue understanding is challenging for MDS. Previous studies mainly addressed this by extracting the mentioned medical entities as critical dialogue history information. In this work, we argue that it is also essential to capture the transitions of the medical entities and the doctor's dialogue acts in each turn, as they help the understanding of how the dialogue flows and enhance the prediction of the entities and dialogue acts to be adopted in the following turn. Correspondingly, we propose a Dual Flow enhanced Medical (DFMed) dialogue generation framework. It extracts the medical entities and dialogue acts used in the dialogue history and models their transitions with an entity-centric graph flow and a sequential act flow, respectively. We employ two sequential models to encode them and devise an interweaving component to enhance their interactions. Experiments on two datasets demonstrate that our method exceeds baselines in both automatic and manual evaluations.
# Medical Dialogue Generation Via Dual Flow Modeling Kaishuai Xu1, Wenjun Hou1,2∗, Yi Cheng1∗, Jian Wang1**, Wenjie Li**1 1Department of Computing, The Hong Kong Polytechnic University, Hong Kong 2Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China {kaishuaii.xu, alyssa.cheng, jian-dylan.wang}@connect.polyu.hk, houwenjun060@gmail.com, cswjli@comp.polyu.edu.hk ## Abstract Medical dialogue systems (MDS) aim to provide patients with medical services, such as diagnosis and prescription. Since most patients cannot precisely describe their symptoms, dialogue understanding is challenging for MDS. Previous studies mainly addressed this by extracting the mentioned medical entities as critical dialogue history information. In this work, we argue that it is also essential to capture the transitions of the medical entities and the doctor's dialogue acts in each turn, as they help the understanding of how the dialogue flows and enhance the prediction of the entities and dialogue acts to be adopted in the following turn. Correspondingly, we propose a Dual Flow enhanced Medical (DFMED) dialogue generation framework. It extracts the medical entities and dialogue acts used in the dialogue history and models their transitions with an entity-centric graph flow and a sequential act flow, respectively. We employ two sequential models to encode them and devise an interweaving component to enhance their interactions. Experiments on two datasets demonstrate that our method exceeds baselines in both automatic and manual evaluations. ## 1 Introduction Medical dialogue systems (MDS) have drawn considerable research attention with the increasing demand for telemedicine, especially after the outbreak of the COVID-19 pandemic (Zeng et al., 2020; Liu et al., 2020; Zhou et al., 2021; He et al., 2022; Xia et al., 2020; Yan et al., 2022), as they can provide much more people with in-time and affordable access to medical services such as health consultation, diagnosis, and prescription. ![0_image_0.png](0_image_0.png) Figure 1: An example of a medical dialogue. *Diagnosis* and *Prescription* are short for *Make a diagnosis* and Prescribe medications, respectively. For MDS, an efficient understanding of the dialogue history is challenging, as patients usually cannot describe their symptoms precisely and tend to convey lots of redundant information unnecessary for diagnosis (Liu et al., 2020; Mengel et al., 2002). To extract the critical information in the lengthy dialogue, previous research focused on identifying the important medical entities mentioned in the context, such as diseases, medicine, and symptoms (Liu et al., 2020; Lin et al., 2021; Liu et al., 2021). In our work, we argue that capturing the transitions of the medical entities and the doctor's dialogue acts in each turn (as depicted in Figure ∗Equal Contributions. 1) is also essential for the construction of MDS, which was largely overlooked by previous studies. In medical dialogues, the flows of medical entities and dialogue acts both follow particular transition patterns. For the medical entity flow, entities to appear in the following utterance are usually closely related to the ones mentioned recently. As in Figure 1, the entities mentioned in adjacent dialogue rounds are logically related, being neighboring nodes in the medical knowledge. 
For the dialogue act flow, though variations are allowed to some extent, it usually needs to follow the medical consultation framework suggested in Silverman et al. (2016). Modeling these two types of transitions would be helpful in dialogue understanding as they effectively capture how the dialogue history flows. Moreover, learning their transition patterns would also enhance the prediction of the dialogue acts and the medical entities to be adopted in the future turn. Based on the above intuition, we propose a Dual Flow enhanced Medical (DFMED) dialogue generation framework. At each dialogue turn, it extracts the medical entities and the dialogue acts used in the dialogue history, and models their transitions with an entity-centric graph flow and a sequential act flow, respectively. Two sequential models are constructed to encode their transitions, with an interweaving component to enhance their interactions. The output representations are then used to predict the entities and the acts to be adopted in the following turn, which are employed to guide the response generation through gate control. Our main contributions are summarized as follows: - We propose a novel MDS framework, DFMED, which models the transitions of medical entities and dialogue acts via step-by-step interweaving. - We summarize the dialogue acts in the medical consultation scenario grounded on the medical documentation standards, SOAP notes (Cameron and Turtle-Song, 2002), including make a diagnosis, *prescribe medications*, etc. - Experimental results show the superiority of DFMED over the previous frameworks and demonstrate the effectiveness of introducing the medical entity and dialogue act flows. ## 2 Related Work Medical dialogue systems aim to provide medical services for patients. Early studies focus on automatic diagnosis in the form of a task-oriented dialogue system, and the purpose is to collect hidden symptoms in minimal turns and make a diagnosis at the end (Liao et al., 2020; Lin et al., 2019; Chen et al., 2021; Liu et al., 2022). Wei et al. (2018) creates a dataset annotated with symptom phrases and constructs a reinforcement learning based MDS. Xu et al. (2019) improves the topic transition in MDS by introducing a medical knowledge graph. With the release of large-scale medical dialogue datasets (e.g., MedDialog (Zeng et al., 2020), MedDG (Liu et al., 2020), and KaMed (Li et al., 2021)), dialogue response generation attracts increasing attention. Liu et al. (2020) frames medical dialogue generation as entity prediction and entity-aware response generation. Furthermore, Liu et al. (2021) unifies the dialogue context understanding and entity reasoning through a heterogeneous graph. Li et al. (2021) considers medical entities in patient and doctor utterances as states and actions and presents semi-supervised variation reasoning with a patient state tracker and a physician action network. The proposed model, VRBot, achieves comparable performance without entity supervision. Lin et al. (2021) analyses a low-resource challenge in medical dialogue generation and develops an entity-involved meta-learning framework to enhance diagnostic experience sharing between different diseases. Although many studies have tried to improve medical dialogue generation by incorporating predicted medical entities, they simply treat entities in different turns as nodes in one entity graph with no entity transition modeling. 
Besides, few works focus on sequential entity-guided dialogue act prediction and sequential act-involved entity selection. Our framework exploits the transition and interaction of entity and act flows to strengthen dialogue understanding and guide response generation. ## 3 Preliminary Problem Formulation. We define a medical dialogue as U={(Pk, Dk)} T k=1, where P and D represent utterances from patients and doctors. Each utterance contains several medical entities E={ei}, and each doctor utterance is annotated with multiple dialogue acts A={aj}. Given the dialogue history Ut={P1, D1*, ..., P*t}, the system is supposed to generate the t-th doctor utterance Dt. Dialogue Acts. We summarize several common dialogue acts implied in medical dialogues. One type is medical-related dialogue acts. We design acts that represent a function in a medical documentation standard, the SOAP note (Cameron and Turtle-Song, 2002), and occur throughout the dialogue. For example, *State a required medical test* and *Prescribe medications* in the "Plan" function of the SOAP note are included in our designed acts. The other type is general open-domain dialogue acts. We choose some acts introduced by Zhao et al. (2022) and further refine them, such as merging acts that behave as social obligation management into *Chitchat*. Eventually, we obtain 7 dialogue acts for flow modeling, and later experiments demonstrate the effectiveness of guidance for response generation. Detailed description of each dialogue act can be found in Appendix A.2. ## 4 Method The proposed method learns two flows (i.e., a medical entity flow and a dialogue act flow) to model the propagation of medical entities and medicalrelated interactions and guide response generation with the corresponding hints. As shown in Figure 2, the overall framework contains two modules. The Dual Flow Modeling module learns the entity and act transitions and infers probable entities and acts. The Response Generation module outputs a response under the guidance of the selected entities and predicted acts. ## 4.1 Dual Flow Modeling Figure 3 left displays the architecture of the Dual Flow Modeling module. To extract flow transitions, we encode the medical entities and the dialogue acts in a sequential way. Besides, at each turn, entity and act embeddings are mutually integrated through an interweaving component, named Interweaver. The context states S c produced by a context encoder also play a role in the integration. After encoding the whole dialogue history, we obtain the entity state S e t and act state S a t . The entity states are adopted to select the relevant entities. The act states are sent to a multi-label act predictor to estimate the next acts. We apply BERT (Devlin et al., 2019) as the context encoder to encode the dialogue history. All utterances are concatenated with a unique role token (i.e., "[P]" or "[D]") to separate the patient and doctor utterances. We compute the context states with different history ranges Uk={P1, D1*, ..., P*k}(k ∈ [1, t]) for supporting the interweaving at each turn. The mean embedding of tokens in history Uk is ap- Acts Entities Dialogue History Dual Flow Modeling Predicted Acts Selected Entities Response Generation Response Figure 2: The overall framework of DFMED. plied as the k-th turn context state S c k ∈ R d, where d is the dimension of the state. 
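A minimal sketch of this context-state computation is given below, using a generic Chinese BERT checkpoint as a stand-in for the encoder used in the paper and re-encoding each history range from scratch for clarity rather than efficiency.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
tokenizer.add_special_tokens({"additional_special_tokens": ["[P]", "[D]"]})
encoder = BertModel.from_pretrained("bert-base-chinese")
encoder.resize_token_embeddings(len(tokenizer))

def context_states(utterances, roles):
    """One context state S^c_k per history range U_k = {u_1, ..., u_k}.

    utterances: list of utterance strings; roles: matching list of "[P]" / "[D]" tags.
    Each state is the mean token embedding of the role-prefixed history (dimension d)."""
    states = []
    for k in range(1, len(utterances) + 1):
        text = " ".join(f"{r} {u}" for r, u in zip(roles[:k], utterances[:k]))
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            hidden = encoder(**inputs).last_hidden_state            # [1, seq, d]
        mask = inputs["attention_mask"].unsqueeze(-1)                # [1, seq, 1]
        states.append((hidden * mask).sum(dim=1) / mask.sum(dim=1))  # mean over real tokens
    return torch.cat(states, dim=0)                                  # [t, d]
```

In practice the shared prefixes could be cached instead of re-encoded, but the per-turn states S^c_1, ..., S^c_t are exactly what the interweaving component described below consumes.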
## 4.1.1 Entity Flow To learn the sequential transition of medical entities, we create the Entity Flow that encodes an entity graph transition and selects the most relevant entities. Specifically, the flow consists of an entity graph for each turn. Medical entities of each dialogue turn and entities from an external knowledge graph with a one-hop connection to the entities in dialogue are included in the graph of the k-th turn, denoted as Gk. Since we assume entities hop along the links on the graph, the ones in the sub-graph partially provide future transition hints. We define graphs that include entities until the t-th turn as G≤t. For all entities in graph G≤t, we use the same encoder as above to get entity embeddings. Instead of randomly initializing an embedding, BERT-based encoding keeps token-level semantics. Then, the average token embedding of each entity is employed as the raw embedding, denoted as h e0 ∈ R d. Finally, Graph Attention Network (GAT) (Velickovic et al., 2018) is implemented to merge neighboring information for each entity: $$\alpha_{i j}^{k}=\frac{\exp\left(\sigma_{1}\left(a^{\mathsf{T}}[W^{k}h_{i}^{e_{0}}||W^{k}h_{j}^{e_{0}}]\right)\right)}{\sum_{\mu\in\mathcal{N}_{i}}\exp\left(\sigma_{1}\left(a^{\mathsf{T}}[W^{k}h_{i}^{e_{0}}||W^{k}h_{\mu}^{e_{0}}]\right)\right)},\tag{1}$$ $$\mathrm{(1)}$$ $$h_{i}^{e}=\left[\sigma_{2}\left(\sum_{j\in\mathcal{N}_{i}}\alpha_{i j}^{k}W^{k}h_{j}^{e0}\right)\right]_{k=1}^{K},\tag{2}$$ where h e i ∈ R dis the updated entity embedding, K is the number of heads, a ∈ R 2dis a trainable weight, Wk ∈ R dh×dis the linear transformation matrix for the k-th head, σ1 and σ2 are the activation function, and Ni represents neighboring entities that connect to entity i with one hop. The updated embedding is used to compute each turn's overall graph embedding via mean pooling: $${\bar{h}}_{k}^{e}={\frac{1}{|G_{k}|}}\sum_{i\in G_{k}}h_{i}^{e},k\in[1,t],\qquad\quad(3)$$ ![3_image_0.png](3_image_0.png) where h¯ek ∈ R dis the entity graph embedding, t represents the turn of the target response. Following Tu et al. (2022), we employ a GRU to model the entity graph transition throughout the dialogue. The GRU takes all the previous graph embeddings {h¯e1 , h¯e2 , ..., h¯e t } as input and produces the entity state as follows: $$S_{t}^{e}=\mathrm{GRU_{Entity}}(S_{t-1}^{e},\bar{h}_{t}^{e}),$$ t), (4) where S e t ∈ R dis the final hidden state of the GRU. We denote S e t as the entity state that implies clues for the entity transition in the previous context. Then, we apply the entity state S e t to calculate relevant scores for candidate entities. These entities are from the sub-graph of entities in a historical dialogue context. The score is defined as follows: $${\mathrm{Score}}=\left\langle S_{t}^{e},h_{i}^{e}\right\rangle,i\in G_{\leq t}^{\mathbf{l}},\qquad\qquad(5)$$ where ⟨,⟩ represents a similarity function, h e i is the entity embedding, G1≤t is the one-hop sub-graphs for entities until the t-th turn. We select top-k relevant entities Eˆtto guide response generation. ## 4.1.2 Act Flow To learn the medical-related interactions of the doctor, we design the Act Flow that encodes act sequences and predicts the next acts. Specifically, the flow is composed of the act sequence of each turn. We first randomly initialize the trainable embedding of each dialogue act, denoted as h a ∈ R d. The act sequence of the k-th turn can be defined as Ak = {h a 1 , ha 2 , ..., hanka }, where n ka represents the number of dialogue acts for each turn. 
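Both flows follow the same skeleton, pooling per-turn embeddings and feeding them to a GRU, so a single sketch suffices; the one below covers the entity side (Eqs. 1–5). It is a minimal sketch with illustrative names and dimensions: a multi-head self-attention layer stands in for the neighbor-restricted GAT of Eqs. (1)–(2) (a library implementation such as torch_geometric's GATConv would be the closer choice).

```python
import torch
import torch.nn as nn

class EntityFlow(nn.Module):
    """Sketch of Eqs. (1)-(5): aggregate each turn's entity graph, mean-pool it,
    run a GRU over the per-turn graph embeddings, and score candidate entities."""

    def __init__(self, d=768, heads=4):
        super().__init__()
        # Stand-in for the GAT of Eqs. (1)-(2); it attends over all entities of the
        # turn rather than only one-hop neighbours.
        self.aggregate = nn.MultiheadAttention(d, heads, batch_first=True)
        self.gru = nn.GRU(d, d, batch_first=True)

    def turn_embedding(self, entity_emb):
        """entity_emb: [n_entities, d] BERT embeddings of one turn's graph G_k."""
        x = entity_emb.unsqueeze(0)                 # [1, n, d]
        h, _ = self.aggregate(x, x, x)              # neighbour aggregation
        return h.mean(dim=1)                        # graph embedding of turn k (Eq. 3): [1, d]

    def forward(self, per_turn_entity_embs, candidate_embs, top_k=20):
        graph_embs = torch.stack(
            [self.turn_embedding(e) for e in per_turn_entity_embs], dim=1)  # [1, t, d]
        _, s_e = self.gru(graph_embs)               # entity state S^e_t (Eq. 4): [1, 1, d]
        scores = candidate_embs @ s_e.view(-1)      # dot-product relevance (Eq. 5)
        return scores.topk(min(top_k, scores.numel())).indices
```

The act flow below replaces the graph aggregation with a plain mean over the turn's act embeddings and ends in a sigmoid classifier instead of a ranking head.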
Then, we compute the act sequence embedding h¯a k ∈ R d, k ∈ [1, t] through mean pooling. Similar to the entity flow, we employ a GRU to model dialogue act transition. With all sequence embeddings as input, the final hidden state of the GRU is calculated and denoted as the act state: $$S_{t}^{a}=\mathrm{GRU}_{\mathrm{Act}}(S_{t-1}^{a},\bar{h}_{t}^{a}),$$ $$(6)$$ $\mathbf{a}$ t), (6) $$(4)$$ where S a t ∈ R dis the act state of the t-th turn. Then, the multi-act probability of the t-th turn is computed based on the act state S a t with a sigmoid and linear transformation layer: $$\mathrm{Prob}=\mathrm{sigmoid}(W_{a}S_{t}^{a}+b_{a}),$$ $$\left(7\right)$$ t + ba), (7) where Wa ∈ R na×dand ba ∈ R na are model parameters, and na denotes the number of candidate dialogue acts. The predicted dialogue acts Aˆt are obtained through an appropriate threshold. ## 4.1.3 Flow Interweaving To achieve the integration of these two flows, we present an Interweaver to extend the entity graph embedding and act sequence embedding, as shown in Figure 3 right. This component incorporates the dialogue context into entity/act states and integrates entity/act sequential information from each other. For the entity flow, we first fuse the historical context into the entity graph embedding. The context-aware graph embedding of each turn is computed via cross-attention: $$\alpha_{k i}=\mathrm{softmax}(\frac{Q_{k}^{c\mathsf{T}}K_{i}^{e}}{\sqrt{d}}),\qquad\qquad(8)$$ $$\bar{h}_{k}^{e^{c}}=\sum_{i\in G_{k}}\alpha_{k i}V_{i}^{e},k\in[1,t],\qquad\qquad(9)$$ where h¯e c k ∈ R dis the context-aware graph embedding at the k-th turn, Ke iand V e iare linear projected vectors based on the entity embedding h e i , and Qck is based on the context state S c k . We denote this operation on S c k and [h e i ]i∈Gk as CA(S c k , [h e i ]i∈Gk ). Then, the act transition is fused into the graph embedding via CA as follows: $$\bar{h}_{k}^{e^{a}}=\mathrm{CA}(\bar{h}_{k}^{e},[h_{j}^{a}]_{j\in A_{\leq k}}),k\in[1,t],$$ ), k ∈ [1, t], (10) where h¯e a k ∈ R dis the act-aware graph embedding, and A≤k represents act sequences until the k-th turn. The final entity graph embedding is defined as the concatenation of three embeddings: $$\bar{h}^{e^{\prime}}=[\bar{h}^{e};\bar{h}^{e^{e}};\bar{h}^{e^{a}}],$$ a], (11) where h¯e′ k ∈ R 3dis the extended graph embedding that incorporates historical dialogue context and act transition pattern. For the act flow, we apply the CA operation to compute the context-aware and entity-aware act sequence embedding following the same way. The context-aware sequence embedding h¯a c kis based on S c k and [h a j ]j∈Ak , and the entity-aware sequence embedding h¯a e kis based on h¯a k and [h e i ]i∈G≤k . These two embeddings incorporate the historical dialogue context and entity transition pattern. Then, the final sequence embedding h¯a′∈ R 3dis concatenated as follows: $$\bar{h}^{a^{\prime}}=[\bar{h}^{a};\bar{h}^{a^{c}};\bar{h}^{a^{c}}].$$ We send the extended embeddings h¯e′and h¯a′ instead of the pure embeddings to the above GRUs, allowing two flow models to acquire context information and lead each other. ## 4.1.4 Training Objective The training of the dual flow modeling module includes two tasks, medical entity ranking and multi-act classification. 
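Before specifying the two objectives, the interweaving of Eqs. (8)–(12) can be summarized in a short sketch. The CA operator is written here as plain scaled dot-product cross-attention, with the linear projections of Eq. (8) omitted for brevity, so this illustrates the mechanism rather than the exact parameterization.

```python
import torch

def cross_attend(query, keys):
    """CA(q, K): one query vector attends over a set of key/value vectors (Eqs. 8-9)."""
    d = query.size(-1)
    weights = torch.softmax(query @ keys.t() / d ** 0.5, dim=-1)   # [n]
    return weights @ keys                                           # [d]

def interweave(graph_emb, act_emb, context_state, entity_embs, act_hist_embs):
    """Extend the turn-k embeddings with context- and cross-flow information.

    graph_emb, act_emb, context_state: the turn-k graph embedding, act-sequence
    embedding, and context state, each of shape [d].
    entity_embs / act_hist_embs: entity and act embeddings available at turn k,
    of shapes [n_e, d] and [n_a, d]."""
    h_e_c = cross_attend(context_state, entity_embs)     # context-aware graph embedding
    h_e_a = cross_attend(graph_emb, act_hist_embs)        # act-aware graph embedding
    h_a_c = cross_attend(context_state, act_hist_embs)    # context-aware act embedding
    h_a_e = cross_attend(act_emb, entity_embs)            # entity-aware act embedding
    extended_graph = torch.cat([graph_emb, h_e_c, h_e_a], dim=-1)  # Eq. (11): [3d]
    extended_act = torch.cat([act_emb, h_a_c, h_a_e], dim=-1)      # Eq. (12): [3d]
    return extended_graph, extended_act
```

These 3d-dimensional vectors are what the two GRUs consume in place of the pure per-turn embeddings.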
The first one follows a contrastive learning (Gao et al., 2021) way with a negative log likelihood loss: $${\mathcal L}_{e}=-\sum_{t}^{T}\sum_{t^{+}}\log{\frac{e^{\left\langle S_{t}^{e},h_{t^{+}}^{e}\right\rangle}}{\sum_{i^{-}\in G_{\leq t}^{1}}e^{\left\langle S_{t}^{e},h_{i^{-}}^{e}\right\rangle}}},\tag{13}$$ where h e t+ is the embedding of a target entity mentioned in the t-th response, ⟨,⟩ is the dot product operation that calculates relevant scores (see Eq. 5), and G1≤t denotes the one-hop sub-graphs for ![4_image_0.png](4_image_0.png) $$(10)^{\frac{1}{2}}$$ the dialogue history until the t-th turn. We randomly select several entities instead of the whole sub-graph as negative entities. The second one is defined as a multi-label classification with a binary cross-entropy loss: $${\mathcal{L}}_{a}=\sum_{t}^{T}\sum_{j}^{n_{a}}{\mathrm{BCE}}({\hat{y}}_{t,j}^{a},y_{t,j}^{a}),\qquad(14)$$ $$(11)$$ where yˆ a t,j is the probability of the dialogue act j for the t-th response (see Eq. 7), and y a t,j is the groundtruth act label. The overall training objective of the dual flow modeling module can be calculated as: $${\mathcal{L}}_{F}=\lambda_{e}{\mathcal{L}}_{e}+\lambda_{a}{\mathcal{L}}_{a},\qquad\qquad(15)$$ where λe and λa are weights for each task. ## 4.2 Flow-Guided Response Generation $$(12)$$ After training the dual flow modeling module, we exploit the top-k relevant entities Eˆt(see Sec. 4.1.1) and predicted dialogue acts Aˆt(see Sec. 4.1.2) to guide response generation. We first encode the entity/act and dialogue history separately, allowing a relatively complete dialogue context. Then, these two types of information is merged into the decoder via a fusion component. Act-Entity Fusion As shown in Figure 4, the dialogue acts and entities are concatenated into one token sequence. We assign a unique token to each dialogue act. Given the entity/act sequence and dialogue history sequence as input, the corresponding encoder produces final hidden states, Hea and Hc. We fuse two types of information through a gate mechanism in the decoder: $$\begin{array}{l}{{h_{w}^{l,e a}=\mathrm{CA}(h_{w}^{l},H^{e a}),}}\\ {{h_{w}^{l,c}=\mathrm{CA}(h_{w}^{l},H^{c}),}}\\ {{g_{w}^{l}=\mathrm{sigmoid}(W^{l}h_{w}^{l,c}),}}\\ {{h_{w}^{\prime}=\mathrm{FFN}(g_{w}^{l}h_{w}^{l,c}+(1-g_{w}^{l})h_{w}^{l,e a}),}}\end{array}$$ w ), (19) where h l′w is the final hidden state at the w-th token after fusion, h lw denotes the output of the l-th selfattention layer, FFN is the feed forward network, and Wlis the trainable parameter. We adopt the final hidden state h L′ w of the decoder to compute the probability distribution of the next token: $$p(D_{t,w+1})=\mathrm{softmax}(W_{d}h_{w}^{L^{\prime}}+b_{d}),\tag{20}$$ where Wd and bd are linear mapping parameters. Training and Inference When training the response generation module, we use the selected top-k entities from the dual flow modeling module and ground truth dialogue acts as encoder input. The training objective is defined as follows: $${\mathcal{L}}_{G}=-\sum_{t}^{T}\sum_{w=1}^{w}\log p(D_{t,w+1}),\qquad(21)$$ where p(Dt,w+1) denotes the probability of the next token in the t-th response. Then in inference, we apply the top-k entities and predicted dialogue acts to generate the next response. ## 5 Experiments 5.1 Dataset Our experiments are conducted on two medical dialogue datasets, MedDG (Liu et al., 2020) and KaMed (Li et al., 2021). The MedDG dataset contains 17K dialogues, focusing on 12 diseases in the gastroenterology department. 
The medical entities mentioned in the dialogues are annotated in MedDG. We split the dataset into 14862/1999/999 dialogues as the training/validation/test sets, following its original division. The KaMed dataset contains over 63K dialogues, covering diverse diseases in about 100 hospital departments. We filter out some dialogues with privacy concerns in KaMed (see Appendix A.3) and obtain 29,159/1,532/1,539 dialogues as the training/validation/test sets. The final data accounts for around 51% of the total. ## 5.2 Baseline Models We compare DFMED with five baseline models. (1) **Seq2Seq** (Sutskever et al., 2014) is an RNNbased sequence to sequence model with an attention mechanism. (2) **HRED** (Serban et al., 2016) is a hierarchical RNN that models a dialogue as a token sequence and an utterance sequence. (3) **GPT-2** (Radford et al., 2019) is a transformer decoder-based language model. (4) **BART** (Lewis et al., 2020) is a transformer-based encoderdecoder model. (5) **VRBot** (Li et al., 2021) is a medical dialogue generation model with patient entity tracking and doctor entity learning. For the experiments on the MedDG dataset, we extend Seq2Seq, HRED, GPT-2, and BART with entity hints, following Liu et al. (2020). Specifically, the extracted medical entities are appended at the end of the input sequence. In the following, these models with entity modeling are displayed with the **-Entity** suffix (e.g., Seq2seq-Entity). ## 5.3 Evaluation Metrics Automatic Evaluation. We evaluate modules in the proposed framework separately. For the dual flow modeling module, the top-20 recall rate (R@20) of target entities and the weighted F1 score (Weighted-F1) of different dialogue acts considering the act imbalance are adopted. For the response generation module, BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) scores in different n-grams (i.e., B-1, B-2, B-4, R-1, and R-2) are adopted to assess the response quality. Besides, we also measure the precision, recall, and F1 of entities (i.e., E-P, E-R, and E-F1) in responses to demonstrate the reliability following Liu et al. (2020). Human Evaluation. We chose 100 cases at random and invited three annotators to evaluate them manually. Results of the proposed framework are compared with different baseline models. We use three metrics to evaluate all of the generated responses based on past research (Liu et al., 2020; Li et al., 2021): *sentence fluency* (FLU), *knowledge* accuracy (KC), and *entire quality* (EQ). On a 5point Likert scale, from 1 (worst) to 5 (best), three annotators are asked to score these responses. ## 5.4 Implementation Details For all baselines, we implement the open-source algorithm following Liu et al. (2020) and Li et al. (2021). We use the MedBERT1 pretrained in the medical domain as the backbone of the dual flow modeling module. To extract the medical entities in the dialogue history, we refer to the medical knowledge graph CMeKG2and extract the text spans that match the string of nodes in the graph. Entities with a one-hop connection to the historical entities are target ones for entity flow modeling. Besides, following Yan et al. 
(2022), to identify 1https://github.com/trueto/medbert 2http://cmekg.pcl.ac.cn/ | w/o Pre-training w/ Pre-trained LM | |--------------------------------------| Methods B-1 B-2 B-4 R-1 R-2 E-P E-R E-F1 Seq2Seq 28.55 22.85 15.45 25.61 11.24 16.79 10.44 12.88 Seq2Seq-Entity 29.13 23.22 15.66 25.79 11.42 **23.79** 15.89 19.06 HRED 31.61 25.22 17.05 24.17 9.79 15.56 10.12 12.26 HRED-Entity 32.84 26.12 17.63 24.26 9.76 21.75 15.33 17.98 VRBot 29.69 23.90 16.34 24.69 11.23 18.67 9.72 12.78 GPT-2 35.27 28.19 19.16 28.74 13.61 18.29 14.45 16.14 GPT-2-Entity 34.56 27.56 18.71 28.78 13.62 21.27 17.10 18.96 BART 34.94 27.99 19.06 29.03 **14.40** 19.97 14.29 16.66 BART-Entity 34.14 27.19 18.42 28.52 13.67 23.49 16.90 19.66 DFMED 42.56† 33.34† **22.53**† 29.31 14.21 22.48 22.84† **22.66**† w/o Act Flow 36.79 29.18 19.81 **29.45** 14.26 22.73 21.70 22.20 w/o Entity Flow 42.14 32.83 21.95 29.26 13.73 16.86 22.06 19.11 w/o Interweaving 42.35 33.02 22.19 29.02 14.11 22.14 20.62 21.34 Methods B-1 B-2 B-4 R-1 R-2 Seq2Seq 23.52 18.56 12.13 23.56 8.67 HRED 26.75 21.08 13.91 22.93 7.80 VRBot 30.04 23.76 16.36 18.71 7.28 GPT-2 33.76 26.58 17.82 26.80 10.56 BART 33.62 26.43 17.64 27.91 11.43 DFMED 40.20† 30.97† 20.76† 28.28† **11.54**† w/o Act Flow 35.47 28.11 18.78 27.97 11.45 w/o Entity Flow 39.14 29.92 19.73 27.17 10.47 w/o Interweaving 39.34 30.45 20.38 28.03 11.39 the dialogue acts, we apply an open-source pseudolabeling algorithm3to automatically label each utterance. We train the dual flow modeling module with the AdamW (Loshchilov and Hutter, 2019) optimizer. The learning rate and batch size are 4e-5 and 12 with 1000 warmup steps. The best loss weights λe and λa are 1 and 0.05 through grid searching. After training ten epochs, checkpoints with the highest average F1 for act prediction and the highest recall rate for top-20 entity selection on the validation set are selected. We select the threshold for each dialogue act, which achieves the best F1 score for the corresponding act on all validation samples. Then, we use Chinese pre-trained BART*base* 4 model with a six-layer encoder and a six-layer decoder for the response generation module. The entity/act encoder and context encoder share the same encoder. We adopt the AdamW optimizer and set the learning rate to 3e-5 with 2000 warmup steps. The model is trained for ten Table 3: Human evaluation results on MedDG. epochs with a batch size of 4. We implement all experiments on a single RTX 3090 GPU. ## 6 Results And Analysis | Methods | FLU | KC | EQ | |-------------|-------|------|------| | BART | 3.82 | 1.86 | 3.06 | | BART-Entity | 3.85 | 2.03 | 3.35 | | DFMED | 4.00 | 2.14 | 3.61 | | Gold | 4.12 | 3.97 | 4.35 | ## 6.1 Automatic Evaluation The overall comparison of DFMED and other baseline models on the MedDG dataset is illustrated in Table 1, and the KaMed dataset is in Table 2. The observations from the comparison are as follows: (1) Our proposed framework DFMED outperforms these baseline models in most metrics. Specifically, on the MedDG dataset, it is 8.42%, 6.15%, 4.11%, and 0.79% higher than the best baseline model, BART-Entity, on B-1, B-2, B-4, and R-1. On the KaMed dataset, DFMED outperforms BART by 6.58%, 4.54%, 3.12%, and 0.37% on B1, B-2, B-4, and R-1, indicating better similarity to the content of ground truth responses. Besides, on the MedDG dataset, DFMED exceeds BART-Entity by 3.00% on E-F1, meaning that it can generate responses with more accurate entity mentions. 
The above increases are because DFMED learns the transition of a medical entity flow and a dialogue act flow and predicts the entities and acts to guide response generation. Moreover, interweaving two flows enhances the prediction of entities and acts. (2) DFMED effectively fuses the guidance from | Methods | MedDG | KaMed | | | | | |---------------------------|---------|---------|-------------|-------|-------|-------| | Weighted-F1 | R@20 | B-4 | Weighted-F1 | R@20 | B-4 | | | DFMED | 62.83 | 56.53 | 22.53 | 55.81 | 52.23 | 20.76 | | w/o Flow Modeling | 62.09 | 54.74 | 22.11 | 55.16 | 51.31 | 20.21 | | w/o Interweaving | 62.13 | 54.98 | 22.19 | 55.17 | 51.44 | 20.38 | | w/o Entity attends to Act | 62.37 | 55.75 | 22.34 | 55.43 | 51.84 | 20.62 | | w/o Act attends to Entity | 62.43 | 55.91 | 22.40 | 55.62 | 52.10 | 20.67 | Table 4: Results of Dual Flow Modeling with ablation. The best results are in **bold**. medical entities and dialogue acts. Compared to GPT-2 and BART, which have a performance drop on BLEU with the incorporation of medical entities (i.e., **-Entity**), DFMED shows increases in all these metrics. The main reason is that the force incorporation of entities may reduce response fluency. In DFMED, the dialogue act guidance influences attention to entities in the encoder and implicitly prevents the enforcement. The gate mechanism in the decoder controls the proportion of information from entities and acts. Besides, dialogue acts also provide essential content for responses without entity hints. This improvement is significant as about half of the responses in datasets do not match an entity in CMeKG via automatic string matching. ## 6.2 Human Evaluation We select methods with high accuracy on the MedDG dataset to conduct a human evaluation, as shown in Table 3. Our framework displays an overall better response quality. Especially on the EQ, DFMED performs significantly better than baselines due to the incorporation of dialogue acts. The Cohen's Kappa coefficient equals 0.53 and indicates a moderate agreement (Cohen, 1968). ## 6.3 Analysis Of Dual Flow Modeling To further explore the effectiveness of our method, we investigate the following variants of DFMED: (1) **w/o Flow Modeling**, where we use a context state produced by mean pooling of all hidden states of dialogue context tokens to rank entities and predict dialogue acts. (2) **w/o Interweaving**, which removes the interweaving between entities and acts. (3) **w/o Entity attends to Act**, which removes the interweaving from act sequences. (4) **w/o Act attends to Entity**, which removes the interweaving from sequential entity graphs. Table 4 shows the ablation study results. We observe drops in all metrics with the ablation variants, indicating the effectiveness of our proposed module. Specifically, the result of w/o Flow Modeling significantly drops on the entity recall rate and slightly drops on the overall act prediction F1 compared to the full model. It demonstrates that flow modeling can be necessary for learning the transition of entities and acts in dialogues. Besides, comparison among variants of the interweaver illustrates that incorporating sequential entity graphs and act sequences assists in the transition. Notably, the act sequence is more conducive to the transition of the entity and act flows. It may be because the entity graph introduce noise entities from patient utterances. These entities are more variable and can be deviated from the main content. ## 6.4 Case Study Patient: Hello, doctor! 
I feel a little stomachache and suffocated. The stomachache is intermittent, sometimes it hurts, sometimes it doesn't. When I am not active, the pain is not obvious, but when I move, bloating makes me sick. I still have diarrhea. :( Doctor: Hi, is it colic? Does stomachache have anything to do with eating? Patient: No! It hurts before and after eating. ... Patient: I've never had a gastroscopy. Gold Response: You may have gastritis or stomach cramps and need to do a gastroscopy or a barium meal first! If diagnosed with the above diseases, you can take omeprazole, Kangfuxin liquid, and belladonna tablets following specific instructions. BART: I suggest you try some omeprazole and hydrotalcite chewable tablets. BART-Entity: It is recommended to have a gastroscopy to see if there is any history of gastritis or gastric ulcer. Selected Entities: gastritis, gastroscopy, omeprazole,...; Predicted Acts: [diagnosis], [prescription], [test]. Ours (DFMED): [You may have gastritis or gastroesophageal reflux.] [It is recommended to take omeprazole, mosapride, hydrotalcite.] [If the symptoms are not relieved, it is recommended to do a gastroscopy.] A case generated by the above methods is illustrated in Table 5. Compared to baseline models, DFMED can produce responses more consistent with the dialogue context reflected by medical entities and dialogue acts. We can observe that the dual flow modeling module correctly predicts all acts and selects several medical entities, although two medications (e.g., "mosapride") are different from ground truth mentions. Then, the beneficial guidance from medical entities and acts is employed to generate the next response. The response generation module fuses the entities and acts and produces the response containing these two hints. ## 7 Conclusion In this paper, we propose a dual flow enhanced medical dialogue generation framework, DFMED, that models a medical entity flow and a dialogue act flow to improve the relevant entity selection and dialogue act prediction. Besides, we design an interweaver to strengthen interactions of two flows. The selected entities and predicted acts are applied to guide response generation. Experiments validate the effectiveness of our DFMED on two datasets. ## Limitations Although our proposed framework beats several baseline methods for medical dialogue generation, there is still room for progress. We exploit an entity flow and a dialogue act flow to improve dialogue understanding and guide response generation. However, our summarized dialogue acts are limited in the types and granularity of functions they denote. We can manually annotate more medical-related dialogue acts in our future research following the SOAP notes. Besides, more medical knowledge with different formats, such as medical articles and medical examination reports, can be incorporated. Finally, it is crucial to recognize the potential risks associated with system utilization and the possibility of patient privacy leakage. A collaborative approach involving both dialogue systems and medical professionals should be considered. This will ensure that responses are endorsed by physicians and stringently overseen by reliable authorities. ## Ethics Statement Our proposed system aims to provide medical services, such as diagnosis and prescription, for patients who suffer from certain diseases. All datasets have been anonymized when released in dataset papers. 
However, since we train the model with limited and incomplete samples in two datasets, the generated responses may involve misleading information about diagnosis, treatment, and precautions. We recommend that users adopt the system as an auxiliary tool and go to the hospital for help if necessary. Besides, when interacting with the system, there is a risk of sensitive information leak (e.g., gender as reported by users). It can be addressed by adopting anonymous technology in the future. Thus, we strongly advise users to consider the ethical implications of the generated responses carefully. Furthermore, the scientific artifacts that we used are freely available for research, including NLTK, ROUGE, Transformers, and other GitHub codes. And this paper's use of these artifacts is consistent with their intended use. ## Acknowledgment This work was supported by the Research Grants Council of Hong Kong (15207920, 15207821, 15207122) and National Natural Science Foundation of China (62076212). ## References Susan Cameron and Imani Turtle-Song. 2002. Learning to write case notes using the soap format. Journal of Counseling & Development, 80(3):286–292. Junying Chen, Dongfang Li, Qingcai Chen, Wenxiu Zhou, and Xin Liu. 2021. Diaformer: Automatic diagnosis via symptoms sequence generation. *ArXiv*, abs/2112.10433. Jacob Cohen. 1968. Weighted kappa: nominal scale agreement provision for scaled disagreement or partial credit. *Psychological bulletin*, 70(4):213. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Zhenfeng He, Yuqiang Han, Zhenqiu Ouyang, Wei Gao, Hongxu Chen, Guandong Xu, and Jian Wu. 2022. Dialmed: A dataset for dialogue-based medication recommendation. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 721–733. International Committee on Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,* ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics. Dongdong Li, Zhaochun Ren, Pengjie Ren, Zhumin Chen, Miao Fan, Jun Ma, and Maarten de Rijke. 2021. Semi-supervised variational reasoning for medical dialogue generation. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 544–554. ACM. Kangenbei Liao, Qianlong Liu, Zhongyu Wei, Baolin Peng, Qin Chen, Weijian Sun, and Xuanjing Huang. 2020. 
Task-oriented dialogue system for automatic disease diagnosis via hierarchical reinforcement learning. *ArXiv*, abs/2004.14254. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Shuai Lin, Pan Zhou, Xiaodan Liang, Jianheng Tang, Ruihui Zhao, Ziliang Chen, and Liang Lin. 2021. Graph-evolving meta-learning for low-resource medical dialogue generation. In *Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, ThirtyThird Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021*, pages 13362–13370. AAAI Press. Xinzhu Lin, Xiahui He, Qin Chen, Huaixiao Tou, Zhongyu Wei, and Ting Chen. 2019. Enhancing dialogue symptom diagnosis with global attention and symptom graph. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 5033–5042, Hong Kong, China. Association for Computational Linguistics. Wenge Liu, Yi Cheng, Hao Wang, Jianheng Tang, Yafei Liu, Ruihui Zhao, Wenjie Li, Yefeng Zheng, and Xiaodan Liang. 2022. "my nose is running.""are you also coughing?": Building A medical diagnosis agent with interpretable inquiry logics. *ArXiv*, abs/2204.13953. Wenge Liu, Jianheng Tang, Xiaodan Liang, and Qingling Cai. 2021. Heterogeneous graph reasoning for knowledge-grounded medical dialogue system. *Neurocomputing*, 442:260–268. Wenge Liu, Jianheng Tang, Jinghui Qin, Lin Xu, Zhen Li, and Xiaodan Liang. 2020. Meddg: A large-scale medical consultation dataset for building medical dialogue system. *ArXiv*, abs/2010.07497. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *7th International* Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Mark B Mengel, Warren Holleman, and Scott A Fields. 2002. *Fundamentals of clinical practice*. Springer Science & Business Media. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 3776–3784. AAAI Press. Jonathan Silverman, Suzanne Kurtz, and Juliet Draper. 2016. *Skills for communicating with patients*. crc press. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In *Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information* Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112. Quan Tu, Shen Gao, Yanran Li, Jianwei Cui, Bin Wang, and Rui Yan. 2022. 
Conversational recommendation via hierarchical information modeling. In *SIGIR '22:* The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, July 11 - 15, 2022, pages 2201–2205. ACM. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In *6th International* Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Zhongyu Wei, Qianlong Liu, Baolin Peng, Huaixiao Tou, Ting Chen, Xuanjing Huang, Kam-fai Wong, and Xiangying Dai. 2018. Task-oriented dialogue system for automatic diagnosis. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201–207, Melbourne, Australia. Association for Computational Linguistics. Yuan Xia, Jingbo Zhou, Zhenhui Shi, Chao Lu, and Haifeng Huang. 2020. Generative adversarial regularized mutual information policy gradient framework for automatic diagnosis. In *The Thirty-Fourth AAAI* Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 1062–1069. AAAI Press. Lin Xu, Qixian Zhou, Ke Gong, Xiaodan Liang, Jianheng Tang, and Liang Lin. 2019. End-to-end knowledge-routed relational dialogue system for automatic diagnosis. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7346–7353. AAAI Press. Guojun Yan, Jiahuan Pei, Pengjie Ren, Zhaochun Ren, Xin Xin, Huasheng Liang, Maarten de Rijke, and Zhumin Chen. 2022. Remedi: Resources for multidomain, multi-service, medical dialogues. In SIGIR '22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, July 11 - 15, 2022, pages 3013–3024. ACM. Guangtao Zeng, Wenmian Yang, Zeqian Ju, Yue Yang, Sicheng Wang, Ruisi Zhang, Meng Zhou, Jiaqi Zeng, Xiangyu Dong, Ruoyu Zhang, Hongchao Fang, Penghui Zhu, Shu Chen, and Pengtao Xie. 2020. MedDialog: Large-scale medical dialogue datasets. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 9241–9250, Online. Association for Computational Linguistics. Jianqiao Zhao, Yanyang Li, Wanyu Du, Yangfeng Ji, Dong Yu, Michael R. Lyu, and Liwei Wang. 2022. Floweval: A consensus-based dialogue evaluation framework using segment act flows. *CoRR*, abs/2202.06633. Meng Zhou, Zechen Li, Bowen Tan, Guangtao Zeng, Wenmian Yang, Xuehai He, Zeqian Ju, Subrato Chakravorty, Shu Chen, Xingyi Yang, Yichen Zhang, Qingyang Wu, Zhou Yu, Kun Xu, Eric Xing, and Pengtao Xie. 2021. On the generation of medical dialogs for COVID-19. 
In *Proceedings of the 59th* Annual Meeting of the Association for Computational ## A Appendix A.1 Details Of Packages A.2 Details Of Dialogue Acts | Dialogue Acts | MedDG | KaMed | |-------------------------------|---------|---------| | Inquire | 25.49% | 20.95% | | Make a diagnosis | 6.72% | 8.64% | | Prescribe medications | 10.12% | 13.74% | | State a required medical test | 4.25% | 8.29% | | Provide daily precautions | 7.51% | 5.29% | | Inform | 29.91% | 30.04% | | Chitchat | 15.98% | 13.04% | Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 886–896, Online. Association for Computational Linguistics. We use the NLTK package in version 3.4.1 for calculating BLEU scores, the Pyrouge package in version 0.1.3 for calculating ROUGE scores, and the Transformers package in version 4.21.3. Table 6: The proportion of each dialogue act in the MedDG and KaMed datasets. In this section, we will describe the detail of our summarized dialogue acts. The seven dialogue acts can be divided into two types: (i) medical-related functions and (ii) general open-domain dialogue acts. The specific meaning of each act is interpreted as follows: Medical-related functions. (1) **Inquire**. The doctor asks questions about the history of the present illness (e.g., the location and duration of one symptom), previous surgery and medical conditions, current medications, allergies, etc. It corresponds to the "Subjective" function of the SOAP note (Cameron and Turtle-Song, 2002). (2) **Make** a diagnosis. The doctor makes a differential diagnosis based on historical dialogue context. It corresponds to the "Assessment" function of the SOAP note. (3) **Prescribe medications**. The doctor provide medication names and instructions. (4) State a required medical test. The doctor explains which tests are required and why each one was chosen to resolve diagnostic ambiguities; besides, what the next step would be if the results are positive or negative. (5) **Provide daily precautions**. The doctor explains the things that need to be paid attention to every day. Acts (3), (4), and (5) correspond to the "Plan" function of the SOAP note. General open-domain dialogue acts. (6) **Inform**. The doctor tells the patient some information that he assumes to be correct. (7) **Chitchat**. The doctor expresses welcome, goodbye, apology and thanks to the patient. The proportion of each act in the MedDG and KaMed datasets is shown in the Table 6. Examples for each dialogue act are listed as follows: 1. **Inquire**: "Hello, do you usually have diarrhea?", "Have you taken any medicine before?"; 2. **Make a diagnosis**: "You may have gastroenteritis", "You may have allergic rhinitis, which is easy to get sick this season."; 3. **Prescribe medications**: "Please take Amoxicillin capsule 1.0g 2 times a day and Clarithromycin tablet 0.5g 2 times a day. Both are taken after meals."; 4. **State a required medical test**: "If you often feel sick to your stomach, you can do a gastroscopy.", "If your condition does not improve, I suggest you do a gastroscopy and Helicobacter pylori detection."; 5. **Provide daily precautions**: "Please drink plenty of water, eat more fruits and vegetables. And try to have a morning poop."; 6. **Inform**: "Migraine is a primary headache whose etiology and pathogenesis are not fully understood.", "It can be effective in five days, and individual differences are relatively large."; 7. 
**Chitchat**: "You're welcome, and I wish you a speedy recovery!", "Thank you so much", "Hello!"; ## A.3 Extra Process Of The Kamed Dataset Patient: Can I apply a facial mask if I have pimples on my face? Doctor: Hello, do you have a picture to show? How long has it been? Are there any discomforts? Patient: The image is not available for privacy concerns. Doctor: Based on your description, it seems like acne, also known as pimples or blackheads. ... and "The voice is not available for privacy concerns" since these dialogues are incomplete and hard to understand. We have obtained approximately 51% samples for our experiments. The reason is as follows. The KaMed dataset is collected from an online medical consultation platform. The raw dialogues contain multi-modal information such as pictures (e.g., medical examination reports and photos of body parts) and voice messages. These messages are crucial for dialogue development since the doctor will respond to the picture or voice. For example, they will discuss the result of an examination report, or patients will directly use voice messages instead of texts to express their condition. However, this information is replaced by meaningless text such as "The image is not available" for privacy concerns when collecting the dataset. Thus, the dialogue context is incomplete and difficult to understand. We argue that filtering dialogues that contain these texts can help us build a more robust model for dialogue understanding. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 8 Limitations. ✓ A2. Did you discuss any potential risks of your work? 9 Ethics Statement. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 0 Abstract; 1 Introduction. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 5.1 Dataset; 5.2 Baseline models; 5.3 Evaluation Metrics. ✓ B1. Did you cite the creators of artifacts you used? 5.1 Dataset; 5.2 Baseline models; 5.3 Evaluation Metrics. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 9 Ethics Statement. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 9 Ethics Statement. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 9 Ethics Statement. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 5.1 Dataset. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 5.1 Dataset; Appendix A.2 Details of Dialogue Acts. 
## C ✓ **Did You Run Computational Experiments?** 5 Experiments. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 5.4 Implementation Details. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5.4 Implementation Details. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 6 Results and Analysis. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A.1 Details of Package. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
wang-etal-2023-listen
Listen, Decipher and Sign: Toward Unsupervised Speech-to-Sign Language Recognition
https://aclanthology.org/2023.findings-acl.424
Existing supervised sign language recognition systems rely on an abundance of well-annotated data. Instead, an unsupervised speech-to-sign language recognition (SSR-U) system learns to translate between spoken and sign languages by observing only non-parallel speech and sign-language corpora. We propose speech2sign-U, a neural network-based approach capable of both character-level and word-level SSR-U. Our approach significantly outperforms baselines directly adapted from unsupervised speech recognition (ASR-U) models by as much as 50% recall@10 on several challenging American sign language corpora with various levels of sample sizes, vocabulary sizes, and audio and visual variability. The code is available at \url{https://github.com/cactuswiththoughts/UnsupSpeech2Sign.git}.
# Listen, Decipher And Sign: Toward Unsupervised Speech-To-Sign Language Recognition Liming Wang1, Junrui Ni1, Heting Gao1, Jialu Li1, Kai Chieh Chang1**, Xulin Fan**1, Junkai Wu1**, Mark Hasegawa-Johnson**1and **Chang D. Yoo**2 1University of Illinois Urbana-Champaign 2Korea Advanced Institute of Science Technology {lwang114,jhasegaw}@illinois.edu, cd_yoo@kaist.ac.kr ## Abstract Existing supervised sign language recognition systems rely on an abundance of well-annotated data. Instead, an unsupervised speech-to-sign language recognition (SSR-U) system learns to translate between spoken and sign languages by observing only non-parallel speech and signlanguage corpora. We propose speech2signU, a neural network-based approach capable of both character-level and word-level SSRU. Our approach significantly outperforms baselines directly adapted from unsupervised speech recognition (ASR-U) models by as much as 50% recall@10 on several challenging American sign language corpora with various levels of sample sizes, vocabulary sizes, and audio and visual variability. The code is available at cactuswiththoughts/UnsupSpeech2Sign.git. ## 1 Introduction Many hearing-impaired people communicate natively in sign language (SL); for them, SL communication is as effortless as native spoken communication is for normal-hearing people. However, when it comes to a conversation between a hearingimpaired and a normal hearing, tremendous barriers exist for several reasons. First, there is a shortage of people who are bilingual in spoken and sign languages. Automatic sign language recognition models exists (Koller et al., 2016; Huang et al., 2018) but are fully supervised and require a large number of annotated data, which are hard to acquire. As a result, such systems are often limited to a small vocabulary. On the other hand, untranscribed speech audio and SL videos are quite common on the Internet, presenting an exciting possibility: Given a non-parallel pair of speech and sign language datasets, can we train a model to translate between spoken and sign languages? This task, we called *unsupervised speech-to-sign language* recognition (SSR-U), is analogous to well-known problems such as unsupervised machine translation (MT-U) (Ravi and Knight, 2011; Artetxe et al., 6785 2018a; Lample et al., 2018) and unsupervised automatic speech recognition (ASR-U) (Liu et al., 2018; Chen et al., 2019; Baevski et al., 2021), albeit with a few new challenges. First of all, in the case of SSR-U, both modalities are continuous as opposed to at least one of them being discrete in the case of ASR-U and MT-U. Consequently, the matching process is much more challenging due to higher within and cross-modal variability. Further, most sign language and spoken language can only be matched on the *word* level as opposed to the subword level in the case of ASR-U. Not only does the space of possible mappings explode combinatorially, but less training data and fewer temporal constraints are also available to recover the correct mapping. In this paper, we develop a neural network-based framework, speech2sign-U, for both character-level (with fingerspelling sequence) and word-level SSRU. It achieves promising results on datasets with up to around 900 ASL signs. ## 2 Problem Formulation Suppose we have a corpus of unlabeled speech recordings sampled from the random process A = (A1, · · · , AT ) and another separately collected corpus of unlabeled sign language videos sampled from the random process V = (V1, · · · , VL). 
Both A and V contain the *same* semantic information but different para-linguistic information such as speaker/signer identity and prosody. In other words, if we filter out the para-linguistic information and retain the semantic information as X := X(A) = (X1, · · · , XT ) ∼ PX for the speech and Y := Y (V ) = (Y1, · · · , YL) ∼ PY for the videos, we can find a *generator* function G : X T7→ Y L such that Y = G(X). Since the corpora are unpaired, we cannot estimate G directly from samples, and the goal of SSR-U is to "decipher" it using only the relations between the speech-only and video-only distributions, PX and $$P Y^{*}$$ $$\sum_{x\in\mathbb{X}^{T}}P_{X}(x)G(y|x)=P_{Y}(y),\qquad\quad(1)$$ for all sign language unit sequences y ∈ Y L, where G(y|x) = 1 if and only if y = G(x). ## 3 Proposed Methods 3.1 Character-Level Speech2Sign-U In the case of character-level speech2sign-U, V is drawn from a collection of unlabeled fingerspelling sequences, where each Viis the hand gesture for a character. In this case, we adopt a similar architecture as wav2vec-U (Baevski et al., 2021). Sign video preprocessing Given a sign video v ∼ V , we obtain its visual features (v1, · · · , vL) by passing the raw video frames into a *local* feature extractor such as VGG19 or RCNN (Ren et al., 2015). The local features are then contextualized by a *sign language encoder*, consisting of a twolayer multilayer perceptron (MLP) and a one-layer uni-directional LSTM: $$c_{1},\cdots,c_{L}=\operatorname{LSTM}(v_{1},\cdots,v_{L}).\qquad(2)$$ The sign language encoder is then trained using contrastive predictive coding (CPC) (van den Oord et al., 2018): $$\mathcal{L}_{\text{CPC}}:=$$ $$-\mathbb{E}_{V}\left[\sum_{i,k}\log\frac{e^{c_{i}^{\top}\text{MLP}(v_{i+k})}}{\sum_{n\in\mathcal{N}_{i,k}}e^{c_{i}^{\top}\text{MLP}(n)}}\right],\tag{3}$$ where $\mathcal{N}_{i,k}$ is a set of negative samples chosen uni where Ni,k is a set of negative samples chosen uniformly at random from times other than i + k. Finally, we apply K-means clustering on (c1, · · · , cL) to obtain the *sign cluster units* Y := (y1, · · · , yL). Speech preprocessing As in wav2vec-U, for each utterance, we first use a voice activity detector (VAD) to remove silences between speech frames and randomly insert silences between word boundaries of the sign cluster sequence so that their silence distributions match. Next, we contextualize the raw speech frames using wav2vec 2.0 pretrained on LibriLight: $$(z_{1},\cdots,z_{T})=\mbox{\rm wav2vec2}(a_{1},\cdots,a_{T}).\tag{4}$$ Finally, we extract K-means clusters from (z1, · · · , zT ) and merge consecutive frames belonging to the same clusters to obtain the segment-level speech features (x1, · · · , xT ). Unsupervised training A convolutional generator G : X → Y then generates a sequence of cluster units (Yˆ1, *· · ·* , YˆL) = G(X) from the segment features X by sampling from the posterior probabilities at each segment i: $${\hat{Y}}_{i}\sim G_{i}(y_{i}|X):={\frac{\exp(\operatorname{Conv}_{i,y_{i}}(X))}{\sum_{k}\exp(\operatorname{Conv}_{i,k}(X))}}.\quad(5)$$ Then we adopt the generative adversarial network (GAN (Goodfellow et al., 2014)) objective by training a binary classifier D : Y 7→ [0, 1] to discriminate between the real cluster sequence and the generated one: min D max G−EX∼PX [log(1 − D(G(X)))] − EY ∼PY [log D(Y )] + λLgp + γLsp + ηLcd, (6) where Lgp,Lsp and Lcd stand for the gradient penalty, smoothness penalty and code diversity losses as defined in (Baevski et al., 2021). 
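To make the character-level training procedure concrete, the following is a minimal PyTorch sketch of Eqs. (5)-(6); it is not the authors' released code. A 1-D convolutional generator maps segment-level speech features to a per-segment posterior over sign cluster units, and a small convolutional discriminator scores real versus generated unit sequences. The gradient, smoothness and code-diversity penalties are omitted, soft generator outputs stand in for sampled units, and the module names, kernel sizes and shapes are illustrative assumptions.

```python
# Minimal sketch of the character-level adversarial objective in Eqs. (5)-(6).
# Not the released wav2vec-U / speech2sign-U code: the penalties (L_gp, L_sp, L_cd),
# segment merging and sampling are omitted; soft generator outputs are fed to
# the discriminator; module names, kernel sizes and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Conv generator: segment features -> posterior over sign cluster units (Eq. 5)."""
    def __init__(self, feat_dim: int, n_units: int):
        super().__init__()
        self.conv = nn.Conv1d(feat_dim, n_units, kernel_size=4, padding=2)

    def forward(self, x):                               # x: (B, T, feat_dim)
        logits = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return F.softmax(logits, dim=-1)                # (B, T', n_units)

class Discriminator(nn.Module):
    """Conv discriminator: unit sequence -> probability of being real."""
    def __init__(self, n_units: int):
        super().__init__()
        self.conv = nn.Conv1d(n_units, 1, kernel_size=6, padding=3)

    def forward(self, y):                               # y: (B, L, n_units)
        return torch.sigmoid(self.conv(y.transpose(1, 2))).mean(dim=-1)  # (B, 1)

def gan_step(gen, disc, speech_feats, real_units, n_units, opt_g, opt_d, eps=1e-7):
    """One alternating update on a batch of unpaired speech and sign cluster units."""
    fake = gen(speech_feats)
    real = F.one_hot(real_units, n_units).float()
    # discriminator: push D(real) -> 1, D(fake) -> 0
    d_loss = -(torch.log(disc(real) + eps).mean()
               + torch.log(1 - disc(fake.detach()) + eps).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: fool the discriminator
    g_loss = -torch.log(disc(fake) + eps).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

In the paper, the same objective is trained with the additional gradient, smoothness and code-diversity penalties and the training setup of wav2vec-U (Baevski et al., 2021).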
## 3.2 Word-Level Speech2Sign-U

Word-level speech2sign-U is more challenging than character-level: the GAN objective in Eq. (6) fails to converge for vocabulary sizes of 100 or larger, apparently due to variability in the audio and video signals. Therefore, we instead adopt a novel GAN-*free* architecture trained to match marginals between the generated and real probability distributions as shown in Fig. 1.

Preprocessing We extract the sign video and speech features similar to Section 3.1, except with a few modifications: first, we assume the word-level boundaries for both the speech and sign videos are available, which may be ground truth or boundaries detected using unsupervised word segmentation algorithms from phoneme boundaries (Kreuk et al., 2020; Bhati et al., 2021; Cuervo et al., 2022). Then we compute the segment-level speech features by averaging the frame-level wav2vec 2.0 features within each word. Further, we use I3D (Carreira and Zisserman, 2017) as the local feature extractor and average the pretrained video feature frames within each word-level sign video segment. Lastly, we perform K-means clustering on the segment features and use the output cluster units as inputs X to the speech generator, as we found that quantized speech features work better than continuous features.

![2_image_0.png](2_image_0.png)

Unsupervised unigram matching Similar to Section 3.1, we seek to match the probability distributions in the two modalities as our unsupervised training criterion. Instead of the convolutional generator in Eq. (5), we use a linear generator for each segment i:

$$G(y_{i}|x_{i}):=\frac{\exp(W_{y_{i}}x_{i})}{\sum_{y^{\prime}\in\mathbb{Y}}\exp(W_{y^{\prime}}x_{i})}.\qquad(7)$$

Eq. (1) can now be achieved by minimizing the ℓ1 distance between the empirical positional unigram probabilities of the generated and real sign cluster units:

$${\mathcal{L}}_{\mathrm{pos}}(G)=\sum_{i=1}^{L}\|{\hat{P}}_{X_{i}}G-{\hat{P}}_{Y_{i}}\|_{1},\qquad(8)$$

where P̂_{X_i} and P̂_{Y_i} are empirical unigram distributions for the speech and sign units, and G ∈ R^{|X|×|Y|} := (G(y|x))_{x∈X, y∈Y}. Note that such an objective is typically optimized implicitly by a GAN, but we found that the explicit formula not only avoids the need for a discriminator but also leads to more stable training and better performance.

Unsupervised skip-gram matching Positional unigram constraints alone may not be sufficient for word-level SSR-U. Therefore, we add additional constraints using skip-grams. Define the k-step skip-gram to be the joint probability

$$\Pr[Z_{1}=z,Z_{k+1}=z^{\prime}]:={\frac{\sum_{i=1}^{L-k}P_{Z_{i}Z_{i+k}}(z,z^{\prime})}{L-k}}=:(P_{k}^{ZZ^{\prime}})_{zz^{\prime}}.\qquad(9)$$

Then, applying Eq. (1) again, the skip-grams for the generated and real sign cluster units satisfy

$$G^{\top}P_{k}^{XX^{\prime}}G=P_{k}^{YY^{\prime}},\ 1\leq k\leq K-1.\qquad(10)$$

Again, we approximate this constraint by minimizing their ℓ1 distance:

$${\mathcal{L}}_{\mathrm{skip}}(G)=\sum_{k=1}^{K}\|G^{\top}{\hat{P}}_{k}^{XX^{\prime}}G-{\hat{P}}_{k}^{YY^{\prime}}\|_{1}.\qquad(11)$$

The overall loss for the word-level speech2sign-U is then

$${\mathcal{L}}_{\mathrm{sp2sign-U,word}}={\mathcal{L}}_{\mathrm{pos}}+\lambda{\mathcal{L}}_{\mathrm{skip}}.\qquad(12)$$

Speech-to-sign retriever Given a query speech audio (sign video), we would like to use it to retrieve its translation from a database of sign videos (speech audios).
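To make the word-level objective above concrete, here is a minimal PyTorch sketch of Eqs. (8), (11) and (12). It assumes the empirical positional-unigram and skip-gram statistics have already been estimated from the unpaired speech and sign corpora, represents the linear generator of Eq. (7) directly as a row-stochastic |X| × |Y| matrix (since the inputs are quantized cluster units), and uses illustrative names and sizes throughout; it is a sketch, not the released implementation.

```python
# Minimal sketch of the GAN-free word-level objective in Eqs. (8), (11), (12).
# Assumptions: empirical statistics are precomputed from the unpaired corpora,
# the linear generator of Eq. (7) is represented as a row-stochastic |X| x |Y|
# matrix (inputs are quantized cluster units), and names/sizes are illustrative.
import torch
import torch.nn as nn

class LinearGenerator(nn.Module):
    def __init__(self, n_x: int, n_y: int):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(n_x, n_y))

    def prob(self):
        return torch.softmax(self.w, dim=-1)             # G: (|X|, |Y|), rows sum to 1

def matching_loss(gen, pos_x, pos_y, skip_x, skip_y, lam=1.0):
    """pos_x: (L, |X|), pos_y: (L, |Y|) positional unigrams;
    skip_x[k]: (|X|, |X|), skip_y[k]: (|Y|, |Y|) k-step skip-gram matrices."""
    G = gen.prob()
    loss_pos = (pos_x @ G - pos_y).abs().sum()           # Eq. (8)
    loss_skip = sum((G.T @ px @ G - py).abs().sum()      # Eq. (11)
                    for px, py in zip(skip_x, skip_y))
    return loss_pos + lam * loss_skip                    # Eq. (12)

# toy usage with random statistics (|X| ~ 4x vocabulary size, K = 4 skip-gram steps)
n_x, n_y, L, K = 400, 100, 10, 4
gen = LinearGenerator(n_x, n_y)
opt = torch.optim.Adam(gen.parameters(), lr=0.4)
pos_x = torch.softmax(torch.randn(L, n_x), -1)
pos_y = torch.softmax(torch.randn(L, n_y), -1)
skip_x = [torch.softmax(torch.randn(n_x * n_x), -1).view(n_x, n_x) for _ in range(K)]
skip_y = [torch.softmax(torch.randn(n_y * n_y), -1).view(n_y, n_y) for _ in range(K)]
opt.zero_grad()
matching_loss(gen, pos_x, pos_y, skip_x, skip_y).backward()
opt.step()
```

In practice, the statistics would be accumulated once over the training corpora and the generator optimized with Adam as described in Appendix A.1.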
To this end, we use the generator | # signs | # train sents | # valid | # test | | |--------------------------|-----------------|-----------|----------|------| | Character-level datasets | | | | | | FS LibriSpeech | 87k | 287k | 5.5k | 5.6k | | FS LJSpeech | 87k | 13.1k | 348 | 523 | | Word-level datasets | | | | | | ASL Libri. 100 | 2.6k | 14.1k | 56 | 54 | | ASL Libri. 200 | 4.4k | 56.2k | 291 | 311 | | ASL Libri. 500 | 8.2k | 137k | 941 | 956 | | ASL Libri. 1k | 11.6k | 290k | 2.7k | 2.5k | to compute a similarity score between each speech sequence X and sign sequence Y as: $$\mathrm{Sim}(X,Y)=-\frac{1}{L}\mathrm{DTW}(G(X),Y),\tag{13}$$ where DTW(·, ·) is the dynamic time warping distance between two feature sequences with *cosine* distance as the frame-level metric, computed using the DTW library (Giorgino, 2009). ## 4 Experiments 4.1 Datasets The detailed statistics are shown in Table 1. Fingerspelling LibriSpeech To extract semantic units from the fingerspelling signs, we trained the visual CPC encoder on a sentence-level fingerspelling dataset constructed from the 960-hour LibriSpeech dataset and the Unvoiced dataset (Nagaraj, 2018). To construct the dataset, we replace each letter in the LibriSpeech transcript with an image of that letter's ASL Alphabet symbol chosen uniformly at random from Unvoiced. To study the effect of visual variability on SSR-U, we subset the ASL Alphabet images to 100, 300, 500, or 1000 images per letter sign. The dev-clean subset of LibriSpeech is used as the validation set. Fingerspelling LJSpeech We train our characterlevel model on another sentence-level fingerspelling dataset constructed from LJSpeech (Ito and Johnson, 2017) and the ASL Alphabet dataset similar to the fingerspelling LibriSpeech. ASL LibriSpeech For the word-level SSR-U, we construct another corpus using LibriSpeech for speech and MSASL (Joze and Koller, 2019) for word-level sign videos. Since many MSASL videos no longer exist on YouTube, only 11.6k out of 25k videos are downloaded. Further, due to the mismatch in vocabulary size, we use forced alignment information to filter out LibriSpeech words that don't appear in MSASL and keep sentences that are at least 5 words long. Next, for each word in each sentence, we pick a word-level sign video uniformly at random from MSASL. To study the effect of vocabulary size on our model, we follow the split provided by (Joze and Koller, 2019) to subset the data to a vocabulary size of 100, 200, 500 or 1000. ## 4.2 Overall Results Evaluation metrics We evaluate the performance of our systems using two metrics: the *unit* error rate (UER) is the average insertion I, deletion D, and substitution S error between the predicted and true visual cluster units, which may be character- or word-level units depending on the task: $$\mathrm{UER}={\frac{I+D+S}{3}}\times100.$$ The other metric we used to evaluate the speech-tosign (A→ V) and sign-to-speech (V→ A) retrieval tasks is *recall@*k (R@k) (k = 1, 5, 10), which is the percentage of hits in the top k results returned by the retriever. Character-level SSR-U The character-level results are shown in Table 2. To obtain retrieval results, we trained our own wav2vec-U 2.0 using the code released by the authors. Unfortunately, we were unable to achieve the same results they report in their paper. For our ASR-U experiments, wav2vec-U significantly outperforms wav2vec-U 2.0 in terms of both word error rates and retrieval tasks. 
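The recall@k numbers reported in the surrounding tables are produced by the DTW-based retriever of Eq. (13); a small self-contained sketch of that scoring and of recall@k is given below. It uses a plain dynamic-programming DTW with cosine frame distance in numpy rather than the DTW package cited above, assumes the gold candidate for query i is candidate i, and all function names are illustrative.

```python
# Small self-contained sketch of the retrieval scoring in Eq. (13) and recall@k.
# Uses a plain O(T*L) dynamic-programming DTW with cosine frame distance instead
# of the DTW package cited in the paper; names are illustrative and the gold
# candidate for query i is assumed to be candidate i.
import numpy as np

def cosine_dist(a, b):
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + 1e-8)
    return 1.0 - a @ b.T                               # (T, L) pairwise distances

def dtw_distance(x, y):
    D = cosine_dist(x, y)
    T, L = D.shape
    acc = np.full((T + 1, L + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, L + 1):
            acc[i, j] = D[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[T, L]

def similarity(gen_posteriors, sign_units):
    """Eq. (13): Sim(X, Y) = -DTW(G(X), Y) / L; sign_units are one-hot (or soft) vectors."""
    return -dtw_distance(gen_posteriors, sign_units) / len(sign_units)

def recall_at_k(sim_matrix, k):
    """sim_matrix[i, j] = Sim(query i, candidate j); hit if gold index i is in the top k."""
    topk = np.argsort(-sim_matrix, axis=1)[:, :k]
    return float(np.mean([i in topk[i] for i in range(sim_matrix.shape[0])]))
```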
For SSR-U, we compare our models with wav2vec-U (and 2.0) as well as a supervised image and caption retrieval model trained under a ranking-based criterion (Harwath et al., 2018). We replace their original CNN speech encoder with a two-layer MLP with hidden and output sizes of 256 and ReLU activation, and their VGG16 image encoder with a linear image encoder with an output size of 256. We found that our models with 100 and 300 images per letter achieve superior performances in terms of recall scores, even to the text-based wav2vec-U, but remain about 30% below the supervised topline. Notably, our model performs worse on the A → V direction than on the V → A direction, especially in terms of recall@1. This is perhaps due to significant insertion | A→ V | V→ A | | | | | | | | |-----------------------------------------|------------|------|------|------|-------|------|------|-------| | Model | Images/ltr | UER↓ | R@1↑ | R@5↑ | R@10↑ | R@1↑ | R@5↑ | R@10↑ | | Supervised speech-to-sign recognition | | | | | | | | | | (Harwath et al., 2018) | 1000 | - | 85.1 | 93.7 | 95.4 | 79.3 | 96.8 | 99.1 | | Unsupervised speech recognition | | | | | | | | | | wav2vec-U | - | 39.5 | 1.9 | 42.1 | 59.5 | 17.6 | 44.2 | 65.2 | | wav2vec-U 2.0 | - | 68.1 | 1.1 | 6.3 | 11.8 | 2.6 | 9.8 | 14.9 | | Unsupervised speech-to-sign recognition | | | | | | | | | | speech2sign-U | 100 | 43.1 | 1.7 | 42.1 | 63.4 | 27.5 | 62.7 | 78.4 | | speech2sign-U | 300 | 45.0 | 1.9 | 48.8 | 67.1 | 22.8 | 53.9 | 71.5 | | speech2sign-U | 500 | 46.2 | 1.0 | 33.1 | 57.2 | 31.3 | 57.0 | 72.8 | | speech2sign-U | 1000 | 48.6 | 1.3 | 43.8 | 63.8 | 32.5 | 58.7 | 71.5 | errors in the generated character sequence, which leads to many false positives during speech-to-sign retrieval. Word-level SSR-U The word-level results are shown in Table 3. To establish top-line results for error rates and retrieval recall scores, we train a word-level unsupervised speech recognition model, speech2text-U, using the same criterion as speech2sign-U in Eq. 12, except by replacing the sign cluster sequences obtained from clustering word-level sign video features (see Section 3.2) with the underlying textual word labels as the target random variable Y . At the same time, for the subset with a vocabulary size of 98, we compare the performance of our model that uses unsupervised unigram and skipgram matching with wav2vec-U, which uses a JSD GAN for distribution matching, to show our proposed training method significantly improves the word error rates and the recall scores for both retrieval directions. However, we still observe a large gap in recall between our unsupervised model and the supervised speech-to-image retrieval model (Harwath et al., 2018). The performance of both word-level ASR-U and SSR-U degrades as the vocabulary size increases. The unit error rate (UER) increases from 53.6% to 87.9%, the recall@1 of speech-to-sign (A→V) retrieval decreases from 69.6% to 12.1%, and the recall@1 of sign-to-speech (V→A) retrieval decreases from 71.4% to 10.9% as the vocabulary size increases from 98 to 877. Such performance degradation is much more significant than that of character-level SSR-U because the word modality involves extra morphological complexity on top of the phonological character modality. ![4_image_0.png](4_image_0.png) ## 4.3 Analysis Effect of skip-gram size The relation between recall@1, 5, 10 and skip-gram size K is shown in Figure 2. 
Increasing K generally improves all recall metrics for SSR-U by introducing more constraints to the generator mapping, though the performance starts to saturate at K = 4. Effect of the number of speech clusters We experiment with speech2sign-U models with speech cluster sizes |X| equal to 100, 200, 400, and a model that directly takes raw wav2vec 2.0 features as inputs (|X| = ∞), as shown in Figure 2. We found that the continuous model is significantly | A→ V | V→ A | | | | | | | | |-----------------------------------------|------------|------|------|------|-------|------|------|-------| | Model | Vocab size | UER↓ | R@1↑ | R@5↑ | R@10↑ | R@1↑ | R@5↑ | R@10↑ | | Supervised speech-to-sign recognition | | | | | | | | | | (Harwath et al., 2018) | 877 | - | 55.2 | 78.9 | 86.3 | 51.7 | 85.5 | 93.1 | | Unsupervised speech recognition | | | | | | | | | | wav2vec-U | 98 | 73.7 | 16.1 | 32.1 | 51.8 | 17.8 | 41.1 | 50.0 | | speech2text-U | 98 | 7.5 | 98.2 | 98.2 | 100 | 98.2 | 98.2 | 98.2 | | speech2text-U | 193 | 11.2 | 96.9 | 98.6 | 99.0 | 96.9 | 99.3 | 99.3 | | speech2text-U | 468 | 30.0 | 68.0 | 86.2 | 90.2 | 66.7 | 85.3 | 90.2 | | speech2text-U | 877 | 34.4 | 37.9 | 60.7 | 69.3 | 38.7 | 59.4 | 68.3 | | Unsupervised speech-to-sign recognition | | | | | | | | | | speech2sign-U | 98 | 53.6 | 69.6 | 96.4 | 98.2 | 71.4 | 96.4 | 100 | | speech2sign-U | 193 | 60.8 | 75.3 | 91.1 | 92.4 | 69.4 | 90.0 | 93.5 | | speech2sign-U | 468 | 73.2 | 56.5 | 76.8 | 83.6 | 47.6 | 74.5 | 83.5 | | speech2sign-U | 877 | 87.9 | 12.1 | 25.5 | 32.6 | 10.9 | 22.1 | 29.7 | Table 3: Overall speech2sign-U results on ASL LibriSpeech Video features Vocab Size A→ V R@1 R@5 R@10 VGG19 98 32.1 64.3 78.6 OpenPose 98 0.0 8.9 16.1 I3D RGB 98 **76.8** 89.3 96.4 193 66.0 86.3 91.4 877 1.1 3.0 5.1 | Video | Vocab | |----------------------------|---------| | features | Size | | I3D RGB I3D flow I3D joint | | 98 69.6 **96.4 98.2** 193 63.9 86.9 91.1 468 43.9 68.5 76.6 877 **12.1 25.5 32.6** 98 28.6 67.9 76.8 193 **75.3 91.1 92.4** 468 **56.5 76.8 83.6** 877 0.1 0.2 0.4 Table 4: Effect of the video features on ASL LibriSpeech with various vocabulary sizes worse than discrete models and |X| = 200 provides the most consistent recall scores across different skip-gram sizes. Effect of training objectives The effect of different training objectives including the default speech2sign-U loss (L1) in Eq. (12), the maximum mean discrepancy (MMD) GAN and the JensenShannon divergence (JSD) GAN is shown in Figure 3. For models trained with MMD and JSD Table 5: Effect of the speech segmentation using speech2sign-U on ASL LibriSpeech 100 GAN loss, we instead feed the generator outputs to a discriminator with a single convolutional layer while keeping all other settings the same. Our experiment indicates that the GAN-free approach is consistently more stable and accurate compared to the GAN-based approach. | Boundary | Word | A→ V | | | |---------------|-------------|--------|------|------| | Label | Boundary F1 | R@1 | R@5 | R@10 | | speech2text-U | | | | | | Word | 100 | 98.2 | 98.2 | 100 | | Phoneme | 88.1 | 78.6 | 98.2 | 98.2 | | speech2sign-U | | | | | | Word | 100 | 69.6 | 96.4 | 98.2 | | Phoneme | 88.1 | 57.1 | 82.1 | 91.1 | Effect of visual features The effect of visual features is shown in Table 4. We experimented with different types of visual features on ASL LibriSpeech with different vocabulary sizes such as VGG19 and the pose keypoint features from OpenPose (Cao et al., 2019). 
For the OpenPose features, we extract the keypoints from each video frames and re-sample each sign video feature frames to 30 frames as the segment-level feature. I3D architecture (Carreira and Zisserman, 2017) significantly ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) outperforms VGG19 and OpenPose as a feature extractor, demonstrating the importance of temporal information for SSR-U. We also found that I3D with optical flow features performs better than I3D with raw RGB inputs for most vocabulary sizes. Further, we found that concatenating the features from the RGB-based and flow-based I3Ds is beneficial for vocabulary sizes 193 and 468 but not when the vocabulary size is too small or too large, even causing training instability for vocabulary size 877. Effect of segmentations The effect of gold and predicted speech segmentation for word-level SSRU is shown in Table 5. For models trained with phoneme boundaries, we obtain predicted word segmentations using a CPC-based unsupervised segmentation system (Kreuk et al., 2020) with meanpooled phoneme-level wav2vec 2.0 features as inputs. The convolutional encoder in the original model is replaced by a two-layer MLP with 256 output dimensions trained on ASL LibriSpeech 100 for 200 epochs. This yields an exact-match boundary F1 of 88%. Using such detected word boundaries, we found about a 20% drop in recall@1 for speech2text-U and an 8-17% relative drop in recall@1,5,10 for speech2sign-U. Still, our model remains much better than the wav2vec-U baseline with ground-truth word boundaries, demonstrating its robustness to segmentation noise. Effect of word frequencies We plotted the F1 score of the first 100 word classes ranked by frequency in Figure 4. For ASL LibriSpeech 100 and 500, while noisy, it is not hard to observe that the F1 score positively correlates with word frequency in a somewhat exponential fashion. Starting with F1 above 0.55 for the most frequent word, the performance quickly drops below 0.2 at around the 30th most frequent word. This trend is less conclusive on ASL LibriSpeech 1000 with generally low F1 scores, but the highest F1 scores are still observed for the most frequent words. The trend is also illustrated by the DTW alignment of a speechvideo pair correctly retrieved by speech2sign-U in Figure 5. In our example, speech2sign-U mistakes the sign "more" for more frequent signs such as "when" and "have". Additional factors such as *visual similarity* also play a role in the case of "more" and "when", as both signs involve touching the tips of both hands. Such factors may explain the fluctuations in Figure 4. More error analysis can be found in Appendix A. ## 5 Related Works Sign language recognition One way to bridge between sign language and written/spoken language is to build a sign language recognition (SLR) system trained on parallel sign language and text corpora. The earliest attempts tried to recognize fingerspelling gestures using hand-tracking signals from wired gloves (Grimes, 1983; Charayaphan and Marble, 1992). Later works introduced vision to either correct the errors made by the handtracking model, or to serve as a cheaper and lessintrusive alternative (Tamura and Kawasaki, 1988). 
Focusing on the problem of *isolated* sign recognition and treating it as a classification task, a variety of statistical and deep learning models have been proposed, such as HMM (Starner and Pent- ![7_image_0.png](7_image_0.png) land, 1997), 3D-CNN (Huang et al., 2015), twostream inflated 3D (I3D) CNN (Carreira and Zisserman, 2017; Joze and Koller, 2019), and transformer (Bohácek and Hrúz ˇ , 2022), among others. To handle multi-sign video sequences, (Koller et al., 2016, 2017, 2018, 2019) reformulate the problem as a sequence labeling problem and develop various systems based on 2D-CNN-HMM hybrid models for German sign language recognition. Later works improve the alignment mechanism of previous models using soft DTW (Huang et al., 2018), CTC with DTW contraints (Pu et al., 2019) or pseudo-labeling refinement (Zhou et al., 2019). While some aim to directly use raw RGB images or generic action features like optical flow as inputs (Koller et al., 2016; Huang et al., 2018; Joze and Koller, 2019), others have found domainspecific features like whole-body and hand keypoints to be more reliable and robust (Bohácek ˇ and Hrúz, 2022). Thanks to the rapid development of the field, there are now many word-level and sentence-level datasets available in different SLs, and we refer to (Joze and Koller, 2019) for a more comprehensive review. Unsupervised cross-modal alignment The task of translating between two languages without parallel corpora has been demonstrated between written language pairs (MT-U) and between spokenwritten language pairs (ASR-U). (Haghighi et al., 2008) and (Ravi and Knight, 2011; Pourdamghani and Knight, 2017) are respectively the first to treat word-level and sentence-level MT-U as a distribution matching problem and built the first such systems by training statistical machine translation systems using nonparallel corpora, which are further improved by (Artetxe et al., 2018b). To allow more general source and target distributions, (Zhang et al., 2017a,b; Conneau et al., 2018; Artetxe et al., 2018a; Lample et al., 2018) instead use neural networks to embed the source and target distributions and match the distributions using either shared denoising autoencoder (Artetxe et al., 2018a), earth-mover distance minimization (Zhang et al., 2017b) or a generative adversarial network (GAN) with additional regularization losses (Zhang et al., 2017a; Conneau et al., 2018; Lample et al., 2018). (Chung et al., 2018; Liu et al., 2018; Chen et al., 2019; Baevski et al., 2021; Liu et al., 2022) adapt and perfect the GAN-based approach for spoken-written language pairs by leveraging large-scale self-supervised speech representation learning models (Chung and Glass, 2018; Baevski et al., 2020) as well as iterative self-training techniques (Liu et al., 2018). ## 6 Conclusion In this paper, we propose the task of unsupervised speech-to-sign language recognition and a neural network model, speech2sign-U, capable of both character-level and word-level SSR-U. On various unpaired speech and ASL datasets, our models consistently outperform previous unsupervised models such as wav2vec-U. Further, we found our model reliable to train for a variety of vocabulary sizes and robust against various types of noise in both speech and visual modalities. ## 7 Limitations Our model currently requires high-quality word boundaries for both speech and sign videos. 
However, as demonstrated by our preliminary results in Table 5, we can overcome such limitations by incorporating more powerful unsupervised segmentation algorithms to our system. Further, while our dataset is sufficient to model the variability in speech and videos, all experiments to date have assumed that spoken and signed sentences share similar word order, which may not be true of natural spoken and signed communications. A future direction of this research will seek to develop methods for spoken-sign language pairs with very different syntactic structures. Lastly, the vocabulary size under our study on word-level SSR-U is relatively small (<1000), and a promising future direction is to extend the current approach to deal with much larger vocabulary size in more diverse conversations. ## 8 Ethical Considerations One potential ethical concern for our model is the risk of miscommunication. Due to the small amount of resources used to train our system, it tends to be less accurate than its supervised counterpart, and its mistakes may cause confusion, misunderstanding and other psychological harm to the users of our systems. The other ethical concern is that the data used to train the system is demographically homogeneous, as we have noticed from some brief inspections that most of the signers in the ASL datasets are white middle-aged adults. This may lead the system to worse retrieval accuracy for people underrepresented in the training corpus, such as black people, children and elderly people. ## Acknowledgement This work utilizes resources supported by the National Science Foundation's Major Research Instrumentation program, grant \#1725729 (Kindratenko et al., 2020), as well as the University of Illinois at Urbana-Champaign. This work was also supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.20220-00184, Development and Study of AI Technologies to Inexpensively Conform to Evolving Policy on Ethics) ## References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Unsupervised neural machine translation. In *International Conference on Learning Representations*. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3632– 3642, Brussels, Belgium. Association for Computational Linguistics. Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021. Unsupervised speech recognition. In *Neural Information Processing System*. Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. In *Neural Information Processing System*. S. Bhati, J. Villalba, P. Zelasko, L. Moro-Velázquez, ˙ and N. Dehak. 2021. Segmental contrastive predictive coding for unsupervised word segmentation. In Interspeech, page 366–370. Matyáš Bohácek and Marek Hrúz. 2022. ˇ Sign posebased transformer for word-level sign language recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops. Zhe Cao, Gines Hidalgo, Tomas Simon, Shih-En Wei, and Yaser Sheikh. 2019. Openpose: Realtime multiperson 2d pose estimation using part affinity fields. IEEE Transactions on Pattern Analysis and Machine Intelligence. Joao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? a new model and the kinetics dataset. 
In The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR). C. Charayaphan and A. E. Marble. 1992. Image processing system for interpreting motion in american sign language. *Journal of Biomedical Engineering*, 14(5):419–425. Kuan-Yu Chen, Che-Ping Tsai, Da-Rong Liu, Hung-Yi Lee, and Lin shan Lee. 2019. Completely unsupervised speech recognition by a generative adversarial network harmonized with iteratively refined hidden markov models. In *Interspeech*. Yu-An Chung and James Glass. 2018. Speech2vec: A sequence-to-sequence framework for learning word embeddings from speech. In *INTERSPEECH*. Yu-An Chung, Wei-Hung Weng, Schrasing Tong, and James Glass. 2018. Unsupervised cross-modal alignment of speech and text embedding spaces. In *Neural* Information Processing System. A. Conneau, G. Lample, M. Ranzato, L. Denoyer, and H. Jégou. 2018. Word translation without parallel data. In *International Conference on Learning Representations*. Santiago Cuervo, Adrian Lancucki, Ricard Marxer, ´ Pawel Rychlikowski, and Jan Chorowski. 2022. Variable-rate hierarchical cpc leads to acoustic unit discovery in speech. In *Neural Information Processing System*. Toni Giorgino. 2009. Computing and visualizing dynamic time warping alignments in r: The dtw package. *Journal of Statistical Software*, 31(7):1–24. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Neural Information Processing System. Gary J. Grimes. 1983. *Digital data entry glove interface* device. US Patent. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In *Proceedings of ACL08: HLT*, pages 771–779, Columbus, Ohio. Association for Computational Linguistics. David Harwath, Adrià Recasens, Dídac Surís, Galen Chuang, Antonio Torralba, and James Glass. 2018. Jointly discovering visual objects and spoken words from raw sensory input. In *ECCV*. Jie Huang, Houqiang Li Wengang Zhou, and Weiping Li. 2015. Sign language recognition using 3d convolutional neural networks. In *IEEE International* Conference on Multimedia and Expo (ICME), pages 1–6. Jie Huang, Wengang Zhou, Qilin Zhang, Houqiang Li, and Weiping Li. 2018. Video-based sign language recognition without temporal segmentation. In *AAAI* Conference on Artificial Intelligence (AAAI). Keith Ito and Linda Johnson. 2017. The lj speech dataset. https://keithito.com/ LJ-Speech-Dataset/. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. *IEEE* Transactions on Big Data, 7(3):535–547. Hamid Vaezi Joze and Oscar Koller. 2019. MS-ASL: A large-scale data set and benchmark for understanding american sign language. In *The British Machine* Vision Conference (BMVC). V. Kindratenko, D. Mu, Y. Zhan, J. Maloney, S. Hashemi, B. Rabe, K. Xu, R. Campbell, J. Peng, and W. Gropp. 2020. HAL: Computer system for scalable deep learning. In *Practice and Experience in* Advanced Research Computing (PEARC '20), page 26–30. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In *International* Conference on Learning Representations. Oscar Koller, Necati Cihan Camgoz, Hermann Ney, and Richard Bowden. 2019. Weakly supervised learning with multi-stream CNN-LSTM-HMMs to discover sequential parallelism in sign language videos. *IEEE* Transactions on Pattern Analysis and Machine Intelligence. 
Oscar Koller, Sepehr Zargaran, and Hermann Ney. 2017. Re-sign: Re-aligned end-to-end sequence modelling with deep recurrent CNN-HMMs. In *The IEEE / CVF* Computer Vision and Pattern Recognition Conference (CVPR), pages 4297–4305. Oscar Koller, Sepehr Zargaran, Hermann Ney, and Richard Bowden. 2016. Deep sign: Hybrid CNNHMM for continuous sign language recognition. In Proc. British Machine Vision Conference (BMVC), page 1–12. Oscar Koller, Sepehr Zargaran, Hermann Ney, and Richard Bowden. 2018. Deep sign: Enabling robust statistical continuous sign language recognition via hybrid CNN-HMMs. International Journal of Computer Vision (IJCV), 126(12):1311–1325. Felix Kreuk, Joseph Keshet, and Yossi Adi. 2020. Self-supervised contrastive learning for unsupervised phoneme segmentation. In *INTERSPEECH*. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations. Alexander H. Liu, Wei-Ning Hsu, Michael Auli, and Alexei Baevski. 2022. Towards end-to-end unsupervised speech recognition. In IEEE Spoken Language Technology Workshop (SLT). Da-Rong Liu, Kuan-Yu Chen, Hung-Yi Lee, and Lin shan Lee. 2018. Completely unsupervised phoneme recognition by adversarially learning mapping relationships from audio embeddings. In *Interspeech*. Akash Nagaraj. 2018. ASL alphabet. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. In *ArKiv*. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Trevor Gregory, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, highperformance deep learning library. In *Advances in* Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. Nima Pourdamghani and Kevin Knight. 2017. Deciphering related languages. In *Proceedings of the 2017* Conference on Empirical Methods in Natural Language Processing, pages 2513–2518, Copenhagen, Denmark. Association for Computational Linguistics. Junfu Pu, Wengang Zhou, and Houqiang Li. 2019. Iterative alignment network for continuous sign language recognition. In The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR). Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In *Proceedings of the 49th Annual* Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 12– 21, Portland, Oregon, USA. Association for Computational Linguistics. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In Neural Information Processing System. Thad Starner and Alex Pentland. 1997. Real-time American sign language recognition from video using hidden markov models. *Motion-Based Recognition*, page 227–243. Shinichi Tamura and Shingo Kawasaki. 1988. 
Recognition of sign language motion images. *Pattern Recognition*, 21(4):343–353. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In *Proceedings of the* 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959–1970, Vancouver, Canada. Association for Computational Linguistics. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover's distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934–1945, Copenhagen, Denmark. Association for Computational Linguistics. Hao Zhou, Wengang Zhou, and Houqiang Li. 2019. Dynamic pseudo label decoding for continuous sign language recognition. In *International Conference* on Multimedia and Expo (ICME). ## A Appendix A.1 Reproducibility Checklist All experiments are done on four 16GB NVIDIA V100 GPUs and all models are implemented using Pytorch (Paszke et al., 2019) and Fairseq (Ott et al., 2019). Character-level speech2sign-U We use the exact same generator and discriminator architectures as the wav2vec-U (Baevski et al., 2021). For the CPC-based fingerspelling feature extractor, we use a two-layer MLP as the encoder, with 256 hidden units, ReLU activation and 256 output units and a single-layer LSTM with 256 hidden and output units as the autoregressive predictor. We found 3 prediction steps and 32 negative samples per positive sample for the CPC loss to be the best setting for training. For the CPC-based fingerspelling feature extractor, we train for 60 epochs using Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 0.001, a batch size of 16 with β1 = 0.9 and β2 = 0.999. The checkpoint with the highest average next-frame prediction performance during training is used for the feature extraction later. For the K-means clustering, we use FAISS (Johnson et al., 2019) and set the number of clusters to be the same as the vocabulary size. For the GAN training, we train the model for 10000 updates and validate the model every 1000 updates using the UER metric. We observe similar performance between the best and the last checkpoints for most experiments. Again, we follow the publicly available implementation of wav2vec-U (Baevski et al., 2021) using Fairseq for all the distributed training, optimizer and scheduler setting. Word-level speech2sign-U For extracting the optical flow features of sign images, we use the OpenCV implementation of Dual TV-L1 method and resized all images to 224 × 224. For the OpenPose features, we follow the default settings to extract the pose keypoints and set the keypoint coordinates to 0 when the model fails to detect any keypoints. We also normalize the keypoints by the size of the video frame. The I3D model we use are trained on the ImageNet dataset and fine-tuned on the Charades dataset, for both RGB and flow implementations. The same CPC sign encoder as that in character-level experiments is used, except with the pretrained video features as inputs and the outputs of the *MLP encoder* as outputs instead of that of the LSTM model. We then train the CPC sign encoder for 200 epochs on ASL LibriSpeech 1000. The CPC sign encoder features are then quantized into the same number of discrete units as the vocabulary size (100 for ASL LibriSpeech 100, etc.) using K-means implemented in FAISS (Johnson et al., 2019). 
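To make the architecture above concrete, the following is a minimal PyTorch sketch of the CPC encoder described in this appendix (a two-layer MLP encoder with 256 hidden units, ReLU, and 256 outputs, followed by a single-layer LSTM predictor with 256 units), together with FAISS K-means quantization of its features into vocabulary-sized discrete units. Class names, the per-step projection heads, and the placeholder feature arrays are illustrative assumptions rather than the authors' implementation.

```python
import faiss
import numpy as np
import torch
import torch.nn as nn

class CPCSignEncoder(nn.Module):
    """Sketch: 2-layer MLP encoder (256 hidden, ReLU, 256 out) + 1-layer LSTM predictor."""
    def __init__(self, input_dim, dim=256, steps=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.predictor = nn.LSTM(dim, dim, num_layers=1, batch_first=True)
        # One projection head per CPC prediction step (the paper uses 3 steps,
        # with 32 negative samples per positive in the contrastive loss).
        self.heads = nn.ModuleList([nn.Linear(dim, dim) for _ in range(steps)])

    def forward(self, frames):                    # frames: (batch, time, input_dim)
        z = self.encoder(frames)                  # local latent features
        c, _ = self.predictor(z)                  # autoregressive context
        preds = [head(c) for head in self.heads]  # predicted future latents z_{t+k}
        return z, c, preds

# Quantize encoder features into discrete units with FAISS K-means, using as many
# clusters as the vocabulary size (e.g., 100 for ASL LibriSpeech 100).
features = np.random.rand(10_000, 256).astype("float32")  # placeholder CPC features
kmeans = faiss.Kmeans(256, 100, niter=20)
kmeans.train(features)
_, unit_ids = kmeans.index.search(features, 1)   # one discrete unit id per frame
```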
For the speech feature clustering, we again use the FAISS (Johnson et al., 2019) implementation of K-means with a cluster size of about 4 times of the vocabulary size of ASL LibriSpeech 100, 200 and 500, and 2000 clusters for ASL LibriSpeech 1000. The cluster sizes are chosen to ensure a cluster purity of about 90%. For the word-level speech2sign-U, the speech generator is a linear layer with no bias. Skip grams of a maximal step of 6 are used for experiments on ASL LibriSpeech 100, 200 and 500, and a maximal step of 4 are used for ASL LibriSpeech 1000. For the unsupervised training, we train the model for a number of updates equal to 3000 × jsample size batch size k. We found that larger batch size generally leads to better performance, and use a batch size of 16k for ASL LibriSpeech 100, 200 and 500, and a batch size of 12k for ASL LibriSpeech 1000 due to GPU memory constraints. Adam optimizer with a initial learning rate of 0.4 and [β1, β2] = [0.9, 0.999] is used throughout the training. ## A.2 **More Ssr-U Retrieval Examples And Error** Analysis More DTW alignments between speech-video pairs correctly retrieved by speech2sign-U are shown in Figure 6. As we can see, our model is able to correctly align the speech and sign video after the DTW step. However, in order to better understand the type of errors the model is susceptible to, we also show the similarity map *before* the DTW step in Figure 7. While the similarity maps are noisier than their corresponding DTW alignments, the high similarity regions are correctly concentrated approximately along the diagonal most of the time. there are, however, several common failure modes by speech2sign-U. The most common mistake by the model is to confuse less frequent words with more frequent ones, for example, confuse the less frequent word "history" with the more frequent word "from" and "outside" in Figure 7d, or the less frequent "more" with the more frequent "good" in Figure 7c or the less frequent "like" with the more frequent "when" and "man" in Figure 7b. Another type of mistake is to confuse visually similar signs such as "one", "two" and "three" in Figure 7a. The last common type of mistake for speech2sign-U is to confuse acoustically similar words, such as the word "they" and "their" in Figure 7c. ![12_image_0.png](12_image_0.png) ![13_image_0.png](13_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✓ A2. Did you discuss any potential risks of your work? Section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2, 3, 4 ✓ B1. Did you cite the creators of artifacts you used? Section 6 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The license and terms of use is straightforward as we use open-source, publicly available software for non-commercial purposes ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. ✗ B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The data is collected by other authors and have been carefully checked to remove any privacy-related information ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.1 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4, Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4, Appendix A C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
prabhakaran-etal-2023-distinguishing
Distinguishing Address vs. Reference Mentions of Personal Names in Text
https://aclanthology.org/2023.findings-acl.425
Detecting named entities in text has long been a core NLP task. However, not much work has gone into distinguishing whether an entity mention is addressing the entity vs. referring to the entity; e.g., *John, would you turn the light off?* vs. *John turned the light off*. While this distinction is marked by a *vocative case* marker in some languages, many modern Indo-European languages such as English do not use such explicit vocative markers, and the distinction is left to be interpreted in context. In this paper, we present a new annotated dataset that captures the *address* vs. *reference* distinction in English, an automatic tagger that performs at 85% accuracy in making this distinction, and demonstrate how this distinction is important in NLP and computational social science applications in English language.
# Distinguishing Address Vs. Reference Mentions Of Personal Names In Text Vinodkumar Prabhakaran Google Research San Francisco, CA, USA vinodkpg@google.comn Aida Mostafazadeh Davani Google Research Portland, OR, USA aidamd@google.com Melissa J Ferguson Yale University New Haven, CT, USA melissa.ferguson@yale.edu Stav Atir University of Wisconsin-Madison Madison, WI, USA stav.atir@wisc.edu ## Abstract Detecting named entities in text has long been a core NLP task. However, not much work has gone into distinguishing whether an entity mention is addressing the entity vs. referring to the entity; e.g., John, would you turn the light off? vs. *John turned the light off*. While this distinction is marked by a *vocative case* marker in some languages, many modern IndoEuropean languages such as English do not use such explicit vocative markers, and the distinction is left to be interpreted in context. In this paper, we present a new annotated dataset that captures the *address* vs. *reference* distinction in English,1an automatic tagger that performs at 85% accuracy in making this distinction, and demonstrate how this distinction is important in NLP and computational social science applications in English language. ## 1 Introduction Named entity recognition (NER) in text has long been a core task in the NLP community (Sundheim, 1995; Yadav and Bethard, 2018). However, not much work has looked into distinguishing whether an entity mention is an instance of addressing the entity or referring to them: - *John, would you turn the light off?* (Address) - *John turned the light off.* (Reference) The address usage is also called a *vocative phrase*: "a noun phrase which does not belong to the thematic grid of a predicate and is used to attract someone's attention" (Moro, 2003). Many languages have explicit morphological *vocative case* markers: e.g., in "Et tu, Brute?", Brute marks the vocative case of the nominative Brutus. However, many 1https://stavatir.com/s/address-vs-reference.xlsx modern Indo-European languages, including English, do not have vocative case markers, and the distinction is left to be interpreted based on context. Distinguishing vocative phrases is important in many NLP tasks, such as sentiment analysis (Karami et al., 2020), offensiveness detection (Mubarak et al., 2020) and information extraction (Makazhanov et al., 2014). For instance, Karami et al. (2020) point out the difference in interpretations between *"Let's eat, Grandma"* and *"Let's eat* Grandma". The vocative distinction is also important for NLP-aided computational social sciences, since the pragmatics and the patterns of usage vary between these two types of name mentions (Dickey, 1997), and since name mentions capture various societal biases (Prabhakaran et al., 2019). This aspect is especially crucial in studies analyzing political discourse, with the goal of understanding the rhetoric by and about political personalities (Prabhakaran et al., 2014; Gupta, 2022). Despite the prevalence of NER as a useful task in various NLP applications (Marrero et al., 2013), efforts to make this distinction have largely been limited to languages that have explicit vocative case markers such as Portuguese (Baptista and Mamede, 2017), Hebrew (Tsarfaty et al., 2019), Korean (Nam and Choi, 1997), and Sindhi (Muslim and Bhatti, 2010), and not much work has looked into detecting vocative name mentions in English. 
In this paper, we present a dataset of social media text in the political domain in English language, with person mentions annotated with the *address* vs. reference distinction. We then build a tagger that is able to make this distinction automatically, with an accuracy of 85%. We use this tagger to demonstrate the importance of this distinction in two largescale computational socio-linguistic analysis. First, 6801 we demonstrate that female personalities are more likely to be mentioned in the addressing context than male personalities, across three different social medial corpora, which has implications for NLP research on gender bias in data and models. Second, we demonstrate that sentences with address mentions are significantly more likely to be toxic than those with reference mentions. This finding has important implications for the active area of NLP research on detecting online abuse. ## 2 Address Vs. Reference Mentions How a person is addressed or referenced in language, and its associated pragmatics has long been of interest in sociolinguistics (Brown et al., 1960; Brown and Ford, 1961). While most of this research focused on the different address pronouns and the T/V distinction, much less work has looked into the difference in the social meaning of a mention when used as an address vs. when used as a reference (Dickey, 1997). While this distinction is not limited to persons (for instance, organizations may also be mentioned in an addressing context, as in *Hey Doordash, where is my food?*), person name mentions add additional nuance owing to the social relations. For instance, Dickey (1997) show that the words used to address a person by a speaker may differ from the words used to refer to them depending on the social power relations between the speaker, the referent, and the addressee. Forms of address has been studied in NLP-aided computational sociolinguistics, for instance, in the context of how they relate to social power relations (Prabhakaran et al., 2013). The address vs. references distinction has also been shown to be of value in NLP tasks, for instance, Mubarak et al. (2020) extracts Arabic tweets with the vocative particle "yA" as it indicates directing speech to a person or a group, increasing the likelihood of offensiveness. However NLP work on making this distinction is largely limited to languages that have explicit vocative case markers. In the absence of any vocative markers, as in English, this becomes a task that relies on the syntactic context. In this paper, we build resources to perform and evaluate this distinction, and demonstrate its utility in NLP applications. There is related work in NLP on detecting addressees in multi-party dialog (op den Akker and op den Akker, 2009; Ouchi and Tsuboi, 2016; Le et al., 2019; Ek et al., 2018), which is a substantially different task from ours. First, addressee detection in multi-party dialog takes into account the larger dialog/content context (e.g., prior utterances). For instance, Ouchi and Tsuboi (2016) jointly captures "who is talking about *what* at each time step" in order to determine the addressee. Ours is a simple linguistic task that relies on the local syntactic context of named mentions, making it applicable in broader contexts. Second, the above work crucially looks into the implicit cues about addressees. In contrast, our work focuses only on explicit mentions, primarily motivated by the computational social science analyses anchored on them. 
## 2.1 Data Source: We use the corpus of Facebook comments on politicians' posts released by (Voigt et al., 2018) for this study. Our choice is motivated by three reasons. First, the comments in this corpus are all made in response to a individual's Facebook post and hence it is likely for it to have more instances of comments addressing the person than general social media data with mentions of that person. Second, the corpus captures the individual's name within the metadata, making it easy to detect and disambiguate different mentions referring to the same person. Finally, the corpus also captures the gender information of the person the comments are in response to (unlike most other gender-labeled data that captures the gender of the speaker/writer) as it was originally developed to study gender bias in social media, which is one of our goals too. Pre-processing: Since the metadata captures the politician's name that each comment is in response to, we use a regex-based approach to determine if that politician is mentioned in the comment or not. We made sure the regex captures different forms of address including full name mentions, first name mentions, and last name mentions. Furthermore, since the corpus contained comments directed at only 402 politicians, we manually coded different common variations and misspellings of their first and last names. For instance, the first name of the politician *Jim Boozman* could be mentioned as Jim, James, or *Jimmy*, and the common variations of his lastname included Boozman, *Boozeman*, and Bozeman. While some of these choices may be genuine misspellings, some others may indicate pragmatic connotations: *Jimmy* instead of Jim may have been used to evoke familiarity, while *Boozeman* instead of *Boozman* may have been intended to evoke humor or disrespect. We do not analyze these distinctions in this paper, however, we included them in our regex to ensure that we capture such diverse associated linguistic contexts. Annotation: We sampled 800 comments with at most 100 words (to avoid exceedingly long comments) from the corpus. We restricted ourselves to only those comments with a single mention of the individual (i.e., removed comments with no or multiple mentions). Multiple mentions were rare in our data (less than 1%), and when they do happen they were almost exclusively all reference mentions, as it is unlikely for someone to address someone by name, and then refer to them in third person in the same sentence itself. We trained two annotators to make the *address* vs. *reference* distinction. The annotators were undergraduate students majoring in Psychology at Yale University. Annotators were provided with the comments, the individual whose post the comment was in response to, as well as the mention of that individual detected in the comment. They were asked to label whether the mention was addressing the individual vs. referencing the individual, along with examples. Analysis: All comments were double annotated, obtaining an inter-annotator agreement of κ = 0.898, suggesting that the task is relatively easy for trained humans, and that our annotations capture reliable data. We then performed an adjudication round where both annotators met with one of the authors and arrived at a final label through discussion. While most disagreements were due to misinterpretations, some cases were inherently ambiguous. For instance, in "*Yes!!! Sen. Booker*", it is ambiguous whether the commenter is addressing Sen. Booker or just mentioning him. 
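As an illustration of the regex-based mention detection described in the pre-processing step above, here is a minimal sketch. The variant lists and helper names are hypothetical examples of the hand-coded first/last-name variations, not the authors' actual lexicon or code.

```python
import re

# Hypothetical variant lists; the paper hand-codes these for 402 politicians.
NAME_VARIANTS = {
    "Jim Boozman": {
        "first": ["Jim", "James", "Jimmy"],
        "last": ["Boozman", "Boozeman", "Bozeman"],
    },
}

def build_mention_pattern(politician: str) -> re.Pattern:
    variants = NAME_VARIANTS[politician]
    first = "|".join(map(re.escape, variants["first"]))
    last = "|".join(map(re.escape, variants["last"]))
    # Match "First Last", a first name alone, or a last name alone, as whole words.
    return re.compile(
        rf"\b(?:(?:{first})\s+(?:{last})|{first}|{last})\b", flags=re.IGNORECASE)

comment = "Jimmy, thank you for standing up for us!"
match = build_mention_pattern("Jim Boozman").search(comment)
print(match.group(0) if match else None)  # -> "Jimmy"
```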
The annotation and adjudication process revealed 15 comments where the name mention was not valid; e.g., within a URL contained in the comment, and 11 comments where the comment did not have enough linguistic context to make the distinction; e.g., when the comment was just a name mention. We removed these comments as they will add noise, resulting in 774 comments in the dataset, each with a mention labeled as either *address* or reference. There were 250 (32.3%) instances that were the *address* usage compared to 524 (67.7%) instances that were the *reference* usage. ## 2.2 Automatic Tagger We now investigate automatically distinguishing address vs. *reference*, given a text and a name mention in it. Since contextualized embeddings such as BERT (Devlin et al., 2019) are proven to capture syntactic information (Clark et al., 2019), we expect the positional embedding of the name mention to capture its syntactic context and hence help make this distinction. Further, we use the intuition that *reference* mentions are more likely to occur in syntactic contexts where third person pronouns could fit, while *address* mentions are more likely to fit second person pronouns or address terms. We consider three settings, each with two sets of words that fit with the *address* vs. *reference* contexts: S1: you/your vs. he/him/his/she/her S2: you/your vs. he/him/his/she/her/they/them S3: you/your/hey/hi vs. he/him/his/she/her S1 uses singular pronouns, S2 includes the (usually) plural pronouns they/*them*, S3 includes addressing terms (hey/hi). For each setting, we use a contextual embedding, replace the mention with [MASK] and calculate the score for each word in the list to fit the masked slot. If the top scored word from the list is of the *address* category, we predict the mention as *address*, otherwise, as *reference*. To illustrate, the top candidate from S3 above for the input "*[MASK], would you turn the light off?*" as per BERT is hey, while the top candidate for "*[MASK] turned the light off* " is he, then she. This approach is not entirely foolproof, but as Table 1 shows, this simple approach yielded good performance of 85% accuracy. We report results using BERT and DistillBERT models across all three settings outlined above. Adding addressing terms hey and hi increased the accuracy, while adding the third person pronouns *they* and *them* that are usually used in plural context (but also has singular usage) resulted in reducing the accuracy. Most errors happen when the sentence is not well-formed or uses non-standard language. An approach to circumvent this issue is to fine-tune a pre-trained model using our data. In our preliminary experiments, fine-tuning a BERT model only yields marginal (∼1%) improvement in accuracy at sentence level. Using more advanced models and hyper parameter tuning may yield better performance. However, our goal in this paper is not to build the best tagger possible for this task, rather to demonstrate the utility of this task in NLP and computational social science applications. Given the high performance of the Slot-filling model, we use it for all analyses in the rest of this paper. ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) ![3_image_2.png](3_image_2.png) ## 3 Gender Effects In Addressing We first look into the RtGender dataset (Voigt et al., 2018) built to study differential responses to gender. 
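(For reference, the following is a minimal sketch of the Section 2.2 slot-filling tagger used in these analyses, with the S3 candidate lists. The specific checkpoint and the use of the HuggingFace fill-mask pipeline's `targets` argument are assumptions about one way to realize the described scoring, not the authors' code.)

```python
from transformers import pipeline

# Candidate slot-fillers for setting S3.
ADDRESS_WORDS = ["you", "your", "hey", "hi"]
REFERENCE_WORDS = ["he", "him", "his", "she", "her"]

fill = pipeline("fill-mask", model="bert-base-uncased")  # checkpoint is an assumption

def tag_mention(text: str, mention: str) -> str:
    # Replace the detected name mention with [MASK] and score the candidate words.
    masked = text.replace(mention, fill.tokenizer.mask_token, 1)
    scored = fill(masked, targets=ADDRESS_WORDS + REFERENCE_WORDS)
    best = max(scored, key=lambda s: s["score"])["token_str"].strip()
    return "address" if best in ADDRESS_WORDS else "reference"

print(tag_mention("John, would you turn the light off?", "John"))  # likely "address"
print(tag_mention("John turned the light off.", "John"))           # likely "reference"
```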
They found that responses to female posters or speakers were more likely to be about the individuals (e.g., their appearance) rather than about the content they posted or talked about. As a complementary analysis, we analyze whether these responses were addressed to the speaker or poster, or referring to them. We apply the tagger to 5K comments each, chosen at random, from three different sub-corpora in the RtGender corpus: comments in response to (1) Facebook posts by politicians (FB Congress), (2) Facebook posts by celebrities (FB Wiki), and (3) TED talk videos (Ted Talks). We ensured that the tagger does not introduce systematic gender bias; t-test revealed no association between gender and error (p = 0.166). Across board, mentions of female personalities were more likely to be in the *address* rather than reference contexts (Figure 1). This difference was statistically significant in all three cases: t(4999) = 3.51, p < .001 (FB Congress); t(4999) = 3.87, p < .001 (FB Wiki); and t(4999) = 4.41, p < .001 (TED Talks). For the congress dataset, we also have access to the political party they belong to; we added it as a control causing the effect size to decrease (2.72) suggesting that political party affiliation plays an important role. In fact, Figure 2 shows that the gender disparity is present only for the Republican party politicians. Addressing someone directly could be an expression of friendliness or familiarity, and its prevalence in comments directed at female personalities is notable. These insights enable adding nuance to many NLP-aided studies of gender and power. Moreover, this finding adds to research on gender influences on communication with and about professionals (Atir and Ferguson, 2018). ## 4 Address Vs. Reference And Toxicity We now turn to online abuse detection, an NLP task where address vs. reference distinction is important. Prior work has shown that 2nd person pronouns are spuriously associated with toxic comments (Hede et al., 2021). In languages such as Arabic that has explicit vocative markers, researchers have used vocative markers to curate comments with higher likelihood of offensiveness (Mubarak et al., 2020). In this section, we use our tagger to analyze the tox- ![3_image_3.png](3_image_3.png) ![4_image_0.png](4_image_0.png) icity dataset annotated by Jigsaw (Jigsaw, 2018) to see if this pattern holds true. In the Jigsaw dataset, we do not have access to the mentions of people in text. Hence, we created a tagger for the Jigsaw dataset by first using the SpaCy python package to detect person mentions, then used the BERT Slotfilling (S3) tagger to detect whether each person is addressed or referenced in the message. We find significant difference in address vs. reference in toxic vs. non-toxic tweets. The average toxicity score of sentences with address mentions were 0.088, compared to 0.070 for those without; this difference is statistically significant using the standard Student's t-test (p < .001) and a permutation test (p < .001). Figure 3 shows differences in the ratios of address to reference mentions in toxic and non-toxic texts. This finding is important for NLP-aided content moderation, especially in detecting targets of abuse. ## 5 Discussion/Conclusion In this paper, we introduced the basic NLP task of distinguishing a name mention to be *address* or reference, annotated a new dataset for it in the English language, and presented a simple tagger using contextual word embeddings. 
Our annotation and tagging experiments reveal this to be a relatively easy task, however our accuracy being only at 85% suggests room to improve. We also demonstrate the utility of this capability in computational social science work anchored on name mentions through two analyses: first, on gender bias in mention patterns, and second, in toxic comments online. This capability is important, but often ignored, for tasks that assume entity mentions to be part of the expressed propositional meaning; e.g., belief modeling (Prabhakaran et al., 2015), and social relation extraction (Massey et al., 2015). It will also aid in tasks that model relationships between interactants, such as power (Prabhakaran and Rambow, 2014) and influence (Rosenthal and Mckeown, 2017). The vocative usage is arguably already being implicitly modeled in tasks such as dialog act tagging. However, it may be important to model it explicitly in certain cases, e.g., our work could contribute to ongoing efforts in detecting addressees in multi-party dialog (Ouchi and Tsuboi, 2016; Le et al., 2019). Future work should look into these applications, and more advanced modeling techniques such as few-shot training for this task. ## 6 Limitations Our work is not without its limitations. First of all, our annotated data is relatively small. However, given the relatively straightforward task (as reflected in high IAA), and since we are using this data only for evaluations, we believe that this amount of data is sufficient for the research questions we are asking/answering in this paper. Second, our data entirely comes from the politics domain and social media, situated in the US context. This choice was driven by our downstream use case of a large scale social science analysis in the US political domain. While we have not established how well our tagger performs in domains other than politics, given that our tagger relies on contextualized language models trained on web data and since it is performing a basic linguistic task, we believe that the performance is robust across domains used in Section 3 and 4. However, we expect performance degradation with genre or dialectal shifts with substantial differences in syntactic patterns. Third, we have not fully exploited the utility of the dataset in this work. As mentioned in Section 2.2, our aim in this paper is not to build the best tagger possible, and hence we did not explore state of the art modeling techniques such as few-shot learning. Finally, our work is done entirely on English language data. While we believe that similar approach could work in other languages without vocative markers, more research need to be performed to verify that. While we acknowledge these limitations, we reiterate that these are outside the scope of what could be meaningfully done within this short paper. ## 7 Ethical Considerations Like any technology, our work also has the potential for misuse. For instance, using the tagger for social science analyses in contexts where it was not trained or tested for might result in erroneous insights. Hence, we will be releasing a data card and model card along with the publication to document the intended use cases and various analysis results. Furthermore, although we ensured our tagger do not have gender bias in error rates, it may vary across other socio-demographic groups. 
However, the likelihood of this is rather low since we mask the identity of the name in the slot-filling approach, and hence any biases captured by person names are avoided in our current scheme. Finally, our gender bias analysis is limited to the binary gender, as all the RtGender corpus captured only binary gender. ## Acknowledgements We thank Jacob Eisenstein, Emily Reif, Kathy Meier-Hellstern, and the anonymous reviewers for helpful feedback. We also thank our research assistants Sevi Burget-Foster and Julia Sanderson who annotated the comments in our data. ## References Stav Atir and Melissa J Ferguson. 2018. How gender determines the way we speak about professionals. Proceedings of the National Academy of Sciences, 115(28):7278–7283. Jorge Baptista and Nuno Mamede. 2017. Vocatives in portuguese: Identification and processing. In 6th Symposium on Languages, Applications and Technologies (SLATE 2017). Schloss Dagstuhl-LeibnizZentrum fuer Informatik. Roger Brown and Marguerite Ford. 1961. Address in american english. *The Journal of Abnormal and Social Psychology*, 62(2):375. Roger Brown, Albert Gilman, et al. 1960. The pronouns of power and solidarity. *Style in language*, pages 252–281. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Eleanor Dickey. 1997. Forms of address and terms of reference. *Journal of linguistics*, 33(2):255–274. Adam Ek, Mats Wirén, Robert Östling, Kristina N. Björkenstam, Gintare Grigonyt ˙ e, and Sofia ˙ Gustafson Capková. 2018. Identifying speakers and addressees in dialogues extracted from literary fiction. In *Proceedings of the Eleventh International* Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Akshat Gupta. 2022. On building spoken language understanding systems for low resourced languages. In Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 1–11, Seattle, Washington. Association for Computational Linguistics. Anushree Hede, Oshin Agarwal, Linda Lu, Diana C Mutz, and Ani Nenkova. 2021. From toxicity in online comments to incivility in american news: Proceed with caution. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2620–2630. Jigsaw. 2018. Toxic comment classification challenge. https://www.kaggle.com/c/\jigsawtoxic-comment-classificationchallenge/data. Accessed: 2021-05-01. Mansooreh Karami, Ahmadreza Mosallanezhad, Michelle V Mancenido, and Huan Liu. 2020. "let's eat grandma": When punctuation matters in sentence representation for sentiment analysis. *arXiv* e-prints, pages arXiv–2101. Ran Le, Wenpeng Hu, Mingyue Shang, Zhenjun You, Lidong Bing, Dongyan Zhao, and Rui Yan. 2019. Who is speaking to whom? learning to identify utterance addressee in multi-party conversations. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1909– 1919, Hong Kong, China. Association for Computational Linguistics. Aibek Makazhanov, Denilson Barbosa, and Grzegorz Kondrak. 2014. Extracting family relationship networks from novels. *arXiv preprint* arXiv:1405.0603. Mónica Marrero, Julián Urbano, Sonia SánchezCuadrado, Jorge Morato, and Juan Miguel GómezBerbís. 2013. Named entity recognition: fallacies, challenges and opportunities. Computer Standards & Interfaces, 35(5):482–489. Philip Massey, Patrick Xia, David Bamman, and Noah A Smith. 2015. Annotating character relationships in literary texts. arXiv preprint arXiv:1512.00728. Andrea Moro. 2003. Notes on vocative case. a case study in clause structure. Amsterdam Studies in the Theory and History of Linguistic Science Series 4, pages 247–262. Hamdy Mubarak, Kareem Darwish, Walid Magdy, Tamer Elsayed, and Hend Al-Khalifa. 2020. Overview of OSACT4 arabic offensive language detection shared task. In Proceedings of the 4th Workshop on open-source arabic corpora and processing tools, with a shared task on offensive language detection, pages 48–52. Mutee U Rahman Muslim and Mohammad Iqbal Bhatti. 2010. Finite state morphology and sindhi noun inflections. In Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation, pages 669–676. Jee-sun Nam and Key-Sun Choi. 1997. A local grammar-based approach to recognizing of proper names in korean texts. In *Fifth Workshop on Very* Large Corpora. Harm op den Akker and Rieks op den Akker. 2009. Are you being addressed? - real-time addressee detection to support remote participants in hybrid meetings. In *Proceedings of the SIGDIAL 2009 Conference*, pages 21–28, London, UK. Association for Computational Linguistics. Hiroki Ouchi and Yuta Tsuboi. 2016. Addressee and response selection for multi-party conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2133–2143, Austin, Texas. Association for Computational Linguistics. Vinodkumar Prabhakaran, Ashima Arora, and Owen Rambow. 2014. Staying on topic: An indicator of power in political debates. In *Proceedings of the* 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1481–1486. Vinodkumar Prabhakaran, Tomas By, Julia Hirschberg, Owen Rambow, Samira Shaikh, Tomek Strzalkowski, Jennifer Tracey, Michael Arrigo, Rupayan Basu, Micah Clark, et al. 2015. A new dataset and evaluation for belief/factuality. In *4th Joint Conference on Lexical and Computational Semantics,** SEM 2015, pages 82–91. Association for Computational Linguistics (ACL). Vinodkumar Prabhakaran, Ben Hutchinson, and Margaret Mitchell. 2019. Perturbation sensitivity analysis to detect unintended model biases. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5740–5745. Vinodkumar Prabhakaran, Ajita John, and Dorée D Seligmann. 2013. Who had the upper hand? ranking participants of interactions based on their relative power. In *Proceedings of the Sixth International* Joint Conference on Natural Language Processing, pages 365–373. Vinodkumar Prabhakaran and Owen Rambow. 2014. Predicting power relations between participants in written dialog from a single thread. 
In *Proceedings of the 52nd Annual Meeting of the Association* for Computational Linguistics (Volume 2: Short Papers), pages 339–344. Sara Rosenthal and Kathleen Mckeown. 2017. Detecting influencers in multiple online genres. ACM Transactions on Internet Technology (TOIT), 17(2):1–22. Beth M Sundheim. 1995. Overview of results of the MUC-6 evaluation. In Sixth Message Understanding Conference (MUC-6): Proceedings of a Conference Held in Columbia, Maryland, November 6-8, 1995. Reut Tsarfaty, Shoval Sadde, Stav Klein, and Amit Seker. 2019. What's wrong with hebrew NLP? and how to make it right. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 259–264. Rob Voigt, David Jurgens, Vinodkumar Prabhakaran, Dan Jurafsky, and Yulia Tsvetkov. 2018. RtGender: A corpus for studying differential responses to gender. In *Proceedings of the Eleventh International* Conference on Language Resources and Evaluation (LREC 2018). Vikas Yadav and Steven Bethard. 2018. A survey on recent advances in named entity recognition from deep learning models. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 2145–2158. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Left blank. ✓ A2. Did you discuss any potential risks of your work? Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2.1 ✓ B1. Did you cite the creators of artifacts you used? Section 2.2 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 7 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 2 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 2 and ## C ✓ **Did You Run Computational Experiments?** Section 2.2 C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. We used checkpoints of pre-trained models and discussed their size and parameters (and refer to respective papers). 
We do not train any new models. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Not applicable. No hyper-parameter tuning was performed ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 2.2 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 2 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 2.1 D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. We trained expert annotators on the topic based on the information presented in the paper. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. We used data published in another publication, no new data was collected from users. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. We did not perform any data collections from users D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. This will be discussed in the camera-ready, since annotators were students from a specific university.
jiang-etal-2023-low
“Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors
https://aclanthology.org/2023.findings-acl.426
Deep neural networks (DNNs) are often used for text classification due to their high accuracy. However, DNNs can be computationally intensive, requiring millions of parameters and large amounts of labeled data, which can make them expensive to use, to optimize, and to transfer to out-of-distribution (OOD) cases in practice. In this paper, we propose a non-parametric alternative to DNNs that's easy, lightweight, and universal in text classification: a combination of a simple compressor like *gzip* with a k-nearest-neighbor classifier. Without any training parameters, our method achieves results that are competitive with non-pretrained deep learning methods on six in-distribution datasets. It even outperforms BERT on all five OOD datasets, including four low-resource languages. Our method also excels in the few-shot setting, where labeled data are too scarce to train DNNs effectively.
# "Low-Resource" Text Classification: A Parameter-Free Classification Method With Compressors Zhiying Jiang1,2, Matthew Y.R. Yang1**, Mikhail Tsirlin**1, Raphael Tang1**, Yiqin Dai**2and **Jimmy Lin**1 1 University of Waterloo 2 AFAIK {zhiying.jiang, m259yang, mtsirlin, r33tang}@uwaterloo.ca quinn@afaik.io jimmylin@uwaterloo.ca ## Abstract Deep neural networks (DNNs) are often used for text classification due to their high accuracy. However, DNNs can be computationally intensive, requiring millions of parameters and large amounts of labeled data, which can make them expensive to use, to optimize, and to transfer to out-of-distribution (OOD) cases in practice. In this paper, we propose a non-parametric alternative to DNNs that's easy, lightweight, and universal in text classification: a combination of a simple compressor like *gzip* with a k-nearest-neighbor classifier. Without any training parameters, our method achieves results that are competitive with non-pretrained deep learning methods on six in-distribution datasets. It even outperforms BERT on all five OOD datasets, including four low-resource languages. Our method also excels in the few-shot setting, where labeled data are too scarce to train DNNs effectively. Code is available at https://github.com/bazingagin/npc_gzip. ## 1 Introduction Text classification, as one of the most fundamental tasks in natural language processing (NLP), has improved substantially with the help of neural networks (Li et al., 2022). However, most neural networks are data-hungry, the degree of which increases with the number of parameters. Hyperparameters must be carefully tuned for different datasets, and the preprocessing of text data (e.g., tokenization, stop word removal) needs to be tailored to the specific model and dataset. Despite their ability to capture latent correlations and recognize implicit patterns (LeCun et al., 2015), complex deep neural networks may be overkill for simple tasks such as topic classification, and lighter alternatives are usually good enough. For example, Adhikari et al. (2019b) find that a simple long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997) with appropriate regularization can achieve competitive results. Shen et al. 6810 (2018) further show that even word-embeddingbased methods can achieve results comparable to convolutional neural networks (CNNs) and recurrent neural networks (RNNs). Among all the endeavors for a lighter alternative to DNNs, one stream of work focuses on using compressors for text classification. There have been several studies in this field (Teahan and Harper, 2003; Frank et al., 2000), most of them based on the intuition that the minimum cross entropy between a document and a language model of a class built by a compressor indicates the class of the document. However, previous works fall short of matching the quality of neural networks. Addressing these shortcomings, we propose a text classification method combining a lossless compressor, a compressor-based distance metric with a k-nearest-neighbor classifier (kNN). It utilizes compressors in capturing regularity, which is then translated into similarity scores by a compressor-based distance metric. With the resulting distance matrix, we use kNN to perform classification. We carry out experiments on seven in-distribution datasets and five out-of-distribution ones. 
With a simple compressor like *gzip*, our method achieves results competitive with those of DNNs on six out of seven datasets and outperforms all methods including BERT on all OOD datasets. It also surpasses all models by a large margin under few-shot settings. Our contributions are as follows: (1) we are the first to use NCD with kNN for topic classification, allowing us to carry out comprehensive experiments on large datasets with compressor-based methods; (2) we show that our method achieves results comparable to non-pretrained DNNs on six out of seven in-distribution datasets; (3) on OOD datasets, we show that our method outperforms all methods, including pretrained models such as BERT; and (4) we demonstrate that our method excels in the few-shot setting of scarce labeled data. ## 2 Related Work 2.1 Compressor-Based Text Classification Text classification using compressors can be divided into two main approaches: (1) Using a compressor to estimate entropy based on Shannon Information Theory; (2) Using a compressor to approximate Kolmogorov complexity and *information distance*. 1 The first approach mainly employs a text compression technique called Prediction by Partial Matching (PPM)2for topic classification. This approach estimates the cross entropy between the probability distribution of a specific class c and a given document d: Hc(d) (Frank et al., 2000; Teahan and Harper, 2003). The intuition is that the lower the cross entropy, the more likely that d belongs to c. Marton et al. (2005); Coutinho and Figueiredo (2015); Kasturi and Markov (2022) further improve the final accuracy by improving the representation to better cope with the compressor. Another line of compressor-based methods (Khmelev and Teahan, 2003; Keogh et al., 2004) takes advantage of the *information distance* (Bennett et al., 1998), a distance metric derived from Kolmogorov complexity. The intuition of *information distance* is that for two similar objects, there exists a *simple* program to convert one to another. However, most previous works focus on clustering (Vitányi et al., 2009), plagiarism detection (Chen et al., 2004) and time series data classification (Keogh et al., 2004). Few (Marton et al., 2005; Coutinho and Figueiredo, 2015) explore its application to topic classification, and none applies the combination of *information* distance and k-nearest-neighbor (kNN) classifier when k > 1 to topic classification. Besides, to the best of our knowledge, all the previous works use relatively small datasets like 20News and Reuters-10. There is neither a comparison between compressor-based methods and deep learning methods nor a comprehensive study of large datasets. ## 2.2 Deep Learning For Text Classification The deep learning methods used for text classification can be divided into two: transductive learning, represented by Graph Convolutional Networks (GCNs) (Yao et al., 2019), and inductive learning, dominated by recurrent neural networks (RNNs) and convolutional neural networks (CNNs). We focus on inductive learning in this paper as transductive learning assumes the test dataset is presented during the training, which is not a common scenario in practice. Zhang et al. (2015) use the character-based CNN with millions of parameters for text classification. Conneau et al. (2017) extend the idea with more layers. Along the line of RNNs, Kawakami (2008) introduce a method that uses LSTMs (Hochreiter and Schmidhuber, 1997) to learn the sequential information for classification. 
To better capture the important information regardless of position, Wang et al. (2016) incorporate the attention mechanism into the relation classification. Yang et al. (2016) include a hierarchical structure for sentence-level attention. As the parameter number and the model complexity increase, Joulin et al. (2017) look for using a simple linear model with a hidden layer coping with n-gram features and hierarchical softmax to improve efficiency. The landscape of classification has been further transformed by the widespread use of pretrained models like BERT (Kenton and Toutanova, 2019), with hundreds of millions of parameters pretrained on a corpus containing billions of tokens. BERT achieves the state of the art on text classification (Adhikari et al., 2019a). Built on BERT, Reimers and Gurevych (2019) calculate semantic similarity between pairs of sentences efficiently by using a siamese network architecture and fine-tuning on multiple NLI datasets (Bowman et al., 2015; Williams et al., 2018). We compare gzip with these deep learning models. ## 3 Our Approach Our approach consists of a lossless compressor, a compressor-based distance metric, and a k-NearestNeighbor classifier. Lossless compressors aim to represent information using as few bits as possible by assigning shorter codes to symbols with higher probability. The intuition of using compressors for classification is that (1) compressors are good at capturing regularity; (2) objects from the same category share more regularity than those from different categories. For example, x1 below belongs to the same category as x2, but a different category from x3. If we use C(·) to represent com6811 ![2_image_0.png](2_image_0.png) pressed length, we will find C(x1x2) − C(x1) < C(x1x3) − C(x1) where C(x1x2) means the compressed length of concatenation of x1 and x2. In other words, C(x1x2) − C(x1) can be interpreted as how many bytes do we still need to encode x2 based on the information of x1: $$\mathbf{\Pi}_{\stackrel{\cdot}{\cdot}}^{e_{1}}=J a p$$ x1 *= Japan's Seiko Epson Corp. has developed a* 12-gram flying microrobot. x2 *= The latest tiny flying robot has been unveiled* in Japan. x3 = Michael Phelps won the gold medal in the 400 individual medley. This intuition can be formalized as a distance metric derived from Kolmogorov complexity (Kolmogorov, 1963). Kolmogorov complexity K(x) characterizes the length of the shortest binary program that can generate x. K(x) is theoretically the ultimate lower bound for information measurement. To measure information content shared between two objects, Bennett et al. (1998) define information distance E(*x, y*) as the length of the shortest binary program that converts x to y: $$E(x,y)=\max\{K(x|y),K(y|x)\}\tag{1}$$ $$=K(xy)-\min\{K(x),K(y)\}\tag{2}$$ As the incomputable nature of Kolmogorov complexity renders E(x,y) incomputable, Li et al. (2004) proposes a normalized and computable version of information distance, *Normalized Compression Distance* (NCD), utilizing compressed length C(x) to approximate Kolmogorov complexity K(x). Formally, it's defined as follows (detailed derivation is shown in Appendix A): $$N C D(x,y)={\frac{C(x y)-\operatorname*{min}\{C(x),C(y)\}}{\operatorname*{max}\{C(x),C(y)\}}}\,\,\,\,(3)$$ The intuition behind using compressed length is that the length of x that has been maximally compressed by a compressor is close to K(x). Generally, the higher the compression ratio, the closer C(x) is to K(x). 
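To make Equation (3) and the kNN step concrete, here is a small runnable sketch that computes gzip-based NCD for the three example sentences above and performs a 1-nearest-neighbor prediction over a toy training set (the labels and the value of k are illustrative; the full procedure corresponds to Listing 1 below).

```python
import gzip

def C(s: str) -> int:
    # gzip-compressed length, a computable stand-in for Kolmogorov complexity K(s)
    return len(gzip.compress(s.encode()))

def ncd(x: str, y: str) -> float:
    cx, cy, cxy = C(x), C(y), C(" ".join([x, y]))
    return (cxy - min(cx, cy)) / max(cx, cy)

x1 = "Japan's Seiko Epson Corp. has developed a 12-gram flying microrobot."
x2 = "The latest tiny flying robot has been unveiled in Japan."
x3 = "Michael Phelps won the gold medal in the 400 individual medley."

# The same-topic pair should typically be closer under NCD.
print(ncd(x1, x2), ncd(x1, x3))

# Toy 1-nearest-neighbor prediction over an illustrative two-example training set.
training_set = [(x2, "sci/tech"), (x3, "sports")]
nearest_label = min(training_set, key=lambda pair: ncd(x1, pair[0]))[1]
print(nearest_label)  # the intuition above predicts "sci/tech"
```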
1 import gzip 2 import numpy as np 3 for ( x1 , _ ) in test_set : Cx1 = len(gzip.compress(x1.encode())) distance_from_x1 = [] for(x2, _) in training_set: Cx2 = len(gzip.compress(x2.encode())) x1x2 = ".join([x1, x2]) Cx1x2 = len(gzip.compress(x1x2.encode())) ncd = (Cx1x2 - min(Cx1,Cx2)) / max( Cx1, Cx2) distance_from_x1.append(ncd) 12 sorted_idx = np . argsort ( np . array ( distance_from_x1 ) ) 13 top_k_class = training_set [ sorted_idx [: k ] , 1] 14 predict_class = max(set( top_k_class ) , key = top_k_class . count ) $\text{P}\left(\text{S}\right)=\text{P}\left(\text{S}\right)\text{P}\left(\text{S}\right)$. Listing 1: Python Code for Text Classification with *gzip*. As our main experiment results use *gzip* as the compressor, C(x) here means the length of x after being compressed by *gzip*. C(xy) is the compressed length of concatenation of x and y. With the distance matrix NCD provides, we can then use k-nearest-neighbor to perform classification. Our method can be implemented with 14 lines of Python code below. The inputs are training_set, test_set, both consisting of an array of *(text, label)* tuples, and k as shown in Listing 1. Our method is a simple, lightweight, and universal alternative to DNNs. It's simple because it doesn't require any preprocessing or training. It's lightweight in that it classifies without the need for parameters or GPU resources. It's universal as compressors are data-type agnostic, and non-parametric methods do not bring underlying assumptions. ## 4 Experimental Setup 4.1 Datasets We choose a variety of datasets to investigate the effects of the number of training samples, the number of classes, the length of the text, and the difference in distribution on accuracy. The details of each dataset are listed in Table 1. Previous works on text classification have two disjoint preferences when choosing evaluation datasets: CNN and RNN-based methods favor large-scale datasets (AG News, DBpedia, YahooAnswers), whereas transductive methods like graph convolutional neural networks focus on smaller ones (20News, Ohsumed, R8, R52) (Li et al., 2022). Dataset Ntrain Ntest C W L V ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) AG News 120K 7.6K 4 44 236 128K DBpedia 560K 70K 14 54 301 1M YahooAnswers 1.4M 60K 10 107 520 1.5M 20News 11K 7.5K 20 406 1902 277K ohsumed 3.4K 4K 23 212 1273 55K R8 5.5K 2.2K 8 102 587 24K R52 6.5K 2.6K 52 110 631 26K KinyarwandaNews 17K 4.3K 14 232 1872 240K KirundiNews 3.7K 923 14 210 1722 63K DengueFilipino 4K 500 5 10 62.7 13K SwahiliNews 22.2K 7.3K 6 327 2.2K 570K SogouNews 450K 60K 5 589 2.8K 611K Model # Par. PT TT ED Preprocessing Details ![3_image_2.png](3_image_2.png) TFIDF+LR 260K ✗ ✓ ✗ tok+tfidf+dict (+lower) LSTM 5.2M ✗ ✓ ✗ tok+dict (+emb+lower+pad) Bi-LSTM+Attn 8.2M ✗ ✓ ✗ tok+dict (+emb+lower+pad) HAN 30M ✗ ✓ ✗ tok+dict (+emb+lower+pad) charCNN 2.7M ✗ ✓ ✗ dict (+lower+pad) textCNN 31M ✗ ✓ ✗ tok+dict (+emb+lower+pad) RCNN 19M ✗ ✓ ✗ tok+dict (+emb+lower+pad) VDCNN 14M ✗ ✓ ✗ dict (+lower+pad) fastText 8.2M ✗ ✓ ✗ tok+dict (+lower+pad+ngram) BERT-base 109M ✓ ✓ ✓ tok+dict+pe (+lower+pad) W2V 0 ✓ ✗ ✗ tok+dict (+lower) SentBERT 0 ✓ ✗ ✓ tok+dict (+lower) ![3_image_3.png](3_image_3.png) We include datasets on both sides in order to investigate how our method performs in both situations. Apart from dataset sizes, we also take the number of classes into account by intentionally including datasets like R52 to evaluate the performance of datasets with a large number of classes. 
We also include the text length of each dataset in Table 1 as previous works (Marton et al., 2005) indicate that it affects the accuracy of compressor-based methods. Generalizing to out-of-distribution datasets has always been a challenge in machine learning. Even with the success of pretrained models, this problem is not alleviated. In fact, Yu et al. (2021) have shown that improved in-distribution accuracy on pretrained models may lead to poor OOD performance in image classification. In order to compare our method with pretrained models on OOD datasets, we choose five datasets that are unseen in BERT's pretrained corpus—Kinyarwanda news, Kirundi news, Filipino dengue, Swahili news, and Sogou news. Those datasets are chosen to have Latin script which means they have a very similar alphabet as English. For example, Swahili has the same vowels as English but doesn't have q,x as consonants; Sogou news is in Pinyin - a phonetic romanization of Chinese. Therefore, those datasets can be viewed as permutations of English alphabets (see Table 7 for text examples). ## 4.2 Baselines We compare our result with (1) neural network methods that require training and (2) nonparametric methods that use the kNN classifier directly, with or without pre-training on external data. Specifically, we choose mainstream architectures for text classification, like logistic regression, fastText (Joulin et al., 2017), RNNs with or without attention (vanilla LSTM (Hochreiter and Schmidhuber, 1997), bidirectional LSTMs (Schuster and Paliwal, 1997) with attention (Wang et al., 2016), hierarchical attention networks (Yang et al., 2016)), CNNs (character CNNs (Zhang et al., 2015), recurrent CNNs (Lai et al., 2015), very deep CNNs (Conneau et al., 2017)) and BERT (Devlin et al., 2019). We also include three other non-parametric methods: word2vec (W2V) (Mikolov et al., 2013), pretrained sentence BERT (SentBERT) (Reimers and Gurevych, 2019), and the length of the instance (TextLength), all using a kNN classifier. "TextLength" is a baseline where the text length of the instance is used as the only input into a kNN classifier, whose result rules out the impact of text length in classification. We present details of models in Table 2. Here we use AG News as an example to estimate the model size, as the number of parameters is affected by the number of classes and the vocabulary size. This dataset has a relatively small vocabulary size and number of classes, making the estimated number of parameters the lower bound of the studied datasets. Some methods require pre-training either on the target dataset or on other external datasets. We also list preprocessing required by the models in Table 2, including tokenization ("tok"), building vocabulary dictionaries and mapping tokens ("dict"), using pretrained word embeddings ("emb"), lowercasing words ("lower") and padding sequences to a certain length ("pad"). Other modelspecific preprocessing includes an extra bag of ngrams ("ngram") for *fastText* and positional embedding ("pe") for BERT. Note that for models that only require training, we do not use pretrained word embeddings; otherwise, the boundary between pretraining and training will become ambiguous. 
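For reference, a non-parametric embedding baseline of this kind can be sketched as follows; this is an illustrative reconstruction (the exact implementation may differ), assuming the `sentence-transformers` package and the `paraphrase-MiniLM-L6-v2` checkpoint listed in Appendix C:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def sentbert_knn_predict(test_texts, train_texts, train_labels, k=2):
    # Frozen encoder: pre-trained only, no task-specific training.
    model = SentenceTransformer("paraphrase-MiniLM-L6-v2")
    emb_train = model.encode(train_texts, normalize_embeddings=True)
    emb_test = model.encode(test_texts, normalize_embeddings=True)
    # Cosine distance = 1 - dot product of L2-normalized embeddings.
    dist = 1.0 - emb_test @ emb_train.T
    preds = []
    for row in dist:
        top_k = [train_labels[i] for i in np.argsort(row)[:k]]
        preds.append(max(set(top_k), key=top_k.count))
    return preds
```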
| Model | Pre-training | Training | AGNews | DBpedia | YahooAnswers | 20News | Ohsumed | R8 | R52 |
|---|---|---|---|---|---|---|---|---|---|
| TFIDF+LR | ✗ | ✓ | 0.898 | 0.982 | 0.715 | 0.827 | 0.549 | 0.949 | 0.874 |
| LSTM | ✗ | ✓ | 0.861 | 0.985 | 0.708 | 0.657 | 0.411 | 0.937 | 0.855 |
| Bi-LSTM+Attn | ✗ | ✓ | 0.917 | 0.986 | 0.732 | 0.667 | 0.481 | 0.943 | 0.886 |
| HAN | ✗ | ✓ | 0.896 | 0.986 | 0.745 | 0.646 | 0.462 | 0.960 | 0.914 |
| charCNN | ✗ | ✓ | 0.914 | 0.986 | 0.712 | 0.401 | 0.269 | 0.823 | 0.724 |
| textCNN | ✗ | ✓ | 0.817 | 0.981 | 0.728 | 0.751 | 0.570 | 0.951 | 0.895 |
| RCNN | ✗ | ✓ | 0.912 | 0.984 | 0.702 | 0.716 | 0.472 | 0.810 | 0.773 |
| VDCNN | ✗ | ✓ | 0.913 | 0.987 | 0.734 | 0.491 | 0.237 | 0.858 | 0.750 |
| fastText | ✗ | ✓ | 0.911 | 0.978 | 0.702 | 0.690 | 0.218 | 0.827 | 0.571 |
| BERT | ✓ | ✓ | 0.944 | 0.992 | 0.768 | 0.868 | 0.741 | 0.982 | 0.960 |
| W2V | ✓ | ✗ | 0.892 | 0.961 | 0.689 | 0.460 | 0.284 | 0.930 | 0.856 |
| SentBERT | ✓ | ✗ | 0.940 | 0.937 | 0.782 | 0.778 | 0.719 | 0.947 | 0.910 |
| TextLength | ✗ | ✗ | 0.275 | 0.093 | 0.105 | 0.053 | 0.090 | 0.455 | 0.362 |
| gzip (ours) | ✗ | ✗ | 0.937 | 0.970 | 0.638 | 0.685 | 0.521 | 0.954 | 0.896 |

Table 3: Test accuracy on the seven in-distribution datasets, using the full training sets.

## 5 Results

## 5.1 Result On In-Distribution Datasets

| Dataset | average | gzip |
|--------------|-----------|--------|
| AGNews | 0.901 | 0.937 |
| DBpedia | 0.978 | 0.970 |
| YahooAnswers | 0.726 | 0.638 |
| 20News | 0.678 | 0.685 |
| Ohsumed | 0.470 | 0.521 |
| R8 | 0.914 | 0.954 |
| R52 | 0.838 | 0.896 |

Table 4: Test accuracy comparison between the average of all baseline models (excluding TextLength) and *gzip*.

We train all baselines on seven datasets (training details are in Appendix C) using their full training sets. The results are shown in Table 3. Our method performs particularly well on AG News, R8, and R52. On the AG News dataset, fine-tuning BERT yields the highest performance among all methods, while our method, without any pre-training, achieves competitive results, only 0.007 points lower than BERT. On both R8 and R52, the only non-pretrained neural network that outperforms our method is HAN. For YahooAnswers, the accuracy of *gzip* is about 7% lower than the average of the neural methods. This may be due to the large vocabulary size of YahooAnswers, which makes it hard for the compressor to compress (a detailed discussion is in Appendix F).

Overall, BERT-based models are robust to the size of in-distribution datasets. Character-based models like charCNN and VDCNN perform badly when the dataset is small and the vocabulary size is large (e.g., 20News). Word-based models are better at handling large vocabularies. The result of TextLength is extremely low, indicating that the compressed length used in NCD does not benefit from the length distribution of different classes. *gzip* does not perform well on extremely large datasets (e.g., YahooAnswers), but is competitive on medium and small datasets. Performance-wise, the only non-pretrained deep learning model that is competitive with *gzip* is HAN, which surpasses *gzip* on four datasets and still achieves relatively high accuracy when it is beaten by *gzip*, unlike textCNN. The difference is that *gzip* does not require training. We list the average of all baseline models' test accuracy (except TextLength, for its very low accuracy) in Table 4. We observe that our method is either higher than or close to the average on all but the YahooAnswers dataset.

## 5.2 Result On Out-Of-Distribution Datasets

On five OOD datasets (Kinyarwanda news, Kirundi news, Filipino dengue, Swahili news and Sogou news), we also select DNNs to cover a wide range of parameter numbers. We discard CNN-based methods due to their inferiority when datasets are small, as shown in both Section 5.1 and Zhang et al. (2015). In addition, we also add BERT pretrained on 104 languages (mBERT).
We can see in Table 5 that on languages that mBERT has not been pretrained on (Kinyarwanda, Kirundi, or Pinyin), it is worse than BERT. Compared with non-pretrained ones, pretrained models do not hold their advantage on low-resource languages with smaller data sizes, except for Filipino which shares a large vocabulary with English words. On large OOD datasets (i.e., SogouNews), BERT achieves competitive results with other non-pretrained neural networks. | Model/Dataset | KinyarwandaNews | KirundiNews | DengueFilipino | SwahiliNews | SogouNews | | | | | | |-----------------|-------------------|---------------|------------------|---------------|-------------|-------------|-------|-------------|-------|-------------| | Shot# | Full | 5-shot | Full | 5-shot | Full | 5-shot | Full | 5-shot | Full | 5-shot | | Bi-LSTM+Attn | 0.843 | 0.253±0.061 | 0.872 | 0.254±0.053 | 0.948 | 0.369±0.053 | 0.863 | 0.357±0.049 | 0.952 | 0.534±0.042 | | HAN | 0.820 | 0.137±0.033 | 0.881 | 0.190±0.099 | 0.981 | 0.362±0.119 | 0.887 | 0.264±0.042 | 0.957 | 0.425±0.072 | | fastText | 0.869 | 0.170±0.057 | 0.883 | 0.245±0.242 | 0.870 | 0.248±0.108 | 0.874 | 0.347±0.255 | 0.930 | 0.545±0.053 | | W2V | 0.874 | 0.281±0.236 | 0.904 | 0.288±0.189 | 0.993 | 0.481±0.158 | 0.892 | 0.373±0.341 | 0.943 | 0.141±0.005 | | SentBERT | 0.788 | 0.292±0.062 | 0.886 | 0.314±0.060 | 0.992 | 0.629±0.143 | 0.822 | 0.436±0.081 | 0.860 | 0.485±0.043 | | BERT | 0.838 | 0.240±0.060 | 0.879 | 0.386±0.099 | 0.979 | 0.409±0.058 | 0.897 | 0.396±0.096 | 0.952 | 0.221±0.041 | | mBERT | 0.835 | 0.229±0.066 | 0.874 | 0.324±0.071 | 0.983 | 0.465±0.048 | 0.906 | 0.558±0.169 | 0.953 | 0.282±0.060 | | gzip (ours) | 0.891 | 0.458±0.065 | 0.905 | 0.541±0.056 | 0.998 | 0.652±0.048 | 0.927 | 0.627±0.072 | 0.975 | 0.649±0.061 | Without any pre-training or fine-tuning, our method outperforms both BERT and mBERT on all five datasets. In fact, our experiments show that our method outperforms both pretrained and nonpretrained deep learning methods on OOD datasets, which back our claim that our method is universal in terms of dataset distributions. To put it simply, our method is designed to handle unseen datasets: the compressor is data-type-agnostic by nature and non-parametric methods do not introduce inductive bias during training. ## 5.3 Few-Shot Learning We further compare our method with deep learning methods under the few-shot setting. We carry out experiments on AG News, DBpedia, and SogouNews across both non-pretrained deep neural networks and pretrained ones. We use n-shot labeled examples per class from the training dataset, where n = {5, 10, 50, 100}. We choose these three datasets, as their scale is large enough to cover 100shot settings and they vary in text lengths as well as languages. We choose methods whose trainable parameters range from zero parameters like word2vec and sentence BERT to hundreds of millions of parameters like BERT, covering both wordbased models (HAN) and an n-gram one (fastText). We plot the results in Figure 2 (detailed numbers are shown in Appendix D). As shown, *gzip* outperforms non-pretrained models with 5, 10, 50 settings on all three datasets. When the number of shots is as few as n = 5, *gzip* outperforms non-pretrained models by a large margin: *gzip* is 115% better in accuracy than fastText in the AG News 5-shot setting. In the 100-shot setting, *gzip* also outperforms nonpretrained models on AG News and SogouNews but slightly underperforms on DBpedia. 
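For clarity, the n-shot training subsets can be built by sampling n labeled examples per class; a minimal sketch (illustrative, with an arbitrary random seed) follows:

```python
import random
from collections import defaultdict

def sample_n_shot(training_set, n, seed=0):
    # training_set: list of (text, label) pairs; returns up to n examples per class.
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for text, label in training_set:
        by_class[label].append((text, label))
    subset = []
    for label, items in by_class.items():
        subset.extend(rng.sample(items, min(n, len(items))))
    return subset
```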
Previous work (Nogueira et al., 2020; Zhang et al., 2021) shows that pretrained models are excellent few-shot learners, which is reflected in the consistently high accuracy of BERT and SentBERT on in-distribution datasets like AG News and DBpedia under few-shot settings (BERT reaches almost perfect accuracy on DBpedia, probably because the data is extracted from Wikipedia, which BERT is pretrained on). It's worth noting, though, that *gzip* outperforms SentBERT for 50 and 100 shots. However, as shown in the SogouNews results, when the dataset is distinctively different from the pre-training data, the inductive bias introduced by pre-training leads to low accuracy for BERT and SentBERT in the 10, 50, and 100-shot settings, and especially in the 5-shot setting. In general, as the shot number increases, the accuracy difference between *gzip* and the deep learning methods becomes smaller. W2V is an exception with a large variance in accuracy. This is because its vectors are trained for a limited set of words, meaning that numerous tokens in the test set are unseen and hence out-of-vocabulary.

We further investigate the quality of DNNs and our method in the 5-shot setting on the five OOD datasets, tabulating results in Table 5. Under the 5-shot setting on OOD datasets, our method surpasses all the deep learning methods by a huge margin: it surpasses the accuracy of BERT by 91%, 40%, 59%, 58%, and 194%, and surpasses mBERT's accuracy by 100%, 67%, 40%, 12%, and 130% on the corresponding five datasets (mBERT has much higher accuracy than BERT in the few-shot setting on Filipino and Swahili, which mBERT was pretrained on). The outperformance of our method comes from compressors' excellent ability to capture regularity, which is prominent when the labeled data are too few to train DNNs effectively.

## 6 Analyses

## 6.1 Using Other Compressors

As the compressor in our method can be replaced by any other lossless compressor, we evaluate the performance of three others: bz2, *lzma*, and *zstandard*. Due to the low compression speed of *lzma*, we randomly select 1,000 test samples from the whole test set to evaluate, and conduct our experiments under 5, 10, 50, and 100-shot settings. We repeat the experiments under each setting five times to calculate the mean and the 95% confidence interval.

Each of the three compressors has an underlying algorithm different from *gzip*'s. bz2 uses the Burrows-Wheeler algorithm (Burrows, 1994) to permute the order of characters in the strings, creating more repeated "substrings" that can be compressed, giving it a higher compression ratio (e.g., it can achieve 2.57 bits per character (bpc) on AGNews while *gzip* achieves only 3.38 bpc). *lzma* is similar to *gzip* in that both are based on LZ77 (Ziv and Lempel, 1977), a dictionary-based compression algorithm using *(offset, length)* pairs to represent n-grams that have previously appeared in the search buffer; *gzip* uses the DEFLATE algorithm, which further encodes the *(offset, length)* pairs with Huffman coding (Huffman, 1952), whereas *lzma* uses range coding, giving *lzma* a higher compression ratio but a slower compression speed. *zstandard* (*zstd*) is a newer compression algorithm built on LZ77, Huffman coding, and Asymmetric Numeral Systems (ANS) (Duda, 2009). We pick *zstd* because of its high compression speed and a compression ratio close to *gzip*'s. A competitive result would suggest that *zstd* might be an alternative to *gzip* and could speed up classification. In Figure 4, we plot the test accuracy and compression ratio of each compressor.
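Because NCD only needs a compressed-length function, swapping in another compressor is a one-line change. The sketch below is illustrative (not the exact experimental code) and assumes the standard-library `gzip`, `bz2`, and `lzma` modules plus the third-party `zstandard` package:

```python
import gzip, bz2, lzma
import zstandard

# Compressed-length functions for each compressor, applied to raw bytes.
COMPRESSED_LEN = {
    "gzip": lambda b: len(gzip.compress(b)),
    "bz2":  lambda b: len(bz2.compress(b)),
    "lzma": lambda b: len(lzma.compress(b)),
    "zstd": lambda b: len(zstandard.ZstdCompressor().compress(b)),
}

def ncd(x: str, y: str, compressor: str = "gzip") -> float:
    # Normalized Compression Distance, Equation (3), with a pluggable compressor.
    C = COMPRESSED_LEN[compressor]
    cx, cy = C(x.encode()), C(y.encode())
    cxy = C((x + " " + y).encode())
    return (cxy - min(cx, cy)) / max(cx, cy)
```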
Compression ratio is calculated as the original size divided by the compressed size, so the larger the compression ratio, the more a compressor can compress (we use compression ratio instead of bpc here because the bpc values are too close to each other to be differentiated). Each marker type represents a dataset, with '+' representing the mean of each compressor's test accuracy across different shot settings. In general, *gzip* achieves relatively high and stable accuracy across the three datasets. *lzma* is competitive with *gzip*, but its speed is much slower. Despite its high compression ratio, bz2 performs the worst on AGNews and DBpedia. Normally, a higher compression ratio suggests that the NCD based on that compressor approximates the information distance E(x, y) better. But in bz2's case, its accuracy is always lower than the regression line (Figure 4). We conjecture this may be because the Burrows-Wheeler algorithm used by bz2 dismisses the information of character order by permuting characters during compression.

We investigate the correlation between accuracy and compression ratio across compressors and find a moderate monotonic correlation, as shown in Figure 4. As the shot number increases, the linear correlation becomes more obvious, with Spearman correlation rs = 0.605 over all shot settings and Pearson correlations rp = 0.575, 0.638, 0.691, 0.719 on the 5, 10, 50, and 100-shot settings respectively, across the four compressors. We have also found that for a single compressor, the more easily a dataset can be compressed, the higher the accuracy *gzip* achieves (details are in Appendix F.1). Combining our findings, we can see that a compressor performs best when it has a high compression ratio on datasets that are highly compressible, unless crucial information is disregarded by its compression algorithm.

Table 6: Comparison with other compressor-based methods under the 100-shot setting.

## 6.2 Using Other Compressor-Based Methods

A majority of previous compressor-based text classification is built on estimating the cross entropy between the probability distribution built on class c and the document d, Hc(d), as we mention in Section 2.1. As summarized in Russell (2010), the procedure for using a compressor to estimate Hc(d) is:

1. For each class c, concatenate all samples dc in the training set belonging to c.
2. Compress dc as one long document to get the compressed length C(dc).
3. Concatenate the given test sample du with dc and compress to get C(dcdu).
4. The predicted class is arg minc C(dcdu) − C(dc).

The distance metric used by previous work (Marton et al., 2005; Russell, 2010) is mainly C(dcdu) − C(dc). Although using this distance metric is faster than pair-wise distance matrix computation on small datasets, it has several drawbacks: (1) most compressors have a limited "size": for *gzip* it is the sliding window within which repeated strings can be found, while for *lzma* it is the dictionary size it can keep a record of. This means that even with a large number of training samples, the compressor cannot take full advantage of them; (2) when dc is large, compressing dcdu can be slow, and this cannot be solved by parallelization.
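For concreteness, the cross-entropy procedure in steps 1-4 can be sketched as follows (an illustrative rendering with *gzip*, subject to the window-size caveat discussed above):

```python
import gzip

def cross_entropy_predict(test_text, training_set):
    # training_set: list of (text, label) pairs.
    C = lambda s: len(gzip.compress(s.encode()))
    per_class = {}
    for text, label in training_set:
        per_class.setdefault(label, []).append(text)
    scores = {}
    for label, texts in per_class.items():
        d_c = " ".join(texts)                 # step 1: concatenate all samples of class c
        C_dc = C(d_c)                         # step 2: compressed length of d_c
        C_dcdu = C(d_c + " " + test_text)     # step 3: compress d_c concatenated with d_u
        scores[label] = C_dcdu - C_dc         # step 4: compressor-based estimate of H_c(d_u)
    return min(scores, key=scores.get)        # predicted class = argmin_c C(d_c d_u) - C(d_c)
```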
These two main drawbacks stop this method from being applied to a really large dataset. Thus, we limit the size of the dataset to 1,000 randomly picked test samples and 100-shot from each class in the training set to compare our method with this method. In Table 6, "*gzip* (ce)" means using the cross entropy C(dcdu) − C(dc) while "*gzip* (kNN)" refers to our method. We carry out each experiment for five times and calculate the mean and 95% confidence interval. Our method outperforms the crossentropy method on AGNews and SogouNews. The reason for the large accuracy gap between the two methods on SogouNews is probably because each instance in SogouNews is very long, and the size of each sample can be 11.2K, which, when concatenated, makes dc larger than 1,000K under 100-shot setting, while *gzip* typically has 32K window size only. When the search space is tremendously smaller than the size of dc, the compressor fails to take advantage of all the information from the training set, which renders the compression ineffective. The cross-entropy method does perform very well on YahooAnswers. This might be because on a divergent dataset like YahooAnswers, which is created by numerous online users, concatenating all the samples in a class allows the cross-entropy method to take full advantage of all the information from a single class. We also test the performance of the compressorbased cross-entropy method on *full* AGNews dataset, as it is a relatively smaller one with a shorter single instance. The accuracy is 0.745, not much higher than the 100-shot setting, which further confirms that using C(dcdu) − C(dc) as a distance metric cannot take full advantage of the large datasets. In general, the result suggests that the compressor-based cross-entropy method is not as advantageous as ours on large datasets. ## 7 Conclusions And Future Work In this paper, we use *gzip* with a compressorbased distance metric to do text classification. Our method achieves an accuracy comparable to non-pretrained neural network classifiers on indistribution datasets and outperforms both pretrained and non-pretrained models on out-ofdistribution datasets. We also find that our method has greater advantages under few-shot settings. For future works, we will extend this work by generalizing *gzip* to neural compressors on text, as recent studies (Jiang et al., 2022) show that combining neural compressors derived from deep latent variables models with compressor-based distance metrics can even outperform semi-supervised methods for image classification. ## Limitations As the computation complexity of kNN is O(n 2), when the size of a dataset gets really big, speed becomes one of the limitations of our method. Multithreads and multi-processes can greatly boost the speed. Lempel-Ziv Jaccard Distance (LZJD) (Raff and Nicholas, 2017), a more efficient version of NCD can also be explored to alleviate the inefficiency problem. In addition, as our purpose is to highlight the trade-off between the simplicity of a model and its performance, we focus on the vanilla version of DNNs, which is already complex enough compared with our method, without add-ons like pretrained embeddings (Pennington et al., 2014). This means we do not exhaust all the techniques one can use to improve DNNs, and neither do we exhaust all the text classification methods in the literature. Furthermore, our work only covers traditional compressors. 
As traditional compressors are only able to capture the orthographic similarity, they may not be sufficient for harder classification tasks like emotional classification. Fortunately, the ability to compress redundant semantic information may be made possible by neural compressors built on latent variable models (Townsend et al., 2018). ## Ethics Being parameter-free, our method doesn't rely on GPU force but CPU resources only. Thus, it does not bring negative environmental impacts revolving around GPU. In terms of overgeneralization, we conduct our experiments on both in-distribution and out-of-distribution datasets, covering six languages. As compressors are data-type agnostic, they are more inclusive to datasets, which allows us to classify low-resource languages like Kinyarwanda, Kirundi, and Swahili and to mitigate the underexposure problem (Hovy and Spruit, 2016). However, as our method has not been fully explored on datasets other than topic classification, it is very possible that our method makes unexpected classification mistakes on tasks like emotion classification. We encourage the usage of this method in the real world to be limited to topic classification and hope that future work can explore more diverse tasks. ## Acknowledgement This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada, and in part by the Global Water Futures program funded by the Canada First Research Excellence Fund (CFREF). ## References Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019a. Docbert: Bert for document classification. *arXiv preprint arXiv:1904.08398*. Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019b. Rethinking complex neural network architectures for document classification. In Proceedings of the 2019 Conference of NAACL-HLT, Volume 1 (Long and Short Papers), pages 4046–4051. Charles H Bennett, Péter Gács, Ming Li, Paul MB Vitányi, and Wojciech H Zurek. 1998. Information distance. *IEEE Transactions on information theory*, 44(4):1407–1423. Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632– 642. Michael Burrows. 1994. A block-sorting lossless data compression algorithm. *SRC Research Report, 124*. Xin Chen, Brent Francia, Ming Li, Brian Mckinnon, and Amit Seker. 2004. Shared information and program plagiarism detection. IEEE Transactions on Information Theory, 50(7):1545–1551. Alexis Conneau, Holger Schwenk, Loïc Barrault, and Yann Lecun. 2017. Very deep convolutional networks for text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1107–1116. David Pereira Coutinho and Mario AT Figueiredo. 2015. Text classification using compression-based dissimilarity measures. *International Journal* of Pattern Recognition and Artificial Intelligence, 29(05):1553004. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Jarek Duda. 2009. Asymmetric numeral systems. *arXiv* preprint arXiv:0902.0271. 
Eibe Frank, Chang Chui, and Ian H Witten. 2000. Text categorization using compression models. William Hersh, Chris Buckley, TJ Leone, and David Hickam. 1994. Ohsumed: An interactive retrieval evaluation and new large test collection for research. In *SIGIR'94*, pages 192–201. Springer. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735– 1780. Dirk Hovy and Shannon L Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591–598. David A Huffman. 1952. A method for the construction of minimum-redundancy codes. *Proceedings of the* IRE, 40(9):1098–1101. Zhiying Jiang, Yiqin Dai, Ji Xin, Ming Li, and Jimmy Lin. 2022. Few-shot non-parametric learning with deep latent variable model. *Advances in Neural Information Processing Systems (NeurIPS)*. Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In *European conference on machine learning*, pages 137–142. Springer. Armand Joulin, Edouard Grave, and Piotr Bojanowski Tomas Mikolov. 2017. Bag of tricks for efficient text classification. *EACL 2017*, page 427. Alexandros Kastanos and Tyler Martin. 2021. Graph convolutional network for swahili news classification. arXiv preprint arXiv:2103.09325. Nitya Kasturi and Igor L Markov. 2022. Text ranking and classification using data compression. In *I (Still)* Can't Believe It's Not Better! Workshop at NeurIPS 2021, pages 48–53. PMLR. Kazuya Kawakami. 2008. Supervised sequence labelling with recurrent neural networks. *Ph. D. thesis*. Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186. Eamonn Keogh, Stefano Lonardi, and Chotirat Ann Ratanamahatana. 2004. Towards parameter-free data mining. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 206–215. Dmitry V Khmelev and William J Teahan. 2003. A repetition based measure for verification of text collections and for text categorization. In *Proceedings* of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, pages 104–110. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *ICLR (Poster)*. Andrei N Kolmogorov. 1963. On tables of random numbers. Sankhya: The Indian Journal of Statistics, ¯ Series A, pages 369–376. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In *Twenty-ninth AAAI conference on artificial intelligence*. Ken Lang. 1995. Newsweeder: Learning to filter netnews. In Proceedings of the Twelfth International Conference on Machine Learning, pages 331–339. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. *nature*, 521(7553):436–444. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic web, 6(2):167–195. Ming Li, Xin Chen, Xin Li, Bin Ma, and Paul MB Vitányi. 2004. The similarity metric. *IEEE transactions on Information Theory*, 50(12):3250–3264. 
Qian Li, Hao Peng, Jianxin Li, Congying Xia, Renyu Yang, Lichao Sun, Philip S Yu, and Lifang He. 2022. A survey on text classification: From traditional to deep learning. *ACM Transactions on Intelligent Systems and Technology (TIST)*, 13(2):1–41. Xien Liu, Song Wang, Xiao Zhang, Xinxin You, Ji Wu, and Dejing Dou. 2020. Label-guided learning for text classification. *arXiv preprint arXiv:2002.10772*. Evan Dennison Livelo and Charibeth Cheng. 2018. Intelligent dengue infoveillance using gated recurrent neural learning and cross-label frequencies. In 2018 IEEE International Conference on Agents (ICA), pages 2–7. IEEE. Yuval Marton, Ning Wu, and Lisa Hellerstein. 2005. On compression-based text classification. In European Conference on Information Retrieval, pages 300–314. Springer. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Rubungo Andre Niyongabo, Qu Hong, Julia Kreutzer, and Li Huang. 2020. Kinnews and kirnews: Benchmarking cross-lingual text classification for kinyarwanda and kirundi. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5507–5521. Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 708–718. Antoine Nzeyimana and Andre Niyongabo Rubungo. 2022. Kinyabert: a morphology-aware kinyarwanda language model. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5347–5363. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Edward Raff and Charles Nicholas. 2017. An alternative to ncd for large sequences, lempel-ziv jaccard distance. In *Proceedings of the 23rd ACM SIGKDD* international conference on knowledge discovery and data mining, pages 1007–1015. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992. Stuart J Russell. 2010. *Artificial intelligence a modern* approach. Pearson Education, Inc. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE transactions on Signal Processing, 45(11):2673–2681. Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple wordembedding-based models and associated pooling mechanisms. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440–450. Eyal Shnarch, Ariel Gera, Alon Halfon, Lena Dankin, Leshem Choshen, Ranit Aharonov, and Noam Slonim. 2022. Cluster & tune: Boost cold start performance in text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7639–7653. William J Teahan and David J Harper. 2003. Using compression-based language models for text categorization. 
In *Language modeling for information* retrieval, pages 141–165. Springer. James Townsend, Thomas Bird, and David Barber. 2018. Practical lossless compression with latent variables using bits back coding. In International Conference on Learning Representations. Paul MB Vitányi, Frank J Balbach, Rudi L Cilibrasi, and Ming Li. 2009. Normalized information distance. In Information theory and statistical learning, pages 45–82. Springer. Canhui Wang, Min Zhang, Shaoping Ma, and Liyun Ru. 2008. Automatic online news issue construction in web environment. In *Proceedings of the 17th* international conference on World Wide Web, pages 457–466. Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based lstm for aspect-level sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 606–615. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the NAACL-HLT, Volume 1 (Long Papers)*, pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In *Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies*, pages 1480– 1489. Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In *Proceedings of the AAAI conference on artificial* intelligence, volume 33, pages 7370–7377. Yaodong Yu, Heinrich Jiang, Dara Bahri, Hossein Mobahi, Seungyeon Kim, Ankit Singh Rawat, Andreas Veit, and Yi Ma. 2021. An empirical study of pre-trained vision models on out-of-distribution generalization. In *NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications*. Haode Zhang, Yuwei Zhang, Li-Ming Zhan, Jiaxin Chen, Guangyuan Shi, Xiao-Ming Wu, and Albert YS Lam. 2021. Effectiveness of pre-training for few-shot intent classification. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1114–1120. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. *Advances in neural information processing* systems, 28. Jacob Ziv and Abraham Lempel. 1977. A universal algorithm for sequential data compression. *IEEE* Transactions on information theory, 23(3):337–343. ## A Derivation Of Ncd Recall that information distance E(*x, y*) is: $$E(x,y)=\max\{K(x|y),K(y|x)\}\tag{4}$$ $$=K(xy)-\min\{K(x),K(y)\}\tag{5}$$ E(*x, y*) equates the similarity between two objects in a program that can convert one to another. The simpler the converting program is, the more similar the objects are. For example, the negative of an image is very similar to the original one as the transformation can be simply described as "inverting the color of the image". In order to compare the similarity, the relative distance is preferred. Vitányi et al. 
(2009) propose a normalized version of E(*x, y*) called Normalized information distance (NID). Definition 1 (NID) *NID is a function:* Ω × Ω → [0, 1], where Ω *is a non-empty set, defined as:* $$N I D(x,y)={\frac{\operatorname*{max}\{K(x|y),K(y|x)\}}{\operatorname*{max}\{K(x),K(y)\}}}.\quad(6)$$ Equation (6) can be interpreted as follows: Given two sequences x, y, K(y) ≥ K(x): $$\text{NID}(x,y)=\frac{K(y)-I(x:y)}{K(y)}=1-\frac{I(x:y)}{K(y)},\tag{7}$$ where $I(x:y)=K(y)-K(y|x)$ means the where I(x : y) = K(y) − K(y|x) means the mutual algorithmic information. I(x:y) K(y) means the shared information (in bits) per bit of information contained in the most informative sequence, and Equation (7) here is a specific case of Equation (6). Normalized Compression Distance (NCD) is a computable version of NID based on real-world compressors. In this context, K(x) can be viewed as the length of x after being maximally compressed. Suppose we have C(x) as the length of compressed x produced by a real-world compressor, then NCD is defined as: $$\text{NCD}(x,y)=\frac{C(xy)-\min\{C(x),C(y)\}}{\max\{C(x),C(y)\}}.\tag{8}$$ NCD is thus computable in that it not only uses compressed length to approximate K(x) but also replaces conditional Kolmogorov complexity with C(xy) that only needs a simple concatenation of x, y. ## B Dataset Details In addition to statistics of the datasets we use, we also include one example for each dataset in Table 7. We then briefly introduce what the dataset is about and how are they collected. AG News7contains more than 1 million news articles from an academic news search engine ComeToMyHead and is collected for a research purpose; DBpedia (Lehmann et al., 2015) is extracted from Wikipedia as a crowd-sourced project and we use the version in torchtext version 0.11. YahooAnswers is introduced in Zhang et al. (2015) through the Yahoo! Webscope program and use the 10 largest main categories for topic classification corpus. 20News (Lang, 1995) is originally collected by Ken Lang and is widely used to evaluate text classification and we use the version in scikit-learn. Ohsumed (Hersh et al., 1994) is collected from 270 medical journals over a five-year period (19871991) with 23 cardiovascular diseases. We use the subset introduced in (Yao et al., 2019) to create a single-label classification. Both R8 and R52 are two subsets from Reuters21578 collection (Joachims, 1998) which can be downloaded from Text Categorization Corpora. KirundiNews (KirNews) and KinyarwandaNews (KinNews) are introduced in (Niyongabo et al., 2020), collected as a benchmark for text classification on two low-resource African languages, which can be freely downloaded from the repository. SwahiliNews (Swahili)8is a news dataset in Swahili. It's spoken by 100-150 million people across East Africa, and the dataset is created to help leverage NLP techniques across the African continent, which can be freely downloaded from huggingface datasets. DengueFilipino (Filipino) (Livelo and Cheng, 2018) is a multi-label low-resource classification dataset, which can be freely downloaded from huggingface datasets. We process it as a single-label classification task - we randomly select a label if an instance have multiple labels and use the same processed dataset for every model. SogouNews is collected by Wang et al. (2008), segmented and labeled by Zhang et al. (2015). We use the version that's publicly available on torchtext. 
7http://groups.di.unipi.it/gulli/AG_corpus_of_news ˜ _articles.html 8https://doi.org/10.5281/zenodo.5514203 | Dataset | Sample Text | |--------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | AGNews | "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street's dwindling band of ultra-cynics, are seeing green again." | | DBpedia | "European Association for the Study of the Liver", "The European Association for the Study of the Liver (EASL) is a European professional association for liver disease." | | YahooAnswers | "Is a transponder required to fly in class C airspace?","I've heard that it may not be for some aircraft. What are the rules?","the answer is that you must have a transponder in order to fly in a class C airspace." "Subject: WHAT car is this!? Nntp-Posting-Host: rac3.wam.umd.edu Organization: University of Maryland, College Park Lines: 15 I was wondering if anyone out there could enlighten me on this car I saw the other day. It was a 2-door sports car, looked to be from the late 60s/ early 70s. It was called a Bricklin. The doors were really small. In addition, the front bumper was separate from the rest of the body. This is all I know. If anyone can tellme a model name, engine specs, years of production, where this car is made, history, or whatever info you have on this funky looking car, please e-mail. Thanks,- IL —- brought to you by your neighborhood Lerxst —-" | | 20News | "Protection against allergen-induced asthma by salmeterol.The effects of the long-acting beta 2-agonist salmeterol on early and late phase airways events provoked by inhaled allergen were assessed in a group of atopic asthmatic patients.In a placebo-controlled study, salmeterol 50 micrograms inhaled before allergen challenge ablated both the early and late phase of allergen-induced bronchoconstriction over a 34 h time period.Salmeterol also completely inhibited the allergen-induced rise in non-specific bronchial responsiveness over the same time period.These effects were shown to be unrelated to prolonged bronchodilatation or functional antagonism.These data suggest novel actions for topically active longacting beta 2-agonists in asthma that extend beyond their protective action on airways smooth muscle." 
| | Ohsumed | "champion products ch approves stock split champion products inc said its board of directors approved a two for one stock split of its common shares for shareholders of record as of april the company also said | | R8 | its board voted to recommend to shareholders at the annual meeting april an increase in the authorized capital stock from five mln to mln shares reuter " "january housing sales drop realty group says sales of previously owned homes dropped pct in january to a seasonally adjusted annual rate of mln units the national association of realtors nar said but the december rate of mln units had been the highest since the record mln unit sales rate set in november the group said the drop in january is not surprising considering that a significant portion of december s near record pace was made up of sellers seeking to get favorable capital gains treatment under the old tax laws said the nar s john tuccillo reuter" | | R52 | "mutzig beer fest itegerejwe n'abantu benshi kigali mutzig beer fest thedition izabera juru parki rebero hateganyijwe imodoka zizajya zifata abantu buri minota zibakura sonatubei remera stade kumarembo areba miginai remera mugiporoso hamwe mumujyi rond point nini kigali iki gitaramo kizaba cyatumiwemo abahanzi batandukanye harimo kizigenza mugihugu cy'u burundi uzwi izina kidum benshi bakaba bamuziho gucuranga neza live music iki gitaramo kikazatangira isaha saa kumi n'ebyiri z'umugoroba taliki kugeza saa munani mugitondo taliki kwinjira bizasaba amafaranga y'u rwanda kubafite mutzig golden card aha niho tike zigurirwa nakumat la gallette simba super market flurep" | | KinNews | "sentare yiyungurizo ntahangwa yagumije munyororo abamenyeshamakuru bane abo bamenyeshamakuru bakaba bakorera ikinyamakuru iwacu bakaba batawe mvuto kwezi kw'icumi umwaka bakaba bagiye ntara bubanza kurondera amakuru yavuga hari abagwanya leta binjiye gihugu abajejwe umutekano baciye babafata bagishika komine bukinanyana ahavugwa bagwanyi bakaba baciye bashikirizwa sentare nkuru bubanza umushikirizamanza akaba yaciye abagiriza icaha co kwifatanya n'abagwanyi gutera igihugu icaha cahavuye gihindurwa citwa icaha co gushaka guhungabanya umutekano w'igihugu iyo sentare yaciye ibacira imyaka ibiri nusu n'amande y'amafaranga umuriyoni umwe umwe icabafashe cane n'ubutumwe bwafatanwe umwe muribo buvuga 'bagiye i bubanza gufasha abagwanyi" ababuranira bakaba baragerageje kwerekana kwabo bamenyeshamakuru ataco bapfana n'abagwanyi ikinyamakuru iwacu kikaba carunguruje sentare yiyungurizo ntahangwa ariko sentare yafashe ingingo kubagumiza mumunyororo ikinyamakuru iwacu kikavuga kigiye kwitura sentare ntahinyuzwa" | | Filipino | "Kung hindi lang absent yung ibang pipirma sa thesis namen edi sana tapos na hardbound" | | KirNews | "TIMU ya taifa ya Tanzania, Serengeti Boys jana ilijiweka katika nafasi fi nyu katika mashindano ya Mataifa ya Afrika kwa wachezaji wenye umri chini ya miaka 17 baada ya kuchapwa mabao 3-0 na Uganda kwenye Uwanja wa Taifa, Dar es Salaam.Uganda waliandika bao lao la kwanza katika dakika ya 15 lililofungwa na Kawooya Andrew akiunganisha wavuni krosi ya Najibu Viga huku lile la pili likifungwa na Asaba Ivan katika dakika ya 27 Najib Yiga.Serengeti Boys iliendelea kulala, Yiga aliifungia Uganda bao la tatu na la ushindi na kuifanya Serengeti kushika mkia katika Kundi A na kuacha simanzi kwa wapenzi wa soka nchini. 
Serengeti Boys inasubiri mchezo wa mwisho dhidi ya Senegal huku Nigeria ikisonga mbele baada ya kushinda mchezo wake wa awali kwenye uwanja huo na kufikisha pointi sita baada ya kushinda ule wa ufunguzi dhidi ya Tanzania." | | SwahiliNews | "2008 di4 qi1 jie4 qi1ng da3o guo2 ji4 che1 zha3n me3i nv3 mo2 te4 ","2008di4 qi1 jie4 qi1ng da3o guo2 ji4 che1 zha3n yu2 15 ri4 za4i qi1ng da3o guo2 ji4 hui4 zha3n zho1ng xi1n she4ng da4 ka1i mu4 . be3n ci4 che1 zha3n jia1ng chi2 xu4 da4o be3n yue4 19 ri4 . ji1n nia2n qi1ng da3o guo2 ji4 che1 zha3n shi4 li4 nia2n da3o che2ng che1 zha3n gui1 mo2 zui4 da4 di2 yi1 ci4 , shi3 yo4ng lia3o qi1ng da3o guo2 ji4 hui4 zha3n zho1ng xi1n di2 qua2n bu4 shi4 ne4i wa4i zha3n gua3n . yi3 xia4 we2i xia4n cha3ng mo2 te4 tu2 pia4n ." | | SogouNews | Table 7: Sample text for each dataset. | | Paper | Model | Emb | AGNews | DBpedia | YahooAnswers | 20News | Ohsumed | R8 | R52 | SogouNews | |-----------------------|---------|-------|----------|-----------|----------------|----------|-----------|-------|-------|-------------| | Zhang et al. (2015) | LSTM | ✓ | 0.860 | 0.985 | 0.708 | - | - | - | - | 0.951 | | charCNN | ✗ | 0.914 | 0.985 | 0.680 | - | - | - | - | 0.956 | | | Yang et al. (2016) | HAN | ✓ | - | - | 0.758 | - | - | - | - | - | | charCNN | ✗ | 0.872 | 0.983 | 0.712 | - | - | - | - | 0.951 | | | Joulin et al. (2017) | VDCNN | ✗ | 0.913 | 0.987 | 0.734 | - | - | - | - | 0.968 | | fastText | ✗ | 0.915 | 0.981 | 0.720 | - | - | - | - | 0.939 | | | Conneau et al. (2017) | VDCNN | ✗ | 0.908 | 0.986 | 0.724 | - | - | - | - | 0.962 | | Yao et al. (2019) | LSTM | ✗ | - | - | - | 0.657 | 0.411 | 0.937 | 0.855 | - | | fastText | ✓ | - | - | - | 0.797 | 0.557 | 0.947 | 0.909 | - | | | fastText | ✓ | 0.925 | 0.986 | 0.723 | 0.114 | 0.146 | 0.860 | 0.716 | - | | | Liu et al. (2020) | BiLSTM | ✓ | - | - | - | 0.732 | 0.493 | 0.963 | 0.905 | - | | BERT | ✗ | - | - | - | 0.679 | 0.512 | 0.960 | 0.897 | - | | Table 8: Results reported in previous works on datasets with abundant resources with embedding (Emb) information. | Paper | Model | Emb | PT | KinyarwandaNews | KirundiNews | SwahiliNews | DengueFilipino | |------------------------------|-------------|----------------|----------------|-------------------|---------------|---------------|------------------| | Niyongabo et al. (2020) | charCNN | ✗ | ✗ | 0.717 | 0.692 | - | - | | BiGRU | ✓(Kin. W2V) | ✗ | 0.887 | 0.859 | - | - | | | CNN | ✓(Kin. W2V) | ✗ | 0.875 | 0.857 | - | - | | | Kastanos and Martin (2021) | fastText | ✗ | ✗ | - | - | 0.675 | - | | BERTBP E | ✗ | ✓(Kin. Corpus) | 0.883 | - | - | - | | | BERTMORP HO | ✗ | ✓(Kin. Corpus) | 0.869 | - | - | - | | | Nzeyimana and Rubungo (2022) | KinyaBERT | ✗ | ✓(Kin. Corpus) | 0.880 | - | - | - | Table 9: Results reported in previous works on low resource languages with embedding (Emb) and pre-training (PT) information. Paper Model AGNews DBpedia Shnarch et al. (2022)BERT 0.619 0.312 BERTIT:CLUSTER 0.807 0.670 Table 10: Results reported in previous works on 64sample learning, corresponding to 14-shot for AGNews and ≈5-shot for DBpedia. ## C Implementation Details We use different hyper-parameters for full-dataset settings and few-shot settings. For both LSTM, Bi-LSTM+Attn, fastText, we use embedding size = 256, dropout rate = 0.3. 
For full-dataset setting, the learning rate is set to be 0.001 and decay rate = 0.9 for Adam optimizer (Kingma and Ba, 2015), number of epochs = 20, with batch size = 64; for few-shot setting, the learning rate = 0.01, the decay rate = 0.99, batch size = 1, number of epochs = 50 for 50-shot and 100-shot, epoch = 80 for 5-shot and 10-shot. For LSTM and Bi-LSTM+Attn, we set RNN layer = 1, hidden size = 64. For fastText, we use 1 hidden layer whose dimension is set to 10. For HAN, we use 1 layer for both word-level RNN and sentence-level RNN, the hidden size of both of them are set to 50, and the hidden sizes of both attention layers are set to 100. It's trained with batch size = 256, 0.5 decay rate for 6 epochs. For BERT, the learning rate is set to be 2e−5 and the batch size is set to be 128 for English and SogouNews while for low-resource languages, we set the learning rate to be 1e−5 with batch size to be 16 for 5 epochs. We use publicly available transformers library (Wolf et al., 2020) for BERT and specifically we use bert-base-uncased checkpoint for BERT and bert-base-multilingual-uncased for mBERT. For charCNN and textCNN, we use the same hyper-parameters setting in Adhikari et al. (2019b) except when in the few-shot learning setting, we reduce the batch size to 1, reducing the learning rate to 1e − 4 and increase the number of epochs to 60. We also use their open source hedwig repo for implementation. For VDCNN, we use the shallowest 9-layer version with embedding size set to be 16, batch size set to be 64 learning rate set to be 1e − 4 for full-dataset setting, and batch size = 1, epoch number = 60 for few-shot setting. For RCNN, we use embedding size = 256, hidden size of RNN = 256, learning rate = 1e − 3, and the same batch size and epoch setting as VDCNN for full-dataset and few-shot settings. In general, we perform grid search for hyperparameters on all the neural network models and we use a test set to validate, which only overestimates the accuracy. For preprocessing, we don't use any pretrained word embedding for any word-based models. The reason is that we have a strict categorization between "training" and "pre-training", involving pretrained embedding will make DNNs' categories ambiguous. Neither do we use data augmentation during the training. The procedures of tokenization for both word-level and character-level, padding for batch processing are, however, inevitable. For all non-parametric methods, the only hyperparameter is k. We set k = 2 for all the methods on all the datasets and we report the maximum possible accuracy getting from the experiments for each method. For Sentence-BERT, we use the paraphrase-MiniLM-L6-v2 checkpoint. Our method only requires CPUs and we use 8core CPUs to take advantage of multi-processing. The time of calculating distance matrix using *gzip* takes about half an hour on AGNews, two days on DBpedia and SogouNews, and six days on YahooAnswers. ## D Few-Shot Results The exact numerical values of accuracy shown in Figure 2 is listed in three tables below. 
| Model | 5-shot | 10-shot | 50-shot | 100-shot |
|---|---|---|---|---|
| fastText | 0.273±0.021 | 0.329±0.036 | 0.550±0.008 | 0.684±0.010 |
| Bi-LSTM+Attn | 0.269±0.022 | 0.331±0.028 | 0.549±0.028 | 0.665±0.019 |
| HAN | 0.274±0.024 | 0.289±0.020 | 0.340±0.073 | 0.548±0.031 |
| W2V | 0.388±0.186 | 0.546±0.162 | 0.531±0.272 | 0.395±0.089 |
| BERT | 0.803±0.026 | 0.819±0.019 | 0.869±0.005 | 0.875±0.005 |
| SentBERT | 0.716±0.032 | 0.746±0.018 | 0.818±0.008 | 0.829±0.004 |
| gzip (ours) | 0.587±0.048 | 0.610±0.034 | 0.699±0.017 | 0.741±0.007 |

Table 11: Few-shot results on AG News.

| Model | 5-shot | 10-shot | 50-shot | 100-shot |
|---|---|---|---|---|
| fastText | 0.475±0.041 | 0.616±0.019 | 0.767±0.041 | 0.868±0.014 |
| Bi-LSTM+Attn | 0.506±0.041 | 0.648±0.025 | 0.818±0.008 | 0.862±0.005 |
| HAN | 0.350±0.012 | 0.484±0.010 | 0.501±0.003 | 0.835±0.005 |
| W2V | 0.325±0.113 | 0.402±0.123 | 0.675±0.05 | 0.787±0.015 |
| BERT | 0.964±0.041 | 0.979±0.007 | 0.986±0.002 | 0.987±0.001 |
| SentBERT | 0.730±0.008 | 0.746±0.018 | 0.819±0.008 | 0.829±0.004 |
| gzip (ours) | 0.622±0.022 | 0.701±0.021 | 0.825±0.003 | 0.857±0.004 |

Table 12: Few-shot results on DBpedia.

| Model | 5-shot | 10-shot | 50-shot | 100-shot |
|---|---|---|---|---|
| fastText | 0.545±0.053 | 0.652±0.051 | 0.782±0.034 | 0.809±0.012 |
| Bi-LSTM+Attn | 0.534±0.042 | 0.614±0.047 | 0.771±0.021 | 0.812±0.008 |
| HAN | 0.425±0.072 | 0.542±0.118 | 0.671±0.102 | 0.808±0.020 |
| W2V | 0.141±0.005 | 0.124±0.048 | 0.133±0.016 | 0.395±0.089 |
| BERT | 0.221±0.041 | 0.226±0.060 | 0.392±0.276 | 0.679±0.073 |
| SentBERT | 0.485±0.043 | 0.501±0.041 | 0.565±0.013 | 0.572±0.003 |
| gzip (ours) | 0.649±0.061 | 0.741±0.017 | 0.833±0.007 | 0.867±0.016 |

Table 13: Few-shot results on SogouNews.

## E Other Reported Results

In Table 3 and Table 5, we report the results obtained with our hyper-parameter settings and implementation. However, we find that we couldn't replicate previously reported results in some cases: we get higher or lower results than previously reported ones, which may be due to different experimental settings (e.g., they may use pretrained word embeddings while we don't) or different hyper-parameter settings. Thus, we provide results reported by some previous papers for reference in Table 8, Table 9, and Table 10. Note that SogouNews is listed in the first table, as it has abundant resources and is commonly used as a benchmark for DNNs that excel at large datasets. As studies on low-resource languages and few-shot learning scenarios are insufficient, in Table 9 and Table 10 we also report results of model variants such as BiGRU using Kinyarwanda embeddings (Kin. W2V) and BERTMORPHO, which incorporates morphology and is pretrained on a Kinyarwanda corpus (Kin. Corpus), in addition to the models we use in the paper. We don't find any result reported for DengueFilipino, as previous works' evaluations use multi-label metrics.

## F Performance Analysis

To understand the merits and shortcomings of using *gzip* for classification, we evaluate *gzip*'s performance in terms of both the absolute accuracy and the relative performance compared to the neural methods. A low absolute accuracy with a high relative performance suggests that the dataset itself is difficult, while a high accuracy with a low relative performance means the dataset is better solved by a neural network. As our method performs well on OOD datasets, we are more interested in analyzing ID cases. We analyze seven in-distribution datasets and one out-of-distribution dataset across fourteen models to account for different ranks.
We analyze both the relative performance and the absolute accuracy with regard to the vocabulary size and the compression rate of both datasets (i.e., how easily a dataset can be compressed) and compressors (i.e., how well a compressor can compress). To represent the relative performance with regard to other methods, we use the normalized rank percentage, computed as the rank of *gzip* divided by the total number of methods; the lower the score, the better *gzip* is. We use "bits per character" (bpc) to evaluate the compression rate. The procedure is to randomly sample a thousand instances from the training and test set respectively, calculate the compressed length, and divide by the number of characters. Sampling is used to keep the size of the dataset constant.

## F.1 Relative Performance

Combining Table 1 and Table 3, we see that accuracy is largely unaffected by the average length of a single sample (Spearman coefficient rs = −0.220), but the relative performance is more correlated with vocabulary size (rs = 0.561), as we can see in Figure 5. SogouNews is an outlier in the first plot: on a dataset with a fairly large vocabulary, *gzip* ranks first. The second plot may provide an explanation: the compression ratio for SogouNews is high, which means that even with a relatively large vocabulary size, there is still repetitive information that can be squeezed out. With rs = 0.785 for the correlation between the normalized rank percentage and the compression rate, we can see that when a dataset is easier to compress, our method may be a strong candidate as a classifier.

## F.2 Absolute Accuracy

Similarly, we evaluate the accuracy of classification with respect to the vocabulary size and we've found there is almost no monotonic relation (rs = 0.071). With regard to bpc, the monotonic relation is not as strong as the one with the rank percentage (rs = −0.56). Considering the effect that vocabulary size has on the relative performance, our method with *gzip* may be more susceptible to the vocabulary size than neural network methods. To distinguish between a "hard" dataset and an "easy" one, we average all models' accuracies. The datasets that have the lowest accuracies are 20News and Ohsumed, which are the two datasets with the longest average text length.

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? Section 7.

✓ A2. Did you discuss any potential risks of your work? Section 8.

✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1.

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3.

✓ B1. Did you cite the creators of artifacts you used? Appendix B and C.

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix B and C.

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix B and C.

✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Appendix B. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.1 and Appendix B. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In Section 4.1 Table 1. ## C ✓ **Did You Run Computational Experiments?** Section 4. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 and Appendix C. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix C. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.3, 4.4, 4.5. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
palen-michel-lignos-2023-lr
{LR}-Sum: Summarization for Less-Resourced Languages
https://aclanthology.org/2023.findings-acl.427
We introduce LR-Sum, a new permissively-licensed dataset created with the goal of enabling further research in automatic summarization for less-resourced languages. LR-Sum contains human-written summaries for 40 languages, many of which are less-resourced. We describe our process for extracting and filtering the dataset from the Multilingual Open Text corpus (Palen-Michel et al., 2022). The source data is public domain newswire collected from Voice of America websites, and LR-Sum is released under a Creative Commons license (CC BY 4.0), making it one of the most openly-licensed multilingual summarization datasets. We describe abstractive and extractive summarization experiments to establish baselines and discuss the limitations of this dataset.
# Lr-Sum: Summarization For Less-Resourced Languages Chester Palen-Michel and **Constantine Lignos** Mitchom School of Computer Science Brandeis University {cpalenmichel, lignos}@brandeis.edu ## Abstract We introduce LR-Sum, a new permissivelylicensed dataset created with the goal of enabling further research in automatic summarization for less-resourced languages. LR-Sum contains human-written summaries for 40 languages, many of which are less-resourced. We describe our process for extracting and filtering the dataset from the Multilingual Open Text corpus (Palen-Michel et al., 2022). The source data is public domain newswire collected from from Voice of America websites, and LR-Sum is released under a Creative Commons license (CC BY 4.0), making it one of the most openlylicensed multilingual summarization datasets. We describe abstractive and extractive summarization experiments to establish baselines and discuss the limitations of this dataset. ## 1 Introduction Datasets for automatic summarization have historically focused largely on English, and while there has recently been a greater focus on datasets that include other languages (Cao et al., 2020; Giannakopoulos et al., 2015, 2017; Hasan et al., 2021; Scialom et al., 2020), there still remains a need for high-quality summarization data for lessresourced languages. Datasets with human-written summaries are important for both training statistical summarization models and for automatic evaluation of them. While recently there have been a growing number of multilingual summarization datasets, many are relatively small, have limited language coverage, have restrictive licenses, or a combination of these drawbacks. In this paper, we present LR-Sum, a new 40language summarization dataset with a focus on less-resourced languages.1 We created it with the Summary: Fuad Huseyîn li civîna NY jiber êrî¸sê Enqere tawanbar kir; daxwaza tazmînat û lêpirsîneke navneteweyî kir. Article: Wezîrê derve yê Îraqê Fuad Huseyîn doh Sê¸semê li civîna awarte ya Civata Ewlekarîyê ya Neteweyên Yekbûyî (NY), daxwaza veki¸sîna hêzên Tirkîyê ji axa Îraqê kir. "Em hebûna neqanûnî ya hêzên artê¸sa Tirkîyê li ser axa Îraqê ¸sermezar dikin," Huseyîn got. Civata Ewlekarîyê ya Neteweyên Yekbûyî (NY) ser daxwaza Îraqê kom bû, di derbarê êrî¸sa hefteya borî ya li Duhokê hat kirin û di encamê de 9 kes hatibûn ku¸stin û 23 kesên din jî birîndar bûbûn. Îraq jiber êrî¸sa kujer hêzên Tirkiyê tawanbar dike û wezîrê derve Huseyîn çû New Yorkê da ku be¸sdarî civîna awarte ya NY bibe. Fuad Huseyîn li civînê ser navê Bexdayê, jiber êrî¸sê Enqere tawanbar kir û daxwaza tazmînatê û lêpirsîneke navneteweyî kir. [...] Table 1: Example summary and article pair from Kurmanji Kurdish. Colors mark approximate content equivalence between summary and a portion of the article. goal of providing high-quality, human-written summaries in as many languages as possible. The collection of curated and filtered summaries that comprise LR-Sum are licensed using a Creative Commons Attribution license (CC BY 4.0), and the articles that it was collected from are in the public domain. This allows LR-Sum to be distributed freely and annotated without restriction, unlike many summarization datasets which use copyrighted material, often redistributed without appropriate licensing. For many of the languages in LR-Sum, this is the largest collection of summarization data with such a permissive license. 
Tables 1 and 2 show example article-summary pairs from LR-Sum and highlight how similar content in the summary is not merely simple extraction from the text. Results of experiments described in Section 4 show that for many less-resourced languages, the task of producing summaries remains challenging, enabling LR-Sum to serve as a benchmark of progress. LR-Sum is released via GitHub at https://github.com/bltlab/lr-sum. Summary: First-ever aerial census will be conducted simultaneously across five states to determine elephant migration patterns and numbers Article: Five southern African countries, with more than half the continent's elephants, are conducting a first-ever aerial census to determine the elephant population and how to protect it. Light aircraft will fly simultaneously across the plains of Angola, Botswana, Namibia, Zambia and Zimbabwe - in a conservation area known as the Kavango-Zambezi Trans-frontier Conservation Area (KAZA) - in an exercise that will run until October 20. [...] We hope to see what the results come up with," Ives said. "What we will be interested in seeing is not only how many elephants there are but the distribution, therefore, and what the likelihood of those elephants moving between countries is. Table 2: Example summary and article pair from English. Colors mark approximate content equivalence between summary and a portion of the article. ## 2 Related Work In this section, we briefly list existing English and multilingual summarization datasets and discuss work in dataset creation for less-resourced languages more generally. ## 2.1 English Summarization Datasets Document Understanding Conference (DUC)2 (Harman and Over, 2004; Dang, 2006) create English summarization datasets for evaluations. The NYT Annotated Corpus (Sandhaus, 2008) is a corpus of New York Times articles and 600k summaries written by library scientists. CNN/Daily Mail (Hermann et al., 2015) was originally created for question answering, but Nallapati et al. (2016) adapt this dataset for summarization. XSum (Narayan et al., 2018a) uses the first sentence "story body introduction" tag of a BBC article as the summary and the remainder of the text as the article and show that XSum favors abstractive summaries. ## 2.2 Multilingual Summarization Datasets MLSUM (Scialom et al., 2020) is an extension of the CNN/Daily Mail dataset for five languages: French, German, Spanish, Turkish, and Russian. MultiLing (Giannakopoulos et al., 2015, 2017) is a shared task that focuses on multilingual summarization covering upwards of 40 languages, but the dataset size is somewhat limited, with training sets of only around 10,000 articles in total. 2http://duc.nist.gov/ XL-Sum (Hasan et al., 2021) includes 44 languages, many of which are less-resourced languages, by scraping BBC News and making use of bullet points as summaries. XL-Sum has a more restrictive license than LR-Sum. MassiveSumm (Varab and Schluter, 2021) is a very large web-scraped summarization corpus that covers the majority of languages covered both in our dataset, LR-Sum, and also XL-Sum, and it does so in larger quantities. However, MassiveSumm cannot be easily redistributed due to copyright and being scraped from various news sites. MassiveSumm's GitHub README contains the disclaimer "The data is noisy and recall-oriented."3 MultiSumm (Cao et al., 2020) creates summaries from titles for Bosnian and Croatian. 
## 2.3 Data For Less-Resourced Languages A number of other text corpora have been created for less-resourced languages for summarization and other tasks. Abdulrahman et al. (2019) create a Kurdish corpus of textbooks. Vasili et al. (2018) conduct a study on summarization of Albanian and build a small dataset. Malajyan et al. (2020) create a corpus of paraphrases in Armenian. Niyongabo et al. (2020) create a corpus for classification of news in Kinyarwanda and Kirundi. Azime and Mohammed (2021) create a news dataset in Amharic. Marivate et al. (2020) investigate corpus creation for Setswana and Sepedi. Koto et al. (2020) create a summarization corpus for Indonesian with over 200k articles, but it has significant license restrictions. Das and Bandyopadhyay (2010) create a system for opinion article summarization for Bangla. Nguyen et al. (2020) create a dataset and experiment with sentence compression in Vietnamese. Birhanu (2017) create an extractive summarization system and evaluate on a small dataset in Tigrinya. Jaruskulchai and Kruengkrai (2003) create an extractive model for Thai, and Chumpolsathien (2020) create a large-scale dataset for Thai summarization. Buoy et al. (2021) explore text classification with Khmer. Multilingual Open Text (MOT) (Palen-Michel et al., 2022) is a corpus collected from the websites of Voice of America, an international news service funded by the U.S. Government providing news articles and other short snippets like audio and image descriptions. Our work creates a summarization dataset for the majority of the lan3https://github.com/danielvarab/massive-summ guages within MOT. MOT has a permissive license (CC BY 4.0), and the original source articles are in the public domain. By comparison, many of the multilingual datasets derived from privately funded news sources like CNN or BBC News were collected from copyrighted data without the copyright owner's permission, limiting legal distribution. XL-Sum's license is CC BY-NC-SA 4.0, which restricts commercial usage. MOT contains news text data for many less-resourced languages, some of which overlap with XL-Sum and some of which are complementary. We discuss which languages are present in LR-Sum vs XL-Sum in more detail in Section 3.2. ## 3 Lr-Sum: Dataset 3.1 Methodology The approach for creating LR-Sum is to leverage the coverage of less-resourced languages in MOT to construct a summarization dataset. MOT (PalenMichel et al., 2022) semi-regularly releases new versions of the dataset as new articles are published on Voice of America's website. We use MOT release v1.6 from October 1, 2022 for the creation of LR-Sum. Only the content type of "article" is included in LR-Sum since the categories of photo, audio, etc. already tend to be short snippets describing content, which typically are too short to make useful article-summary pairs. While bold text or bullet points are used in some other summarization datasets (Hasan et al., 2021; Hermann et al., 2015; Narayan et al., 2018a), these ways of extracting summaries are not available in VOA articles. Instead a description field is present for VOA articles. This description field in VOA new articles can be noisy. While it is generally used to give a brief summary of the article contents, there are numerous instances where the description contains the first few lines of the article, information about the authors, or general information about what VOA is. A number of filtering steps are taken to ensure high-quality summaries. 
First, we filter to ensure that the description field has content and that the content of the description field is at least 10 tokens long.4 Then, we filter out any articles that do not have a minimum of 10 sentences. We also filter by total number of tokens to remove outlier articles with fewer than 30 or more than 6,000 tokens. 4All tokenization comes from the tokenizers used in the creation of the MOT corpus. When an article does not have a human-written summary, the description field simply contains the first few sentences. Because ellipses can signal that the description is just a copy of the first few sentences of the article, we also filter out all descriptions that end with ellipses. We further remove these instances from the dataset by limiting token overlap of the description and the first 3 sentences to 85%.5 With the goal of keeping LR-Sum from being purely extractive, we also block descriptions where an oracle extractive approach selecting the best sentence in the article produces a ROUGE-2 score above 95. We manually created a list of 254 sentences to remove from summaries based on strings that appear the most frequently in the description field. Examples include "Amerikan basınında haftaiçi hergün öne çıkan ba¸slıkları Amerika'nın Sesi'nde bulubilirsiniz" ("You can find the highlights of the American press every weekday on Voice of America" in Turkish) or "Këtë javë në Uashington" ("Live from Washington" in Albanian).6 While MOT includes data in the Lingala and Oromo languages, we do not include them in LRSum since fewer than 100 articles made it through our filtering process. Lingala had only 3 articles, and Oromo 29. MOT also includes data in Bambara, but it contains so few articles that none made it through the filtering process. ## 3.2 Dataset Description LR-Sum includes 40 languages in total. We show various statistics of the dataset in Table 3. Figure 1 provides a histogram of article lengths, and Figure 2 provides a histogram of summary lengths. We measure mean length of articles and summaries in token counts. Compression is 1 - the ratio between summary length and article length as used by Bommasani and Cardie (2020) and Hasan et al. (2021). Mean novelty is the proportion of tokens in the summary that do not occur in the article. LR-Sum's measures are comparable with MLSUM (Scialom et al., 2020) and XL-Sum (Hasan et al., 2021) for languages shared between datasets. The overall mean article length for LR-Sum is 520.7 and the overall mean summary length is 36.5. For comparison, MLSUM's English section has a mean article length of 709.2 and mean summary length ![3_image_0.png](3_image_0.png) of 55.6, while their Turkish section has mean article length of 309.1 and mean summary length of 22.8. LR-Sum includes fourteen languages that are not covered by XL-Sum. However, Dari Persian and Kinyarwanda are quite close to Persian Farsi and Kirundi, which are contained in XL-Sum. Seven of the remaining twelve languages have more than 1,000 article-summary pairs for training: Albanian, Bosnian, Khmer, Sorani Kurdish, Lao, Macedonian, and Northern Ndebele. Armenian, Georgian, Haitian Creole, and Shona have fewer than 1,000 training examples. Tibetan and Greek have fewer than 1,000 article-summary pairs overall, which is not enough for training and test splits. Instead, the Tibetan and Greek data could still be useful as a test set for automatic evaluation of models built for those languages or used in few-shot training. 
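To make the filtering pipeline in Section 3.1 concrete, the sketch below applies the stated thresholds to a single article record. It is an illustration only, not the released extraction code: `rouge2` is an assumed scoring function returning values in [0, 100], the exact overlap computation (set-based here) is our assumption, and token and sentence boundaries are assumed to come from the MOT tokenization.

```python
def keep_pair(description_tokens, article_sentences, article_tokens, rouge2):
    """LR-Sum-style filtering heuristics for one article.

    description_tokens: flat list of tokens in the description field.
    article_sentences: list of sentences, each a list of tokens.
    article_tokens: flat list of tokens in the article body.
    rouge2(candidate, reference): assumed ROUGE-2 scorer on a 0-100 scale.
    """
    # Description must exist and be at least 10 tokens long.
    if len(description_tokens) < 10:
        return False
    # Articles must have at least 10 sentences and between 30 and 6,000 tokens.
    if len(article_sentences) < 10:
        return False
    if not (30 <= len(article_tokens) <= 6000):
        return False
    # Descriptions ending in an ellipsis usually just copy the lead sentences.
    description = " ".join(description_tokens)
    if description.rstrip().endswith(("...", "…")):
        return False
    # Limit token overlap between the description and the first three sentences to 85%.
    lead_tokens = {t for sent in article_sentences[:3] for t in sent}
    overlap = len(set(description_tokens) & lead_tokens) / len(set(description_tokens))
    if overlap > 0.85:
        return False
    # Block summaries that a single-sentence extractive oracle reproduces almost exactly.
    best_oracle = max(rouge2(" ".join(sent), description) for sent in article_sentences)
    if best_oracle > 95:
        return False
    # The manually curated boilerplate sentences are stripped from summaries separately.
    return True
```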
LR-Sum includes languages which can be complementary to existing resources. For example, LR-Sum includes almost twice as many articles in Burmese as XL-Sum. For many languages (i.e. Turkish, Azerbaijani, Persian, Korean) adding LRSum to XL-Sum results in more than double the amount of data available in XL-Sum alone. LR-Sum also has some unique subdivisions and special focuses for certain languages. Its English section can be subdivided into Zimbabwe and Cambodia-focused sections. Similarly, the French and Portuguese found in LR-Sum tends to be news focused on Africa. Chinese is divided into simplified and traditional varieties. Kurdish is subdivided into the Kurmanji and Sorani dialects. LR-Sum separates Farsi and Dari as separate languages based on their provenance from separate VOA sites, despite their being largely mutually intelligible. ![3_image_1.png](3_image_1.png) ## 3.3 Dataset Splits We report the size of the dataset splits for LR-Sum in Appendix 9. Splits are 80% train, 10% validation, and 10% test, except for languages where the number of examples was quite small. To ensure enough test and validation data when possible, in cases where the total was below 4,000 examples, we took 500 for validation and test each and left the rest for training. For languages where the total number of examples was fewer than 1,000, we only created test sets and did not create training or validation data (Amharic, Bangla, Greek, Hausa, Kinyarwanda, Somali, Swahili, Tibetan, and Tigrinya). ## 4 Experiments 4.1 Methodology We conduct three experiments to demonstrate the usefulness of LR-Sum and establish baseline performance on the dataset. For all abstractive models trained, we use mT5 (Xue et al., 2021) as the base model. We report ROUGE-1 and 2 (R1, R2) and ROUGE-L (RL; Lin, 2004) scores.7,8 1. We train individual baseline models for 12 lessresourced languages that are unique to LR-Sum and not present in XL-Sum.9 2. We conduct a series of experiments with extractive models for the less-resourced languages unique to LR-Sum. 3. We train a multilingual model using the concatenation of LR-Sum and XL-Sum training sets and | ISO 639-3 | Mean | Mean | | | | | | |----------------------------------------------------------------------------------------------------------------------|---------|---------|--------|-------------|---------|---------|--------| | Language | Article | Summary | Mean | Vocab. 
| Article | | | | Language | Code | Length | Length | Compression | Novelty | Size | Count | | Albanian | sqi | 503.30 | 21.23 | .9578 | .2349 | 204,334 | 22,890 | | Amharic | amh | 291.47 | 25.52 | .9124 | .4781 | 16,833 | 154 | | Armenian | hye | 321.39 | 24.43 | .9240 | .3582 | 53,659 | 1,920 | | Azerbaijani | aze | 390.86 | 14.98 | .9617 | .2915 | 178,330 | 8,108 | | Bangla | ben | 310.45 | 29.23 | .9058 | .1032 | 27,288 | 715 | | Bosnian | bos | 493.40 | 20.29 | .9589 | .2367 | 288,205 | 14,559 | | Burmese | mya | 973.14 | 35.19 | .9638 | .1906 | 598,594 | 9,901 | | Dari Persian | prs | 426.93 | 27.17 | .9364 | .2442 | 101,723 | 15,046 | | English | eng | 717.98 | 32.11 | .9553 | .2053 | 194,901 | 38,697 | | French | fra | 430.05 | 24.77 | .9424 | .1101 | 41,642 | 2,126 | | Georgian | kat | 419.85 | 14.84 | .9647 | .2265 | 73,081 | 1,511 | | Greek | ell | 482.42 | 12.96 | .9731 | .2442 | 28,976 | 583 | | Haitian Creole | hat | 445.92 | 26.49 | .9406 | .1943 | 27,128 | 1,452 | | Hausa | hau | 375.16 | 24.91 | .9336 | .2196 | 11,718 | 390 | | Indonesian | ind | 363.69 | 20.06 | .9448 | .2069 | 39,907 | 1,968 | | Khmer | khm | 896.77 | 32.76 | .9635 | .0764 | 54,986 | 4,860 | | Kinyarwanda | kin | 351.41 | 18.34 | .9478 | .4274 | 39,678 | 698 | | Korean | kor | 437.81 | 30.81 | .9296 | .4189 | 425,980 | 13,123 | | Kurdish | kur | 541.53 | 22.36 | .9587 | .2781 | 128,429 | 4,021 | | Lao | lao | 378.93 | 24.99 | .9340 | .1162 | 86,992 | 14,955 | | Macedonian | mkd | 407.68 | 19.91 | .9512 | .3074 | 66,815 | 2,223 | | Mandarin Chinese | cmn | 781.43 | 53.64 | .9314 | .2472 | 143,505 | 4,586 | | Northern Ndebele | nde | 304.58 | 20.27 | .9335 | .2889 | 122,312 | 2,739 | | Pashto | pus | 459.74 | 33.58 | .9270 | .2111 | 152,499 | 21,067 | | Persian Farsi | fas | 512.21 | 31.08 | .9393 | .0870 | 126,339 | 13,429 | | Portuguese | por | 489.19 | 18.23 | .9627 | .1637 | 46,578 | 1,643 | | Russian | rus | 622.29 | 14.59 | .9766 | .2958 | 273,560 | 13,514 | | Serbian | srp | 348.44 | 20.24 | .9419 | .4427 | 145,175 | 6,217 | | Shona | sna | 276.12 | 17.47 | .9367 | .3189 | 45,808 | 1,383 | | Somali | som | 463.87 | 24.73 | .9467 | .2599 | 12,736 | 165 | | Spanish | spa | 651.14 | 32.13 | .9507 | .2116 | 66,094 | 3,544 | | Swahili | swh | 361.96 | 24.53 | .9322 | .1777 | 23,110 | 588 | | Thai | tha | 406.96 | 25.11 | .9383 | .3472 | 35,823 | 3,278 | | Tibetan | bod | 904.99 | 62.44 | .9310 | .0357 | 6,886 | 182 | | Tigrinya | tir | 281.30 | 13.02 | .9537 | .3217 | 10,156 | 115 | | Turkish | tur | 447.94 | 23.67 | .9472 | .2915 | 308,870 | 35,839 | | Ukrainian | ukr | 487.99 | 18.20 | .9627 | .2545 | 163,270 | 7,229 | | Urdu | urd | 651.21 | 37.65 | .9422 | .0609 | 108,357 | 13,558 | | Uzbek | uzb | 425.33 | 20.58 | .9516 | .3130 | 211,099 | 11,959 | | Vietnamese | vie | 670.42 | 25.37 | .9622 | .1149 | 198,478 | 14,595 | | Total Article Count | 315,530 | | | | | | | | Table 3: Metrics across languages in the LR-Sum Dataset. Compression ratio is the ratio of article length to summary | | | | | | | | Table 3: Metrics across languages in the LR-Sum Dataset. Compression ratio is the ratio of article length to summary length. Mean novelty is the mean proportion of tokens in the summary that do not occur in the article. Vocabulary is the number of unique tokens (types). All measures are computed using tokens. compare with using a multilingual model checkpoint trained on XL-Sum alone. For this experiment, we evaluate both models on LR-Sum's test sets and evaluate on all less-resourced languages. 
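The per-language statistics reported in Table 3 follow directly from the definitions in Section 3.2: compression is one minus the ratio of summary length to article length, and novelty is the proportion of summary tokens absent from the article. The sketch below is our own illustration (not the dataset scripts) of how such numbers could be computed from tokenized article-summary pairs; counting both article and summary tokens toward the vocabulary is an assumption.

```python
from statistics import mean

def dataset_statistics(pairs):
    """Compute Table 3-style statistics for one language.

    pairs: list of (article_tokens, summary_tokens) tuples.
    """
    compression, novelty, vocab = [], [], set()
    for article, summary in pairs:
        compression.append(1 - len(summary) / len(article))
        article_set = set(article)
        novelty.append(sum(t not in article_set for t in summary) / len(summary))
        vocab.update(article)
        vocab.update(summary)
    return {
        "mean_article_length": mean(len(a) for a, _ in pairs),
        "mean_summary_length": mean(len(s) for _, s in pairs),
        "mean_compression": mean(compression),
        "mean_novelty": mean(novelty),
        "vocab_size": len(vocab),
        "article_count": len(pairs),
    }
```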
## 4.1.1 Individual Models We fine-tune 12 models for each of the lessresourced languages not present in XL-Sum. We use mT5 (Xue et al., 2021) as the base model. For these experiments, we use the same training script as Hasan et al. (2021), which is a modified version of a script from the Hugging Face Transformers Library (Wolf et al., 2020). We use the same hyperparameter settings as Hasan et al. (2021). The details of hyper-parameters can be found in Appendix A.1. ## 4.1.2 Extractive Baselines We conducted experiments to determine whether extractive approaches might work better given the small training set sizes. Previous work by Nallapati et al. (2017); Narayan et al. (2018b); Zhang et al. (2018) and Scialom et al. (2020), among others, has shown lead-3 (the first three sentences of the article) to be a strong summarization baseline. To demonstrate the strongest possible extractive performance, we also report the oracle, which here is simply selecting the single sentence in the article which produces the highest ROUGE score. We additionally report results for LexRank (Erkan and Radev, 2004) and Luhn (Luhn, 1958) extractive methods. For implementations of these extractive approaches, we used sumy10 (Belica, 2013). The sentence segmentation and tokenizations from the MOT corpus were used for the extractive approaches requiring segmentation and tokenization. ## 4.1.3 Multilingual Models Following Hasan et al. (2021)'s reported better performance with multilingual training, we train a multilingual model but instead with the concatenation of the training sets of LR-Sum and XL-Sum. In this experiment we also use the same modified Hugging Face script (Wolf et al., 2020) that Hasan et al. (2021) use for training along with the same hyper-parameters as Hasan et al. (2021) used for multilingual training. Hyper-parameter settings can be found in Appendix A.2. ## 5 Results And Discussion Overall, we find that abstractive models fail to beat extractive ones for some languages, while extractive models and even the lead-3 baseline remain competitive for others. The fact that the baselines and extractive approaches still outperform abstractive neural models demonstrates the potential use of this corpus for further summarization research to improve abstractive models in less-resourced settings. Results comparing the different approaches for 12 languages are shown in Table 4. The multilingual models tend to produce higher scores, likely due to positive transfer between languages. However, the advantage is often only a few points beyond individual or extractive models. The results of combining datasets (Table 7) show how LRSum can be combined with existing summarization datasets like XL-Sum to improve multilingual summarization model coverage. The additional data from the concatenation of LR-Sum and XL-Sum shows an expected advantage for languages not seen by the XL-Sum-only multilingual model. ## 5.1 Individual Model Results The results of training the individual models are shown in Tables 4 and 5. The scores are generally slightly lower than the multilingual model with the exception of Albanian, Lao, and Northern Ndebele. The difference in training set size does not appear to be a factor in the performance, potentially because all the training set sizes for these less-resourced languages are small compared to the usual hundreds of thousands of examples found in datasets like MLSUM (Scialom et al., 2020). 
A language's presence in mT5's pre-training also does not appear to be indicative of better performance. ## 5.2 Extractive Results The results for extractive models can be found in Table 6. Oracle gives a sense of the upper bound that can be achieved through extractive models. The scores for the oracle are higher than both individual and multilingual abstractive models, which suggests there is plenty of room for improving performance for the abstractive baselines. For all the languages we evaluated, LexRank had higher scores than Luhn in terms of ROUGE1, though Luhn was slightly higher in ROUGE-2 and ROUGE-L for Haitian Creole, Bosnian and 10https://miso-belica.github.io/sumy/ | Oracle | Lead3 | LexRank | Individual Models | Multilingual Model | | | | | | | | | | | | |----------|---------|-----------|---------------------|----------------------|-----|------|------|------|------|------|-----|------|------|------|------| | Lang. | R1 | R2 | RL | R1 | R2 | RL | R1 | R2 | RL | R1 | R2 | RL | R1 | R2 | RL | | sqi | 43.9 | 29.8 | 39.6 | 19.5 | 6.7 | 15.0 | 19.6 | 5.9 | 15.1 | 23.3 | 7.8 | 19.4 | 22.6 | 7.1 | 18.7 | | hye | 35.4 | 23.3 | 32.0 | 18.8 | 7.1 | 14.7 | 11.4 | 4.8 | 8.5 | 16.3 | 6.2 | 14.3 | 20.5 | 8.5 | 17.5 | | bos | 49.3 | 38.7 | 47.2 | 14.1 | 5.0 | 11.5 | 14.8 | 5.1 | 12.1 | 14.3 | 5.6 | 12.7 | 15.0 | 6.3 | 13.2 | | kat | 50.0 | 40.3 | 48.7 | 11.4 | 5.3 | 10.2 | 10.9 | 4.9 | 9.9 | 9.7 | 4.3 | 9.3 | 13.2 | 7.2 | 12.6 | | hat | 49.0 | 34.0 | 43.7 | 23.6 | 9.2 | 17.0 | 21.1 | 7.4 | 15.2 | 14.4 | 3.8 | 11.9 | 24.1 | 8.5 | 19.0 | | khm | 44.3 | 39.8 | 44.4 | 5.5 | 1.8 | 5.2 | 8.3 | 4.8 | 8.0 | 3.4 | 1.1 | 3.3 | 3.7 | 1.2 | 3.6 | | kur-k | 58.2 | 46.7 | 55.7 | 17.9 | 6.4 | 13.9 | 20.2 | 7.5 | 15.8 | 18.2 | 6.7 | 15.4 | 25.4 | 12.4 | 22.1 | | kur-s | 35.9 | 23.3 | 35.3 | 13.3 | 5.5 | 12.3 | 21.9 | 13.2 | 19.4 | 14.7 | 4.6 | 13.3 | 16.6 | 5.4 | 15.1 | | lao | 28.9 | 22.7 | 28.9 | 7.6 | 2.2 | 7.3 | 8.9 | 3.9 | 8.6 | 12.0 | 5.6 | 11.9 | 11.3 | 5.2 | 11.1 | | mkd | 35.8 | 21.9 | 31.7 | 17.4 | 5.6 | 13.6 | 17.2 | 4.9 | 13.3 | 20.2 | 7.1 | 17.0 | 21.3 | 7.6 | 18.0 | | nde | 49.6 | 40.3 | 48.5 | 18.4 | 9.9 | 16.3 | 17.1 | 9.1 | 15.4 | 14.2 | 8.1 | 13.3 | 14.1 | 8.0 | 13.5 | | sna | 43.8 | 33.2 | 42.7 | 14.8 | 6.8 | 12.8 | 14.7 | 6.8 | 12.9 | 12.6 | 4.8 | 11.3 | 15.9 | 7.5 | 15.0 | Table 4: Comparison of different summarization approaches. Best scores in bold excluding oracle. | Training | | | | | | |------------|--------|--------|-------|-------|-------| | Language | Size | In mT5 | R1 | R2 | RL | | sqi | 18,312 | ✓ | 23.32 | 7.76 | 19.44 | | hye | 920 | ✓ | 16.27 | 6.18 | 14.31 | | bos | 11,648 | 14.33 | 5.63 | 12.72 | | | kat | 511 | ✓ | 9.71 | 4.28 | 9.26 | | hat | 452 | ✓ | 14.43 | 3.82 | 11.92 | | khm | 3,888 | ✓ | 3.37 | 1.11 | 3.29 | | kur-k | 791 | ✓ | 18.24 | 6.73 | 15.38 | | kur-s | 1,230 | ✓ | 14.72 | 4.55 | 13.25 | | lao | 11,964 | ✓ | 12.00 | 5.62 | 11.85 | | mkd | 1,223 | ✓ | 20.20 | 7.14 | 17.03 | | nde | 1,739 | 14.15 | 8.14 | 13.32 | | | sna | 383 | ✓ | 12.63 | 4.81 | 11.35 | Albanian. Lead-3 proves to be a strong baseline and scores higher than the extractive models for RL and frequently for R1 and R2. In terms of R1, LexRank outperforms the individual abstractive models for Khmer, Georgian, Bosnian, Northern Ndebele, and Shona but ROUGE-L scores tend to be higher for the individual abstractive models. The multilingual model still beats the lead-3 baseline except for Northern Ndebele and Khmer as shown in Table 4. 
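The extractive baselines of Section 4.1.2 are straightforward to reproduce. The sketch below shows a lead-3 baseline and LexRank/Luhn summarizers via the sumy package; it uses sumy's built-in English tokenizer (which may require NLTK data) for brevity, whereas the experiments above rely on the MOT segmentation and tokenization, and the number of extracted sentences is a free parameter here, so treat this as an approximation rather than the exact experimental setup.

```python
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lex_rank import LexRankSummarizer
from sumy.summarizers.luhn import LuhnSummarizer

def lead_3(sentences):
    """Lead-3 baseline: the first three sentences of the article."""
    return " ".join(sentences[:3])

def extractive_summary(article_text, method="lexrank", sentence_count=1, language="english"):
    """Run a sumy extractive summarizer over one article and return the selected text."""
    parser = PlaintextParser.from_string(article_text, Tokenizer(language))
    summarizer = LexRankSummarizer() if method == "lexrank" else LuhnSummarizer()
    selected = summarizer(parser.document, sentence_count)
    return " ".join(str(sentence) for sentence in selected)
```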
## 5.3 Multilingual Model Results Table 7 shows the results of mT5 (Xue et al., 2021) trained on the concatenation of training data from LR-Sum and XL-Sum compared with the model checkpoint of mT5 trained on XL-Sum only. As expected, languages not present in XL-Sum had much better performance with the model trained on both datasets. Dari Persian did not perform better likely due to Farsi already being represented in XL-Sum and the two languages being very similar. Scores for Greek and Tibetan were effectively zero as there is only enough data in LR-Sum for a test set and so there was no training data in those languages due to data scarcity. The results for additional training data for languages present in both languages are more mixed. Despite both datasets being news data, it is possible there are differences in dialect, topic, or standardization that account for the differences. We discuss the performance of the two multilingual models evaluated on the XL-Sum test set in Appendix B. ## 6 Conclusions And Future Work We have presented LR-Sum, a permissivelylicensed summarization dataset for less-resourced languages based on news data. We have demonstrated LR-Sum's usefulness in augmenting the training data of other multilingual summarization models and demonstrated potential for further research in summarization for less-resourced languages. Even with the best performing model, the results are only slightly higher than the lead-3 baseline, which indicates ample room for improvement and future research directions. In future work, we plan to experiment with leveraging additional training data like the remaining portions of the MOT data which were not suitable for extracting summaries but may still be use- | Oracle | Lead 3 | LexRank | Luhn | TextRank | | | | | | | | | | | | |----------|----------|-----------|--------|------------|------|-------|-------|-------|-------|-------|-------|-------|-------|-------|------| | Lang. 
| R1 | R2 | RL | R1 | R2 | RL | R1 | R2 | RL | R1 | R2 | RL | R1 | R2 | RL | | sqi | 43.90 | 29.71 | 39.58 | 19.45 | 6.75 | 14.96 | 19.57 | 5.85 | 3.52 | 18.25 | 6.13 | 3.88 | 16.66 | 5.13 | 3.06 | | hye | 35.31 | 23.34 | 32.04 | 18.78 | 7.09 | 14.72 | 11.39 | 4.80 | 2.72 | 10.33 | 4.14 | 2.34 | 10.06 | 4.07 | 2.27 | | bos | 49.27 | 38.72 | 47.19 | 14.13 | 4.98 | 11.48 | 14.78 | 5.09 | 3.32 | 13.81 | 5.21 | 3.56 | 13.30 | 5.30 | 3.74 | | kat | 49.87 | 40.43 | 48.81 | 11.41 | 5.29 | 10.21 | 10.90 | 4.91 | 3.06 | 9.50 | 4.10 | 2.63 | 8.83 | 3.96 | 2.68 | | hat | 48.92 | 34.06 | 43.76 | 23.63 | 9.22 | 16.98 | 21.13 | 7.43 | 4.24 | 19.75 | 7.67 | 4.65 | 18.52 | 6.90 | 4.12 | | khm | 44.32 | 39.71 | 44.31 | 5.48 | 1.83 | 5.23 | 8.31 | 4.76 | 3.25 | 6.99 | 3.83 | 2.68 | 6.56 | 3.65 | 2.46 | | kur-k | 58.31 | 46.66 | 55.46 | 17.94 | 6.37 | 13.89 | 20.24 | 7.55 | 4.84 | 18.42 | 7.23 | 4.69 | 16.82 | 6.57 | 4.38 | | kur-s | 35.87 | 23.21 | 35.33 | 13.28 | 5.51 | 12.26 | 21.88 | 13.22 | 11.62 | 20.94 | 13.01 | 11.51 | 18.91 | 10.91 | 9.39 | | lao | 29.00 | 22.66 | 29.01 | 7.60 | 2.18 | 7.31 | 8.92 | 3.90 | 2.03 | 7.73 | 3.36 | 1.78 | 7.36 | 3.20 | 1.74 | | mkd | 35.79 | 21.86 | 31.76 | 17.44 | 5.64 | 13.63 | 17.24 | 4.90 | 2.65 | 14.86 | 3.73 | 1.74 | 14.27 | 3.54 | 1.66 | | nde | 49.60 | 40.37 | 48.50 | 18.37 | 9.90 | 16.30 | 17.13 | 9.14 | 6.28 | 14.00 | 7.71 | 5.79 | 13.14 | 7.06 | 5.31 | | sna | 43.83 | 33.13 | 42.59 | 14.78 | 6.78 | 12.77 | 14.73 | 6.80 | 4.14 | 12.49 | 5.62 | 3.67 | 11.19 | 4.51 | 2.58 | ful in fine-tuning a multilingual language model to perform better on certain less-resourced languages. LR-Sum also presents opportunities for few- and zero-shot experimentation for languages where there are not enough examples to use as training data, but where the data that does exist may be useful as a test set. We look forward to collaborating with speakers of the languages included in LRSum to further increase the quality and quantity of summarization data for less-resourced languages. ## Limitations A limitation of this work is that the dataset has not yet been thoroughly vetted by native speakers of the languages contained in the dataset. We acknowledge the importance of working with native speakers and manually reviewing datasets in greater detail as argued for by Kreutzer et al. (2022) and Lignos et al. (2022). We hope to do more manual review of LR-Sum and other summarization datasets in the near future. ## Ethics Statement Our work provides a dataset for further research on summarization for less-resourced languages. Automatic summarization has the potential to assist users in digesting information. It is our intention that providing a summarization dataset with coverage of less-resourced languages will benefit speakers of languages that may otherwise not have had access to this technology. However, there is also cause for caution. The results of our work used automatic evaluation metrics and generated summaries have not yet been subjected to more rigorous human review. Even just based on automated metrics, it is clear there is still room for improvement of the models as they tend to score lower than higher resourced counterparts on similar tasks. Therefore, the models presented in this work should be considered baselines for further work. The dataset and models presented in this work are meant to support further research in summarization of less-resourced languages and not intended for immediate deployment in applications. 
In particular, the abstractive summarization models, like most text generation models, have the potential to make factual errors, which have the potential to mislead or misinform. Additionally, both extractive and abstractive models may lack adequate context or miss important information. As mentioned in the limitations section, this dataset, like most summarization news datasets, has not been fully manually reviewed and so may contain a few erroneous summaries despite our best efforts. ## Acknowledgments Chester Palen-Michel was supported by a grant from eBay while performing this work. | LR- & XL-Sum | XL-Sum | | | | | | | |----------------------------------------------------------------------------------------------------------------|---------------|-------|-------|-------|-------|-------|-------| | Language | Not in XL-Sum | R1 | R2 | RL | R1 | R2 | RL | | Albanian | ✓ | 22.55 | 7.07 | 18.72 | 8.98 | 0.94 | 7.69 | | Amharic | 13.04 | 5.82 | 11.71 | 12.35 | 4.44 | 10.56 | | | Armenian | ✓ | 20.49 | 8.47 | 17.46 | 0.41 | 0.15 | 0.41 | | Azerbaijani | 15.99 | 8.33 | 15.19 | 14.44 | 5.89 | 13.34 | | | Bangla | 12.99 | 5.58 | 11.72 | 11.15 | 3.95 | 9.82 | | | Bosnian | ✓ | 15.00 | 6.31 | 13.23 | 11.49 | 2.36 | 9.71 | | Burmese | 28.69 | 14.51 | 26.13 | 2.07 | 0.43 | 1.99 | | | Dari Persian | ✓ | 14.62 | 1.84 | 10.88 | 31.74 | 11.62 | 25.87 | | Georgian | ✓ | 13.20 | 7.17 | 12.60 | 0.09 | 0.00 | 0.09 | | Haitian Creole | ✓ | 24.09 | 8.46 | 18.98 | 13.23 | 3.19 | 10.97 | | Hausa | 27.13 | 10.05 | 21.91 | 28.89 | 10.64 | 22.70 | | | Indonesian | 26.93 | 13.87 | 23.84 | 27.00 | 11.84 | 23.24 | | | Khmer | ✓ | 3.67 | 1.17 | 3.62 | 0.42 | 0.15 | 0.40 | | Kinyarwanda | ✓ | 15.48 | 5.94 | 13.22 | 15.60 | 6.14 | 13.10 | | Korean | 21.68 | 9.20 | 19.17 | 16.48 | 6.27 | 14.75 | | | Kurmanji Kurdish | ✓ | 25.41 | 12.44 | 22.13 | 8.29 | 1.62 | 7.47 | | Lao | ✓ | 11.26 | 5.24 | 11.09 | 2.28 | 0.49 | 2.26 | | Macedonian | ✓ | 21.29 | 7.63 | 18.03 | 11.08 | 1.50 | 9.20 | | Northern Ndebele | ✓ | 14.14 | 8.05 | 13.55 | 3.91 | 1.20 | 3.76 | | Pashto | 35.95 | 14.37 | 29.16 | 36.14 | 13.86 | 29.30 | | | Persian Farsi | 11.79 | 0.71 | 8.64 | 21.24 | 6.37 | 16.79 | | | Portuguese | 20.55 | 9.19 | 18.02 | 18.16 | 4.59 | 14.71 | | | Russian | 12.32 | 5.51 | 11.48 | 12.86 | 4.32 | 11.66 | | | Serbian | 16.63 | 5.07 | 14.03 | 15.24 | 3.32 | 12.42 | | | Shona | ✓ | 15.88 | 7.50 | 14.99 | 4.79 | 1.27 | 4.49 | | Somali | 28.80 | 11.54 | 24.30 | 31.39 | 13.15 | 26.21 | | | Sorani Kurdish | ✓ | 16.60 | 5.44 | 15.08 | 5.76 | 0.46 | 5.14 | | Swahili | 26.54 | 9.98 | 21.27 | 27.11 | 9.46 | 21.03 | | | Thai | 4.52 | 1.87 | 4.46 | 3.65 | 1.38 | 3.62 | | | Tigrinya | 13.07 | 3.70 | 11.30 | 12.79 | 3.50 | 10.80 | | | Turkish | 28.42 | 17.24 | 26.02 | 22.37 | 10.77 | 19.90 | | | Ukrainian | 14.83 | 6.84 | 13.28 | 14.71 | 5.42 | 13.05 | | | Urdu | 29.64 | 13.77 | 24.01 | 26.90 | 8.89 | 20.56 | | | Uzbek | 15.96 | 8.32 | 14.51 | 12.61 | 4.13 | 11.36 | | | Vietnamese | 25.06 | 14.13 | 21.52 | 26.51 | 13.67 | 21.62 | | | Table 7: Results from a multilingual model trained on both LR-Sum and XL-Sum data compared with a multilingual | | | | | | | | Table 7: Results from a multilingual model trained on both LR-Sum and XL-Sum data compared with a multilingual model trained only on XL-Sum. We additionally omit Tibetan and Greek from the results as they have only enough data for test sets. Higher-resourced languages are also omitted. ## References Roshna Omer Abdulrahman, Hossein Hassani, and Sina Ahmadi. 2019. 
Developing a fine-grained corpus for a less-resourced language: the case of Kurdish. *arXiv* preprint arXiv:1909.11467. Israel Abebe Azime and Nebil Mohammed. 2021. An amharic news text classification dataset. arXiv preprint arXiv:2103.05639. Michal Belica. 2013. Methods of document summarization on the web. Master's thesis, Brno University of Technology. Guesh Amiha Birhanu. 2017. Automatic text summarizer for Tigrinya language. Master's thesis, Addis Ababa University. Rishi Bommasani and Claire Cardie. 2020. Intrinsic evaluation of summarization datasets. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 8075–8096, Online. Association for Computational Linguistics. Rina Buoy, Nguonly Taing, and Sovisal Chenda. 2021. Khmer text classification using word embedding and neural networks. *arXiv preprint arXiv:2112.06748*, abs/2112.06748. Yue Cao, Xiaojun Wan, Jinge Yao, and Dian Yu. 2020. Multisumm: Towards a unified model for multilingual abstractive summarization. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 34, pages 11–18. Nakhun Chumpolsathien. 2020. Using knowledge distillation from keyword extraction to improve the informativeness of neural cross-lingual summarization. Master's thesis, Beijing Institute of Technology. Hoa Trang Dang. 2006. DUC 2005: Evaluation of question-focused summarization systems. In *Proceedings of the Workshop on Task-Focused Summarization and Question Answering*, pages 48–55, Sydney, Australia. Association for Computational Linguistics. Amitava Das and Sivaji Bandyopadhyay. 2010. Topicbased Bengali opinion summarization. In *Coling* 2010: Posters, pages 232–240, Beijing, China. Coling 2010 Organizing Committee. Günes Erkan and Dragomir R Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. *Journal of Artificial Intelligence Research*, 22:457–479. George Giannakopoulos, John M Conroy, Jeff Kubina, Peter A Rankel, Elena Lloret, Josef Steinberger, Marina Litvak, and Benoit Favre. 2017. MultiLing 2017 overview. *MultiLing 2017*, page 1. George Giannakopoulos, Jeff Kubina, John Conroy, Josef Steinberger, Benoit Favre, Mijail Kabadjov, Udo Kruschwitz, and Massimo Poesio. 2015. MultiLing 2015: multilingual summarization of single and multi-documents, on-line fora, and call-center conversations. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 270–274. Donna Harman and Paul Over. 2004. The effects of human variation in DUC summarization evaluation. In *Text Summarization Branches Out*, pages 10–17, Barcelona, Spain. Association for Computational Linguistics. Tahmid Hasan, Abhik Bhattacharjee, Md. Saiful Islam, Kazi Mubasshir, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, and Rifat Shahriyar. 2021. XLsum: Large-scale multilingual abstractive summarization for 44 languages. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 4693–4703, Online. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. *Advances in neural information* processing systems, 28. Chuleerat Jaruskulchai and Canasai Kruengkrai. 2003. A practical text summarizer by paragraph extraction for Thai. In Proceedings of the Sixth International Workshop on Information Retrieval with Asian Languages, pages 9–16, Sapporo, Japan. 
Association for Computational Linguistics. Fajri Koto, Jey Han Lau, and Timothy Baldwin. 2020. Liputan6: A large-scale Indonesian dataset for text summarization. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 598–608, Suzhou, China. Association for Computational Linguistics. Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2022. Quality at a glance: An audit of web-crawled multilingual datasets. *Transactions of the Association for Computational Linguistics*, 10:50–72. Constantine Lignos, Nolan Holley, Chester PalenMichel, and Jonne Sälevä. 2022. Toward more meaningful resources for lower-resourced languages. In Findings of the Association for Computational Linguistics: ACL 2022, pages 523–532, Dublin, Ireland. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Zoey Liu, Crystal Richardson, Richard Hatcher, and Emily Prud'hommeaux. 2022. Not always about you: Prioritizing community needs when developing endangered language technology. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3933–3944, Dublin, Ireland. Association for Computational Linguistics. Hans Peter Luhn. 1958. The automatic creation of literature abstracts. *IBM Journal of research and development*, 2(2):159–165. Arthur Malajyan, Karen Avetisyan, and Tsolak Ghukasyan. 2020. ARPA: Armenian paraphrase detection corpus and models. arXiv preprint arXiv:2009.12615. Vukosi Marivate, Tshephisho Sefara, Vongani Chabalala, Keamogetswe Makhaya, Tumisho Mokgonyane, Rethabile Mokoena, and Abiodun Modupe. 2020. Investigating an approach for low resource language dataset creation, curation and classification: Setswana and sepedi. In *Proceedings of the first* workshop on Resources for African Indigenous Languages, pages 15–20, Marseille, France. European Language Resources Association (ELRA). Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-first AAAI conference on artificial intelligence. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çaglar Gulçehre, and Bing Xiang. 2016. ˘ Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. 
Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747–1759, New Orleans, Louisiana. Association for Computational Linguistics. Thi-Trang Nguyen, Huu-Hoang Nguyen, and KiemHieu Nguyen. 2020. A study on seq2seq for sentence compressionin Vietnamese. In Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation, pages 488–495, Hanoi, Vietnam. Association for Computational Linguistics. Rubungo Andre Niyongabo, Qu Hong, Julia Kreutzer, and Li Huang. 2020. KINNEWS and KIRNEWS: Benchmarking cross-lingual text classification for Kinyarwanda and Kirundi. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5507–5521, Barcelona, Spain (Online). International Committee on Computational Linguistics. Chester Palen-Michel, June Kim, and Constantine Lignos. 2022. Multilingual open text release 1: Public domain news in 44 languages. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2080–2089, Marseille, France. European Language Resources Association. Evan Sandhaus. 2008. The New York Times annotated corpus. *Linguistic Data Consortium, Philadelphia*, 6(12):e26752. Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. MLSUM: The multilingual summarization corpus. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 8051–8067, Online. Association for Computational Linguistics. Daniel Varab and Natalie Schluter. 2021. MassiveSumm: a very large-scale, very multilingual, news summarisation dataset. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 10150–10161, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Roland Vasili, Endri Xhina, Ilia Ninka, and Thomas Souliotis. 2018. A study of summarization techniques in Albanian language. *KNOWLEDGEInternational Journal*, 28(7):2251–2257. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. 
In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Xingxing Zhang, Mirella Lapata, Furu Wei, and Ming Zhou. 2018. Neural latent extractive document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 779–784, Brussels, Belgium. Association for Computational Linguistics. ## A Hyper-Parameter Settings The hyper-parameters for both individual models and multilingual models were chosen following Hasan et al. (2021). ## A.1 Individual Models For hyperparameters in training each individual model we use a learning rate of 5.0e-4, per device train batch size of 2, 16 gradient accumulation steps, 100 warm-up steps, a maximum input length of 512, a maximum inference length of 84, a beam size of 4, no repeat ngram size of 2, length penalty of 0.6, label smoothing factor of 0.1 and weight decay of 0.01. We train for 10 epochs. Training time was roughly 2 days in total to train the 12 individual models. ## A.2 Multilingual Model We use a learning rate of 1.0, 5000 warmup steps, weight decay of 0.01, per device train batch size of 2, 16 gradient accumulation steps, maximum steps of 50,000, a label smoothing factor of 0.1, and an upsampling factor of 0.5. We trained on two NVIDIA GeForce RTX 3090 GPUs. Training time was roughly 3 days. ## B Evaluating Multilingual Models On Xl-Sum We additionally evaluated the two multilingual models on the XL-Sum test data. The results can be seen in Table 8. We found that the addition of LR-Sum data did not have a positive impact on performance but instead tended to degrade model performance slightly. We speculate that despite both being news summarization datasets there could be some amount of difference in content or style that accounts for the slightly lower performance. Another plausible explanation could be that adding relatively small amounts of data for additional languages degrades performance due to the model's limited capacity to add additional languages. 
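The per-language hyper-parameters listed in Appendix A.1 map onto standard `Seq2SeqTrainingArguments` in recent versions of Hugging Face Transformers. The snippet below is our own illustration of that mapping, not the training script actually used; the output directory name is a placeholder, inputs are assumed to be truncated to 512 tokens at tokenization time, and decoding options such as the length penalty of 0.6 and no-repeat n-gram size of 2 would be passed to generation separately.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the Appendix A.1 settings for one individual mT5 model.
training_args = Seq2SeqTrainingArguments(
    output_dir="lrsum-individual-model",  # placeholder name
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    warmup_steps=100,
    num_train_epochs=10,
    weight_decay=0.01,
    label_smoothing_factor=0.1,
    predict_with_generate=True,
    generation_max_length=84,  # maximum inference length
    generation_num_beams=4,
)
```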
## C Dataset Splits Table 9 shows the dataset splits as described in Section 3.3 | LR- & XL-Sum | XL-Sum | | | | | | | |---------------------------------------------------------------------------------------------------------------|-----------|-------|-------|-------|-------|-------|-------| | Language | In LR-Sum | R1 | R2 | RL | R1 | R2 | RL | | Amharic | ✓ | 18.21 | 6.87 | 16.45 | 20.08 | 7.41 | 18.06 | | Azerbaijani | ✓ | 18.73 | 8.00 | 17.15 | 21.37 | 9.54 | 19.35 | | Bengali | ✓ | 21.79 | 8.95 | 19.03 | 24.35 | 10.12 | 21.25 | | Burmese | 15.16 | 4.49 | 13.75 | 16.17 | 5.15 | 14.41 | | | Gujarati | 20.55 | 7.15 | 18.59 | 21.94 | 7.74 | 19.91 | | | Hausa | ✓ | 37.27 | 16.07 | 29.85 | 39.42 | 17.67 | 31.64 | | Hindi | 34.88 | 14.45 | 28.93 | 36.91 | 16.32 | 30.88 | | | Igbo | 27.46 | 8.30 | 21.13 | 31.66 | 10.21 | 24.56 | | | Indonesian | ✓ | 35.03 | 15.70 | 29.13 | 36.99 | 17.02 | 30.74 | | Japanese | 39.10 | 23.17 | 31.61 | 41.71 | 25.19 | 33.65 | | | Kirundi | 29.77 | 12.65 | 23.80 | 31.99 | 14.44 | 25.82 | | | Korean | ✓ | 21.89 | 10.51 | 20.54 | 23.76 | 11.53 | 22.42 | | Kyrgyz | 16.24 | 7.17 | 14.72 | 18.36 | 8.02 | 16.46 | | | Marathi | 20.48 | 8.64 | 18.51 | 22.05 | 9.54 | 20.02 | | | Nepali | 24.55 | 9.12 | 22.25 | 26.58 | 10.22 | 24.24 | | | Oromo | 16.37 | 5.42 | 14.40 | 18.75 | 6.22 | 16.16 | | | Pashto | ✓ | 36.30 | 13.74 | 29.71 | 38.25 | 15.48 | 31.74 | | Persian | ✓ | 33.47 | 13.22 | 26.86 | 35.71 | 15.06 | 29.12 | | Portuguese | ✓ | 33.52 | 13.85 | 25.91 | 35.29 | 15.39 | 27.50 | | Punjabi | 28.82 | 10.73 | 23.92 | 30.80 | 12.18 | 25.56 | | | Russian | ✓ | 22.75 | 9.10 | 19.31 | 25.28 | 10.78 | 21.51 | | Scottish Gaelic | 27.37 | 9.81 | 22.00 | 29.04 | 10.95 | 22.89 | | | Serbian-cyrillic | 21.07 | 6.48 | 17.84 | 23.76 | 7.98 | 20.15 | | | Serbian-latin | ✓ | 20.58 | 5.70 | 17.14 | 21.64 | 6.68 | 18.24 | | Sinhala | 20.73 | 7.95 | 17.99 | 21.47 | 8.06 | 18.85 | | | Somali | ✓ | 30.13 | 10.54 | 22.97 | 31.52 | 11.53 | 24.21 | | Swahili | ✓ | 36.52 | 17.14 | 29.72 | 37.67 | 17.86 | 30.94 | | Tamil | 22.54 | 9.97 | 20.56 | 24.33 | 11.03 | 22.06 | | | Telugu | 16.12 | 5.26 | 14.44 | 17.72 | 5.72 | 15.84 | | | Thai | ✓ | 10.34 | 4.07 | 9.90 | 12.28 | 4.78 | 11.87 | | Tigrinya | ✓ | 23.01 | 7.05 | 19.21 | 25.25 | 7.99 | 21.08 | | Turkish | ✓ | 26.07 | 11.95 | 23.56 | 28.90 | 13.79 | 26.15 | | Ukrainian | ✓ | 22.06 | 8.90 | 19.24 | 23.99 | 10.14 | 20.92 | | Urdu | ✓ | 37.39 | 16.48 | 30.79 | 39.48 | 18.33 | 32.81 | | Uzbek | ✓ | 15.52 | 5.58 | 14.18 | 16.82 | 6.35 | 15.35 | | Vietnamese | ✓ | 28.11 | 12.80 | 22.04 | 30.26 | 14.38 | 24.14 | | Welsh | 30.43 | 9.73 | 24.46 | 32.62 | 11.61 | 26.12 | | | West African Pidgin | 36.54 | 14.29 | 28.56 | 37.98 | 15.11 | 29.86 | | | Yoruba | 29.45 | 10.36 | 23.11 | 31.62 | 11.66 | 25.06 | | | Table 8: Results evaluating on the XL-Sum test set. 
Results from an mT5 multilingual model fine-tuned on both | | | | | | | | | Language | ISO 639-3 | Train | Validation | Test | |--------------------------------------------------------------------------|-------------|---------|--------------|--------| | Albanian | sqi | 18,312 | 2,289 | 2,289 | | Amharic | amh | 0 | 0 | 154 | | Armenian | hye | 920 | 500 | 500 | | Azerbaijani | aze | 6,487 | 810 | 811 | | Bangla | ben | 0 | 0 | 715 | | Bosnian | bos | 11,648 | 1,455 | 1,456 | | Burmese | mya | 7,921 | 990 | 990 | | Chinese Simplified | cmn | 2,103 | 500 | 500 | | Chinese Traditional | cmn | 483 | 500 | 500 | | Dari Persian | prs | 12,037 | 1,504 | 1,505 | | English | eng | 20,976 | 2,621 | 2,622 | | French | fra | 1,126 | 500 | 500 | | Georgian | kat | 511 | 500 | 500 | | Greek | ell | 0 | 0 | 583 | | Haitian Creole | hat | 452 | 500 | 500 | | Hausa | hau | 0 | 0 | 390 | | Indonesian | ind | 968 | 500 | 500 | | Khmer | khm | 3,888 | 486 | 486 | | Kinyarwanda | kin | 0 | 0 | 698 | | Korean | kor | 10,499 | 1,312 | 1,312 | | Kurmanji Kurdish | kur | 791 | 500 | 500 | | Lao | lao | 11,964 | 1,495 | 1,496 | | Macedonian | mkd | 1,223 | 500 | 500 | | Northern Ndebele | nde | 1,739 | 500 | 500 | | Pashto | pus | 16,854 | 2,106 | 2,107 | | Persian Farsi | fas | 10,744 | 1,342 | 1,343 | | Portuguese | por | 643 | 500 | 500 | | Russian | rus | 10,812 | 1,351 | 1,351 | | Serbian | srp | 4,974 | 621 | 622 | | Shona | sna | 383 | 500 | 500 | | Somali | som | 0 | 0 | 165 | | Sorani Kurdish | kur | 1,230 | 500 | 500 | | Spanish | spa | 2,544 | 500 | 500 | | Swahili | swh | 0 | 0 | 588 | | Thai | tha | 2,278 | 500 | 500 | | Tibetan | bod | 0 | 0 | 182 | | Tigrinya | tir | 0 | 0 | 115 | | Turkish | tur | 28,672 | 3,583 | 3,584 | | Ukrainian | ukr | 5,784 | 722 | 723 | | Urdu | urd | 10,847 | 1,355 | 1,356 | | Uzbek | uzb | 9,568 | 1,195 | 1,196 | | Vietnamese | vie | 11,676 | 1,459 | 1,460 | | Table 9: Train, validation, and test split sizes for LR-Sum by language. | | | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section is unnumbered but is last section. (would be section 7). ✗ A2. Did you discuss any potential risks of your work? The risks of this work are no different than the risks of automatic summarization more broadly. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And 4 ✓ B1. Did you cite the creators of artifacts you used? Section 3 and 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 1 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 1 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3 ✓ B6. 
Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. See Section 3 and Appendix C ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? See Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? See Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 footnote 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 footnote 5 and Appendix A D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
mohammadshahi-etal-2023-rquge
RQUGE: Reference-Free Metric for Evaluating Question Generation by Answering the Question
https://aclanthology.org/2023.findings-acl.428
Existing metrics for evaluating the quality of automatically generated questions, such as BLEU, ROUGE, BERTScore, and BLEURT, compare the reference and predicted questions, providing a high score when there is considerable lexical overlap or semantic similarity between the candidate and the reference questions. This approach has two major shortcomings. First, we need expensive human-provided reference questions. Second, it penalises valid questions that may not have high lexical or semantic similarity to the reference questions. In this paper, we propose a new metric, RQUGE, based on the answerability of the candidate question given the context. The metric consists of a question-answering module and a span-scorer module, both using pre-trained models from the existing literature, so it can be used without any further training. We demonstrate that RQUGE has a higher correlation with human judgment without relying on the reference question. Additionally, RQUGE is shown to be more robust to several adversarial corruptions. Furthermore, we illustrate that we can significantly improve the performance of QA models on out-of-domain datasets by fine-tuning on synthetic data generated by a question generation model and reranked by RQUGE.
# Rquge: Reference-Free Metric For Evaluating Question Generation By Answering The Question Alireza Mohammadshahi∗ 1,2,3 Thomas Scialom1 **Majid Yazdani**† 4 Pouya Yanki1 Angela Fan1 James Henderson2 **Marzieh Saeidi**1 1 Meta AI 2IDIAP Research Institute 3 EPFL 4 BYJU's LAB {alireza.mohammadshahi,james.henderson}@idiap.ch {tscialom,pya,angelafan,marzieh}@meta.com {majid.yazdani}@byjus.com ## Abstract Existing metrics for evaluating the quality of automatically generated questions such as BLEU, ROUGE, BERTScore, and BLEURT compare the reference and predicted questions, providing a high score when there is a considerable lexical overlap or semantic similarity between the candidate and the reference questions. This approach has two major shortcomings. First, we need expensive human-provided reference questions. Second, it penalises valid questions that may not have high lexical or semantic similarity to the reference questions. In this paper, we propose a new metric, RQUGE, based on the answerability of the candidate question given the context. The metric consists of a question-answering and a span scorer modules, using pre-trained models from existing literature, thus it can be used without any further training. We demonstrate that RQUGE has a higher correlation with human judgment without relying on the reference question. Additionally, RQUGE is shown to be more robust to several adversarial corruptions. Furthermore, we illustrate that we can significantly improve the performance of QA models on out-of-domain datasets by fine-tuning on synthetic data generated by a question generation model and reranked by RQUGE.1 ## 1 Introduction Given the context (e.g. paragraph), the goal of question generation (QG) is to generate questions with or without providing the answer spans. Automatic question generation can be used in several applications: improving the question answering (QA) task (Duan et al., 2017; Du and Cardie, 2018; Puri et al., 2020; Cheng et al., 2021), automatic assess- ![0_image_0.png](0_image_0.png) Figure 1: Normalised scores for different candidate questions. Metrics based on similarity to a reference question can penalise valid candidate questions, and compute a high score for unacceptable questions that are lexically similar to the reference. This can lead to the failure of reference-based metrics for valid questions, such as Q1. Additionally, even paraphrases of the reference, like Q2, may receive low scores. Furthermore, reference-based metrics may not detect small corruptions or variations in the reference, such as Q3. ments (Rebuffel et al., 2021; Lee et al., 2021), especially for the educational domain (Chen et al., 2018), and the evaluation of factual consistency in the text generation tasks (Scialom et al., 2019a, 2021; Fabbri et al., 2022). Previous work (Hosking and Riedel, 2019; Scialom et al., 2019b; Zhang and Bansal, 2019; Laban et al., 2022) has shown that QG models can generate questions inconsistent with the corresponding context and the answer span. So, measuring the acceptability of candidate questions is a critical challenge. Human judgment is the most accurate method in natural language generation, but it is expensive, time-consuming, and not scalable. Consequently, several metrics e.g. BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), BERTScore (Zhang et al., 2020) are proposed to automatically measure the quality of the generated text. Specifically for the question generation task, previous work has utilised reference-based metrics e.g. 
BLEU, ROUGE, BERTScore, and BLEURT (Sellam et al., 2020; Ushio et al., 2023a,b, 2022) to evaluate the quality of the candidate question given the reference question. However, these methods highly depend on the diversity of the reference questions for a given answer span. Due to the huge cost of human annotations, existing QA/QG datasets mostly provide one reference question for the given context and answer, which results in wrongly penalising some valid questions. In Figure 1, the first candidate question (Q1) is generated by paying attention to different evidence in the context, and Q2 is a paraphrase of the reference, but both BLEU and BERTScore fail to assign high scores to them. Furthermore, reference-based metrics are not sensitive to very small corruptions of the reference questions, which makes the candidate question unacceptable (Q3). In this paper, we propose RQUGE , a Referencefree QUestion Generation Evaluation metric that can compute the quality of the candidate question without requiring a reference question. Given the corresponding context and answer span, our metric calculates the acceptability score by applying a general question-answering module, followed by a span scorer. The former module generates the answer span for the given candidate question, and the latter computes the semantic similarity of the predicted and gold answer spans. Our metric is extremely valuable in cases where the reference question is not well-formed 2 or there is one (or no) reference for a given context and answer span. We evaluate our metric on several datasets, including SQuAD (v1) (Rajpurkar et al., 2016), Natural Questions (NQ) (Kwiatkowski et al., 2019), and MS-MARCO (Bonifacio et al., 2021), and show that it consistently has a better correlation with human judgment compared to previous QG metrics. We also integrate RQUGE into the decoding step by re-ranking candidate questions of each instance by our metric, leading to a better correlation with the human evaluation. Additionally, we demonstrate that RQUGE is more robust to adversaries than previous metrics with +13.1% relative improvement. Finally, we improve the performance of question answering models on an out-of-domain dataset by fine-tuning them on synthetic data generated by a question generation model, then re-ranked with RQUGE to choose the best candidate question for the given answer span, resulting in an +18.3% F1 and +22.2% EM relative improvement. To sum up, our contributions are as follows: - We propose RQUGE, an evaluation metric for measuring the quality of the automatically generated questions, without requiring access to any reference questions. - We show that our metric has a significantly higher correlation with human judgment in terms of the acceptability of the candidate questions on SQuAD (v1), NQ, and MSMARCO datasets. Also, re-ranking candidate questions with RQUGE leads to a better correlation with human judgment. - We demonstrate that RQUGE metric is more robust compared to previous work on several adversarial strategies such as negation, entity swapping, gender reversing, or paraphrasing the reference questions. - Finally, we illustrate that the performance of QA models significantly improves on the outof-domain datasets by fine-tuning them on the synthetic data, created by applying a question generator model, then re-ranking with RQUGE metric. ## 2 Related Work Previous work on automatic evaluation of Natural Language Generation (NLG) tasks have been categorized as follows: Unsupervised Metrics. 
It contains the most commonly used metrics e.g. BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), chrF (Popovic´, 2015), and METEOR (Denkowski and Lavie, 2010). These metrics calculate the correlation of reference ![2_image_0.png](2_image_0.png) and predicted sequences in a discrete space by utilising token-level matching functions. Then, recent work e.g. BERTScore (Zhang et al., 2020) and MoverScore (Zhao et al., 2019) use BERT (Devlin et al., 2019) embeddings to provide a soft token-level matching instead of the hard n-gram overlap. These metrics have been applied to various NLG tasks (Du et al., 2017; Zhou et al., 2017; Xiong et al., 2019; Pan et al., 2020; Lewis et al., 2020; Cheng et al., 2021; Mohammadshahi et al., 2022a,b). Specifically for QG evaluation, Nema and Khapra (2018) propose a scoring function, focusing on the *answerability* of the candidate question, which improves the human correlation when integrated with existing unsupervised metrics. Regression-based Metrics. These metrics e.g. COMET (Rei et al., 2020), BLEURT (Sellam et al., 2020), S3 (Peyrard et al., 2017), and VRM (Hirao et al., 2007) train a regression layer in a supervised manner to mimic the human judgment. Ranking-based Metrics. The aim of these metrics is to assign a higher score to a better candidate compared to worse predictions. The most popular ones include BEER (Stanojevic and Sima'an ´ , 2014) and COMET (Rei et al., 2020). Generation-based Metrics. The idea is to formulate the evaluation of NLG, as a text generation problem from pre-trained language models. Given the source sequence, the better candidate should be generated with a higher score (probability) compared to the worse ones. The most popular ones are BARTScore (Yuan et al., 2021) and PRISM (Thompson and Post, 2020). Additionally, we include CTC (Deng et al., 2021) and QRelScore (Wang et al., 2022) as referencefree metrics for better comparison. CTC (Deng et al., 2021) proposes an evaluation framework for NLG tasks by providing several reference-free metrics, which are computed by aggregating the alignment scores between the input, context and the predicted sequence. To measure the alignment, CTC (Deng et al., 2021) uses BERT (Devlin et al., 2019) embedding matching, discriminative, and regression models. 3 QRelScore computes the answerability of the candidate question by applying word-level hierarchical matching and sentence-level prompt-based generation. Different from previous work, RQUGE combines question answering and span scoring modules to compute the acceptability of the candidate, which leads to a significantly better correlation with human judgement in multiple datasets with different domains and answer lengths. ## 3 Rquge Architecture RQUGE architecture is illustrated in Figure 2. It consists of two components: question answering and span scorer modules. Given the context, gold answer span, and the candidate question, generated by a question generation model (QG), RQUGE 3CTC can be considered as an unsupervised metric when BERT embeddings are used to compute the alignment. computes the acceptance score (κ) of the candidate question as follows: $$\begin{array}{l}{{\left\{\begin{array}{l l}{a_{c}=\mathrm{QA}(q_{c},D)}\\ {\kappa=\mathrm{S}(q_{c},a_{c},a_{r},D)}\end{array}\right.}}\end{array}$$ $$(1)$$ where the qc = QG(ar, D) is the generated candidate question for the gold answer span ar and context D. To calculate the score, the question answering model QA(.) 
predicts the answer span ac, given the candidate question qc and the context D. Finally, the span scorer S(.) computes the acceptance score κ, conditioned on the candidate question, predicted answer, gold answer, and context. In the following, we will describe each module in detail. ## 3.1 Question Answering Module Given the context and the candidate question, the question answering model predicts the answer span. To make our metric general to several domains, we use UnifiedQAv2 (Khashabi et al., 2022) model to generate the answer span. UnifiedQAv2 is a T5based encoder-decoder model, which is trained on 20 QA datasets, and achieves competitive performance with the state-of-the-art models in several in-domain and out-of-domain datasets.4 The input to the model is the concatenation of the candidate question and corresponding context. ## 3.2 Span Scorer Module Given the predicted answer span ac of the candidate question qc, the span scorer calculates the score (ranging from 1 to 5) of the candidate question. Inspired by Chen et al. (2020) and Fabbri et al. (2022), we use an encoder-only BERT-based model to calculate the acceptance score. Specifically, we first encode the input sequence, then pass the vector representation of [CLS] to the regression layer to compute the acceptance score κ. The input to the module is: [CLS] cand. question [q] gold answer [r] pred answer [c] context We employ pre-trained RoBERTa model, provided by Fabbri et al. (2022). The model is first pretrained with a QA-infused pre-training objective,5 4We refer to Khashabi et al. (2022) for further details. A list of evaluated datasets is provided in Appendix A. We use unifiedqa-v2-t5-large checkpoint, provided in https: //github.com/allenai/unifiedqa. 5It includes pre-training contextual embeddings with a bi-encoder extractive QA loss, which results in encoding then fine-tuned on MOCHA human ratings QA dataset (Chen et al., 2020). MOCHA is a dataset of human judgment scores for training and testing reading comprehension metrics, where annotators are asked to score candidate spans, given the context, gold answer, and the corresponding question. ## 4 Experimental Setup Datasets. We evaluate metrics on three widelyused QA datasets, including SQuAD(v1) (Rajpurkar et al., 2016), NQ (Kwiatkowski et al., 2019), and MS-MARCO (Bonifacio et al., 2021). NQ is used to demonstrate the benefit of our metric in cases where reference questions are not wellformed and are derived from the Google engine. 6 For MS-MARCO, we use DESCRIPTION type of the dataset to show the effectiveness of our metric on candidate questions with long answer spans (13 tokens on average). Unlike SQuAD and NQ, MSMARCO is not included in the training data of the question answering module of RQUGE (i.e. UnifiedQAv2 (Khashabi et al., 2022)). We use MS-MARCO to demonstrate that RQUGE can be generalised to out-of-domain datasets. 7 Question Generators. We fine-tune two commonly used QG models, including GPT2 (Radford et al., 2019), trained with causal language modelling objective, and MixQG (Murakhovs'ka et al., 2022), which is the state-of-the-art question generator and is a T5-based (Raffel et al., 2020) sequenceto-sequence model. We choose GPT2 and MixQG as our question generators as there is a significant gap in their performance, making them suitable for evaluating the metrics. 8 Baselines. 
We include BLEU-4 (Papineni et al., 2002), ROUGE-1, ROUGE-L (Lin, 2004), METEOR (Denkowski and Lavie, 2010), MoverScore (Zhao et al., 2019), and BERTScore (Zhang et al., 2020) 9as unsupervised metrics, that are commonly used for the question generation task. We additionally use QBLEU, which is specific for | Metric | Grammaticality | Answerability | Relevance | | | | | | | |----------------------------|------------------|-----------------|-------------|-------|-------|-------|-------|-------|-------| | r | ρ | τ | r | ρ | τ | r | ρ | τ | | | Unsupervised BLEU-4 | 0.133 | 0.096 | 0.077 | 0.273 | 0.335 | 0.258 | 0.213 | 0.235 | 0.191 | | ROUGE-1 | 0.156 | 0.096 | 0.077 | 0.312 | 0.274 | 0.217 | 0.330 | 0.322 | 0.264 | | ROUGE-L | 0.210 | 0.148 | 0.120 | 0.321 | 0.294 | 0.233 | 0.322 | 0.316 | 0.259 | | METEOR | 0.143 | 0.086 | 0.069 | 0.334 | 0.321 | 0.251 | 0.317 | 0.315 | 0.255 | | QBLEU | 0.160 | 0.134 | 0.106 | 0.227 | 0.235 | 0.183 | 0.240 | 0.248 | 0.200 | | MOVERScore | 0.161 | 0.103 | 0.082 | 0.294 | 0.318 | 0.248 | 0.280 | 0.313 | 0.254 | | BERTScore | 0.262 | 0.203 | 0.160 | 0.336 | 0.333 | 0.260 | 0.309 | 0.311 | 0.253 | | Regression-based BLEURT-20 | 0.203 | 0.144 | 0.113 | 0.359 | 0.341 | 0.268 | 0.363 | 0.363 | 0.295 | | Ranking-based COMET | 0.309 | 0.274 | 0.215 | 0.319 | 0.312 | 0.243 | 0.300 | 0.307 | 0.248 | | Generation-based BARTScore | 0.212 | 0.145 | 0.115 | 0.349 | 0.345 | 0.269 | 0.332 | 0.323 | 0.262 | | Ref-Free CTC | 0.120 | 0.131 | 0.110 | 0.291 | 0.243 | 0.185 | 0.195 | 0.179 | 0.145 | | QRelScore | 0.202 | 0.102 | 0.102 | 0.366 | 0.285 | 0.22 | 0.294 | 0.212 | 0.188 | | RQUGE | 0.380 | 0.278 | 0.220 | 0.604 | 0.436 | 0.344 | 0.551 | 0.403 | 0.325 | QG evaluation. Furthermore, we utilise BLEURT20 (Sellam et al., 2020), and COMET (Rei et al., 2020) as regression-based and ranking-based metrics. Finally, we include BARTScore (Yuan et al., 2021), fine-tuned on ParaBank2 (Hu et al., 2019). In all aforementioned metrics, scores are calculated between the candidate and reference questions. As reference-free baselines, we use QRelScore (Wang et al., 2022), and also adopt the factual consistency scorer of CTC (Deng et al., 2021) to calculate the consistency score as: κctc = mean(align([ar, D] → qc)) where ar, D, and qc are answer span, context, and candidate question, respectively. The function align(.) estimates the alignment for tokens of the candidate question with the given context and answer. we use albert-xlarge-vitaminc-mnli, which uses a discriminative model to compute the alignment.10 Human Annotation. We use 3 volunteer annotators to rate the candidate questions of QG models.11 All annotators are fluent English speakers. Inspired by previous work (Rus et al., 2010; Nema and Khapra, 2018), we ask annotators to score each candidate question based on three criteria: grammaticality, *answerability*, and *relevance*. Grammaticality measures the syntactic structure of the question. Answerability checks whether the question contains all the important entities, and relevance checks the relatedness of the generated questions with the given answer span. Grammaticality and answerability scores are on a 3-point scale (3 as acceptable, and 1 as rejection), and relevance is on a 2-point scale. We sample 600 questions generated from fine-tuned QG models on SQuAD(v1) (Rajpurkar et al., 2016), NQ (Kwiatkowski et al., 2019), and MS-MARCO (Bonifacio et al., 2021) datasets. We then randomly shuffle and anonymise them for annotators. 
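Section 5.1 reports agreement between metric scores and the averaged annotator ratings as Pearson (r), Spearman (ρ), and Kendall (τ) correlations (Table 1). As a minimal sketch of how such metric-to-human correlations could be computed, assuming two aligned lists of per-question scores are already available (the values below are illustrative placeholders, not results from the paper), one could use scipy.stats:

```python
# Minimal sketch: correlate automatic metric scores with averaged human ratings,
# as in Table 1. `metric_scores` and `human_scores` are assumed to be aligned
# per-question lists; the values here are illustrative placeholders only.
from scipy.stats import pearsonr, spearmanr, kendalltau

def metric_to_human_correlation(metric_scores, human_scores):
    r, _ = pearsonr(metric_scores, human_scores)      # Pearson r
    rho, _ = spearmanr(metric_scores, human_scores)   # Spearman rho
    tau, _ = kendalltau(metric_scores, human_scores)  # Kendall tau
    return {"r": r, "rho": rho, "tau": tau}

metric_scores = [4.2, 1.7, 3.9, 2.8, 4.8]   # e.g. hypothetical RQUGE scores (1-5)
human_scores = [3.0, 1.0, 2.7, 2.3, 3.0]    # e.g. averaged answerability ratings
print(metric_to_human_correlation(metric_scores, human_scores))
```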
Further details of the human annotation procedure are provided in Appendix C. 12 ## 5 Results And Discussion We evaluate our RQUGE and previous metrics on various datasets and tasks. First, we evaluate the correlation of metrics with human judgment in Sections 5.1 and 5.2. We then demonstrate their robustness on the adversarial subset in Section 5.3. ![5_image_0.png](5_image_0.png) Finally, Section 5.4 illustrates that fine-tuning QA models on the synthetic data, created by our metric, improves their performance on out-of-domain datasets. ## 5.1 Correlation With Human Judgment Annotator Agreement. The pairwise interannotator agreements, calculated using Cohen's Kappa are 88.91%, 85.32%, and 83.54%. 13 We use the average score of three annotators for the remaining experiments. Metric-to-Human Correlation. Table 1 illustrates the correlations of automatic metrics with the human judgment, averaged over all datasets.14 RQUGE metric has a considerably higher correlation with human judgment on all criteria. For instance, it outperforms the best previous work with +7.1%, +23.8%, +18.8% absolute improvement (based on Pearson (r) score) for grammaticality, *answerability*, and *relevance*, respectively. Appendix D illustrates the result of correlation with the human judgment for each dataset, separately. In in-domain evaluation sets, RQUGE results in 13Calculated on the summation of grammaticality, answerability, and relevance. 14For a fair comparison on *grammaticality*, we just evaluate on SQuAD evaluation set, where reference questions are mostly well-formed. +29.7% and +24.8% absolute point improvement for SQuAD and NQ datasets, respectively, based on answerability measurement. For MS-MARCO as the out-of-domain dataset, RQUGE reaches +12.2% absolute improvement for the relevancy criterion, while having competitive results with CTC on answerability measurement. These results show the effectiveness of our metric in different domains, and question structures (well-formedness) and confirm the generalisation of our metric to outof-domain settings. ## 5.2 Re-Ranking With Rquge To further demonstrate the effectiveness of RQUGE, we use it to re-rank the output predictions of the question generation model to choose the best generated question. Given the context and answer span, QG model 15 generates a bag of candidate questions (here, we apply Nucleus sampling (Holtzman et al., 2020) to increase the diversity)16, that are sorted based on the perplexity (PPL) of the question generator. At each step, we choose K- ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) Paraphrasing **Orange County** is a rapidly developing business center that includes Downtown Santa Ana, the Negation **Ondemar Dias** is accredited Entity Swap ..., a plague claimed some 1.7 Reverse Gender ... For example, **Joseph Haas** first candidates of each sample and re-rank them with RQUGE. Then, other automatic metrics are computed for the best one chosen by RQUGE. Figure 3 demonstrates the relative score gains of QG metrics compared to K = 1 (best candidate by PPL) for different values of K. 17 Interestingly, it illustrates that other metrics drop as the number of candidate outputs of each sample (K) is increasing, meaning that the best candidates chosen by our metric are not correlated with other metrics. To confirm these results, annotators are asked to score 250 samples 18 from the evaluation dataset. 
For each sample, the best candidate questions of K = 1 (best predictions of PPL), K = 5 (where the RQUGE is starting to become *plateau*), and K = 50 (best predictions of RQUGE) are chosen, and annotators are asked to score these three generated questions based on criteria defined in Section 4. 19 Figure 4 depicts the relative difference of NLG metrics compared to the score of K = 1, alongside human scores. We can see from the figure that annotators significantly prefer highest ranking questions of K = 5 and K = 50 chosen by RQUGE compared to the best candidate questions picked based on PPL, while the average scores of other automatic metrics drop as the number of candidate questions (K) increases. For K = 5, the average human score of highest ranking questions is rela- ![6_image_2.png](6_image_2.png) tively +7.49% better than best candidate questions, chosen by PPL of question generator (K = 1). It confirms the effectiveness of RQUGE , when integrated into the decoding step. Additionally, there is not a significant difference (-1.1%) between the average human scores of candidate questions chosen based on RQUGE for K = 5 and K = 50. This is correlated with Figure 3, as RQUGE also becomes plateau from K = 5. Figure 5 illustrates an example in which annotators prefer the second candidate question, while BLEU-4 and BERTScore compute higher scores for the first candidate question. 20 ## 5.3 Robustness Analysis To further assess the robustness of the QG metrics on adversarial corruptions of reference questions, we evaluate metrics on a subset of positive and negative samples, created from SQuAD (Ra-20For transparency, we will provide the human evaluation of the re-ranking experiment. | Metric | Total | Neg. | Rev. Gen | Swap Ents | |----------------------------|---------|--------|------------|-------------| | Unsupervised BLEU4 | 0.239 | 0.241 | 0.219 | 0.241 | | ROUGE-1 | 0.148 | 0.126 | 0.209 | 0.272 | | ROUGE-L | 0.13 | 0.11 | 0.209 | 0.272 | | METEOR | 0.13 | 0.09 | 0.198 | 0.250 | | QBLEU | 0.220 | 0.172 | 0.117 | 0.558 | | MOVERScore | 0.180 | 0.169 | 0.161 | 0.236 | | BERTScore | 0.408 | 0.44 | 0.148 | 0.285 | | Regression-based BLEURT-20 | 0.632 | 0.69 | 0.24 | 0.489 | | Ranking-based COMET | 0.456 | 0.523 | 0.137 | 0.216 | | Generation-based BARTScore | 0.581 | 0.647 | 0.205 | 0.336 | | Ref-Free CTC | 0.539 | 0.576 | 0.372 | 0.376 | | QRelScore | 0.546 | 0.566 | 0.420 | 0.535 | | RQUGE | 0.715 | 0.759 | 0.371 | 0.57 | jpurkar et al., 2016) evaluation set, as shown in Table 2. 21 The remaining subset contains 2,500 samples equally selected from positive and negative questions. Inspired by Chen et al. (2021) and Honovich et al. (2022), positive questions are paraphrases of references, created by two methods, either translating to a high-resource language, then back-translating to English, or applying a T5 (Raffel et al., 2020) model fine-tuned on Quora paraphrasing task.22 For negative samples, we use three strategies: negation, reversing the gender, and swapping the entities of the reference question with relevant entities in the corresponding context. Further details of the adversarial evaluation set are provided in Appendix F. Results and Discussion. We use RQUGE and previous metrics on the adversarial subset to classify the corrupted candidate questions based on their acceptability score. Table 3 illustrates the area of the ROC curve of QG metrics on the adversarial subset. 
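As a minimal sketch of the ROC-based comparison underlying Table 3, assuming each metric's scores for the positive (paraphrased) and negative (corrupted) questions have already been computed, the area under the ROC curve could be obtained with scikit-learn; the score values below are illustrative placeholders, not results from the paper:

```python
# Minimal sketch: use a metric's score as a detector of corrupted questions and
# report the ROC AUC, as in Table 3. Positive = valid paraphrases of the
# reference; negative = negated / entity-swapped / gender-reversed corruptions.
# The score lists are assumed inputs (placeholders), not values from the paper.
from sklearn.metrics import roc_auc_score

def adversarial_auc(positive_scores, negative_scores):
    labels = [1] * len(positive_scores) + [0] * len(negative_scores)
    scores = list(positive_scores) + list(negative_scores)
    return roc_auc_score(labels, scores)

print(adversarial_auc([4.6, 4.1, 3.8, 4.9], [1.9, 2.6, 3.0, 1.4]))
```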
23 Overall, RQUGE metric significantly 21For a fair comparison, we omit NQ, and MS-MARCO evaluation sets as their reference questions are not always well-formed. 22https://www.kaggle.com/competitions/ quora-question-pairs/data. 23Number of instances for negation, gender reversing, and entity swapping are 1000, 150, and 100, respectively. outperforms BLEURT-20 (the best previous work) by +13.1% relative improvement. Previous unsupervised metrics drop significantly for all types of negative samples, while BLEURT-20, BARTScore, and reference-free metrics perform better comparatively, especially for *negation*. Our RQUGE metric decreases the error relatively by +22.2% and +7.5% for negation, and entity swapping compared to previous work and has the second-best results on reversing the gender. This confirms the robustness of our metric for different adversarial corruption. ## 5.4 Domain Adaptation Of Qa Task Generated questions using a QG model can be used to improve the performance of a question answering model on out-of-domain datasets. In this section, we show that fine-tuning on the generated synthetic data, re-ranked with RQUGE improves the performance of the question answering model. Implementation Details. For the out-of-domain dataset, we choose MS-MARCO (Bonifacio et al., 2021) dataset, since the UnifiedQAv2 (Khashabi et al., 2022) (utilised in the calculation of RQUGE) has not used it for training. 24 Given the context, we apply Stanza (Qi et al., 2020a) Named-Entity Recognition (NER) model to extract candidate answer spans. A QG model is then applied to a randomly chosen candidate span, creating a bag of output predictions, using Nucleus sampling (Holtzman et al., 2020). Then, we apply the same reranking mechanism, described in Section 5.2 using RQUGE , CTC, QRelScore, and the PPL of the QG. We also use a beam search of size 5 with no re-ranking as a baseline. We use MixQG (Murakhovs'ka et al., 2022) to generate questions. For QA, we first fine-tune T5-small (Raffel et al., 2020) on SQuAD (Rajpurkar et al., 2016) (zero-shot for our setting), then fine-tune it on the generated synthetic data. Further implementation details and hyper-parameters are provided in Appendix G. Results and Discussion. Figure 6 demonstrates the performance of the QA model on the out-ofdomain dataset, fine-tuned for different amounts of synthetic data. Generally, fine-tuned QA model reaches significantly better performance compared to the zero-shot setting. This is important for domains in which we do not have annotated QA data. Furthermore, fine-tuning on the re-ranked data with 24For compatibility with the NER model, we use instances with *short-form* answer span (less than 4 tokens). ![8_image_0.png](8_image_0.png) RQUGE consistently improves the performance of the QA model for a different amount of synthetic data, compared to other baselines. Specifically, it significantly outperforms baselines by +18.3% F1, and +22.2% EM, on average. It again shows the effectiveness of our RQUGE by employing it in the domain adaptation of QA models for the outof-domain dataset. ## 6 Conclusion We propose RQUGE , Reference-free QUestion Generation Evaluation metric to measure the quality of the generated questions, by better encoding the relevant context and answer without requiring a reference question. It consists of two modules, a question answering model, and a span scorer, which are existing pre-trained models without further fine-tuning. 
We compare the performance of RQUGE with existing QG metrics on SQuAD, MSMARCO, and NQ datasets, and show that RQUGE achieves a significantly better correlation with human judgment. Additionally, we integrate RQUGE into the decoding step by using it to re-rank the candidate questions, which leads to a better correlation with human. For robustness, we evaluate QG metrics on adversarial data by corrupting the reference questions and show that RQUGE achieves significantly better performance compared to previous work. Finally, we show that fine-tuning QA models on the synthetic data, generated with a QG model and re-ranked with RQUGE , improves the performance of QA models on out-of-domain datasets. ## Acknowledgement We thank Parth Pathak, Yatharf Saraf, and Omprakash Sonie for their helpful discussion and support. We are grateful to anonymous reviewers for their fruitful comments and corrections. ## Limitations The main limitation of our work is that we have applied and verified the effectiveness of our metric on the English question answering datasets. Since RQUGE depends on a strong question answering module, one has to find an alternative model to the UnifiedQA (Khashabi et al., 2022) we have used in calculation of RQUGE. Additionally, we did an error analysis on the subset that RQUGE and human evaluation have a significant difference in Appendix H, which shows that mistakes are categorised into syntactic-based and knowledge-based errors. It gives us directions for future improvement of RQUGE metric. ## References Stéphane Aroca-Ouellette, Cory Paik, Alessandro Roncone, and Katharina Kann. 2021. PROST: Physical reasoning about objects through space and time. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4597–4608, Online. Association for Computational Linguistics. Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp. 2020. Beat the ai: Investigating adversarial human annotation for reading comprehension. Transactions of the Association for Computational Linguistics, 8:662–678. Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D. Manning. 2014. Modeling biological processes for reading comprehension. In *EMNLP*. Steven Bird, Ewan Klein, and Edward Loper. 2009. *Natural language processing with Python: analyzing text* with the natural language toolkit. " O'Reilly Media, Inc.". Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2019. Piqa: Reasoning about physical commonsense in natural language. Luiz Bonifacio, Vitor Jeronymo, Hugo Queiroz Abonizio, Israel Campiotti, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. 2021. mmarco: A multilingual version of the ms marco passage ranking dataset. Anthony Chen, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2020. MOCHA: A dataset for training and evaluating generative reading comprehension metrics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6521–6532, Online. Association for Computational Linguistics. Guanliang Chen, Jie Yang, Claudia Hauff, and GeertJan Houben. 2018. Learningq: A large-scale dataset for educational question generation. *Proceedings* of the International AAAI Conference on Web and Social Media, 12(1). Yiran Chen, Pengfei Liu, and Xipeng Qiu. 2021. Are factuality checkers reliable? adversarial metaevaluation of factuality in summarization. 
In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 2082–2095, Punta Cana, Dominican Republic. Association for Computational Linguistics. Yi Cheng, Siyao Li, Bang Liu, Ruihui Zhao, Sujian Li, Chenghua Lin, and Yefeng Zheng. 2021. Guiding the growth: Difficulty-controllable question generation through step-by-step rewriting. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5968–5978, Online. Association for Computational Linguistics. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. *ArXiv*, abs/1803.05457. Shaobo Cui, Xintong Bao, Xinxing Zu, Yangyang Guo, Zhongzhou Zhao, Ji Zhang, and Haiqing Chen. 2021. Onestop qamaker: Extract question-answer pairs from text in a one-stop approach. Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Noah A. ´ Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In *EMNLP*. Mingkai Deng, Bowen Tan, Zhengzhong Liu, Eric Xing, and Zhiting Hu. 2021. Compression, transduction, and creation: A unified framework for evaluating natural language generation. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 7580–7605, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Michael Denkowski and Alon Lavie. 2010. Extending the METEOR machine translation evaluation metric to the phrase level. In *Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational* Linguistics, pages 250–253, Los Angeles, California. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from Wikipedia. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 1907–1917, Melbourne, Australia. Association for Computational Linguistics. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In *Proceedings of the 55th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342–1352, Vancouver, Canada. Association for Computational Linguistics. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. 
DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 866–874, Copenhagen, Denmark. Association for Computational Linguistics. Alexander Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022. QAFactEval: Improved QAbased factual consistency evaluation for summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2587–2601, Seattle, United States. Association for Computational Linguistics. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. *Transactions of the* Association for Computational Linguistics, 9:346– 361. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Wei Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In *2021 International Conference on Learning Representations*. Under review. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. Tsutomu Hirao, Manabu Okumura, Norihito Yasuda, and Hideki Isozaki. 2007. Supervised automatic evaluation for summarization with voted regression model. *Information Processing & Management*, 43(6):1521–1535. Text Summarization. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In *International Conference on Learning* Representations. Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. In Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering, pages 161– 175, Dublin, Ireland. Association for Computational Linguistics. Tom Hosking and Sebastian Riedel. 2019. Evaluating rewards for question generation models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2278–2283, Minneapolis, Minnesota. Association for Computational Linguistics. J. Edward Hu, Abhinav Singh, Nils Holzenberger, Matt Post, and Benjamin Van Durme. 2019. Large-scale, diverse, paraphrastic bitexts via sampling and clustering. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 44–54, Hong Kong, China. Association for Computational Linguistics. Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2391–2401, Hong Kong, China. Association for Computational Linguistics. 
Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. PubMedQA: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567– 2577, Hong Kong, China. Association for Computational Linguistics. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, André F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In *Proceedings of* ACL 2018, System Demonstrations, pages 116–121, Melbourne, Australia. Association for Computational Linguistics. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 252–262, New Orleans, Louisiana. Association for Computational Linguistics. Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. 2022. Unifiedqa-v2: Stronger generalization via broader cross-format training. Tushar Khot, Peter Clark, Michal Guerquin, Peter Alexander Jansen, and Ashish Sabharwal. 2020. Qasc: A dataset for question answering via sentence composition. *ArXiv*, abs/1910.11473. Tomáš Kociský, Jonathan Schwarz, Phil Blunsom, Chris ˇ Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. *Transactions of the Association for Computational Linguistics*, 6:317–328. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466. Philippe Laban, Chien-Sheng Wu, Lidiya Murakhovs'ka, Wenhao Liu, and Caiming Xiong. 2022. Quiz design task: Helping teachers create quizzes with automated question generation. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 102–111, Seattle, United States. Association for Computational Linguistics. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785– 794, Copenhagen, Denmark. Association for Computational Linguistics. Hwanhee Lee, Thomas Scialom, Seunghyun Yoon, Franck Dernoncourt, and Kyomin Jung. 2021. Qace: Asking questions to evaluate an image caption. *arXiv* preprint arXiv:2108.12560. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Yichan Liang, Jianheng Li, and Jian Yin. 2019. 
A new multi-choice reading comprehension dataset for curriculum learning. In *Proceedings of The Eleventh* Asian Conference on Machine Learning, volume 101 of *Proceedings of Machine Learning Research*, pages 742–757. PMLR. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gardner. 2019. Reasoning over paragraph effects in situations. *ArXiv*, abs/1908.05852. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381–2391, Brussels, Belgium. Association for Computational Linguistics. George A. Miller. 1995. Wordnet: A lexical database for english. *Commun. ACM*, 38(11):39–41. Alireza Mohammadshahi and James Henderson. 2020. Graph-to-graph transformer for transition-based dependency parsing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3278–3289, Online. Association for Computational Linguistics. Alireza Mohammadshahi and James Henderson. 2021a. Recursive Non-Autoregressive Graph-toGraph Transformer for Dependency Parsing with Iterative Refinement. *Transactions of the Association* for Computational Linguistics, 9:120–138. Alireza Mohammadshahi and James Henderson. 2021b. Syntax-aware graph-to-graph transformer for semantic role labelling. Alireza Mohammadshahi, Rémi Lebret, and Karl Aberer. 2019. Aligning multilingual word embeddings for cross-modal retrieval task. In *Proceedings* of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN), pages 11–17, Hong Kong, China. Association for Computational Linguistics. Alireza Mohammadshahi, Vassilina Nikoulina, Alexandre Berard, Caroline Brun, James Henderson, and Laurent Besacier. 2022a. SMaLL-100: Introducing shallow multilingual machine translation model for low-resource languages. In *Proceedings of the* 2022 Conference on Empirical Methods in Natural Language Processing, pages 8348–8359, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Alireza Mohammadshahi, Vassilina Nikoulina, Alexandre Berard, Caroline Brun, James Henderson, and Laurent Besacier. 2022b. What do compressed multilingual machine translation models forget? In *Findings of the Association for Computational Linguistics:* EMNLP 2022, pages 4308–4329, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Lidiya Murakhovs'ka, Chien-Sheng Wu, Philippe Laban, Tong Niu, Wenhao Liu, and Caiming Xiong. 2022. MixQG: Neural question generation with mixed answer types. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1486–1497, Seattle, United States. Association for Computational Linguistics. Preksha Nema and Mitesh M. Khapra. 2018. Towards a better metric for evaluating question generation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3950–3959, Brussels, Belgium. Association for Computational Linguistics. Simon Ostermann, Ashutosh Modi, Michael Roth, Stefan Thater, and Manfred Pinkal. 2018. MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge. 
*arXiv e-prints*, page arXiv:1803.05223. Simon Ostermann, Michael Roth, and Manfred Pinkal. 2019. MCScript2.0: A machine comprehension corpus focused on script events and participants. In *Proceedings of the Eighth Joint Conference on Lexical* and Computational Semantics (*SEM 2019), pages 103–117, Minneapolis, Minnesota. Association for Computational Linguistics. Liangming Pan, Yuxi Xie, Yansong Feng, Tat-Seng Chua, and Min-Yen Kan. 2020. Semantic graphs for generating deep questions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1463–1475, Online. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Maxime Peyrard, Teresa Botschen, and Iryna Gurevych. 2017. Learning to score system summaries for better content selection evaluation. In *Proceedings of* the Workshop on New Frontiers in Summarization, pages 74–84, Copenhagen, Denmark. Association for Computational Linguistics. Maja Popovic. 2015. ´ chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Raul Puri, Ryan Spring, Mohammad Shoeybi, Mostofa Patwary, and Bryan Catanzaro. 2020. Training question answering models from synthetic data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5811–5826, Online. Association for Computational Linguistics. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020a. Stanza: A Python natural language processing toolkit for many human languages. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:* System Demonstrations. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020b. Stanza: A python natural language processing toolkit for many human languages. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:* System Demonstrations, pages 101–108, Online. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. In *OpenAI blog*. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. 
Clement Rebuffel, Thomas Scialom, Laure Soulier, Benjamin Piwowarski, Sylvain Lamprier, Jacopo Staiano, Geoffrey Scoutheeten, and Patrick Gallinari. 2021. Data-QuestEval: A referenceless metric for data-to-text semantic evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8029–8036, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 193–203, Seattle, Washington, USA. Association for Computational Linguistics. Anna Rogers, Olga Kovaleva, Matthew Downey, and Anna Rumshisky. 2020. Getting closer to ai complete question answering: A set of prerequisite real tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8722–8731. Vasile Rus, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Christian Moldovan. 2010. The first question generation shared task evaluation challenge. In Proceedings of the 6th International Natural Language Generation Conference. Association for Computational Linguistics. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. Commun. ACM, 64(9):99–106. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463– 4473, Hong Kong, China. Association for Computational Linguistics. Thomas Scialom, Paul-Alexis Dray, Patrick Gallinari, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, and Alex Wang. 2021. Questeval: Summarization asks for fact-based evaluation. arXiv preprint arXiv:2103.12693. Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019a. Answers unite! unsupervised metrics for reinforced summarization models. *arXiv preprint arXiv:1909.01610*. Thomas Scialom, Benjamin Piwowarski, and Jacopo Staiano. 2019b. Self-attention architectures for answer-agnostic neural question generation. In *Proceedings of the 57th annual meeting of the Association for Computational Linguistics*, pages 6027– 6032. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Miloš Stanojevic and Khalil Sima'an. 2014. ´ BEER: BEtter evaluation as ranking. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 414–419, Baltimore, Maryland, USA. Association for Computational Linguistics. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. Dream: A challenge data set and models for dialogue-based reading comprehension. *Transactions of the Association for Computational Linguistics*, 7:217–231. 
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics. Brian Thompson and Matt Post. 2020. Automatic machine translation evaluation in many languages via zero-shot paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 90–121, Online. Association for Computational Linguistics. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. Asahi Ushio, Fernando Alva-Manchego, and Jose Camacho-Collados. 2022. Generative language models for paragraph-level question generation. Asahi Ushio, Fernando Alva-Manchego, and Jose Camacho-Collados. 2023a. An empirical comparison of lm-based question and answer generation methods. In *Proceedings of the 61th Annual Meeting of the* Association for Computational Linguistics, Toronto, Canada. Association for Computational Linguistics. Asahi Ushio, Fernando Alva-Manchego, and Jose Camacho-Collados. 2023b. A practical toolkit for multilingual question and answer generation, acl 2022, system demonstration. In *Proceedings of the* 61th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Toronto, Canada. Association for Computational Linguistics. David Vilares and Carlos Gómez-Rodríguez. 2019. HEAD-QA: A healthcare dataset for complex reasoning. In *Proceedings of the 57th Annual Meeting of* the Association for Computational Linguistics, pages 960–966, Florence, Italy. Association for Computational Linguistics. Xiaoqiang Wang, Bang Liu, Siliang Tang, and Lingfei Wu. 2022. Qrelscore: Better evaluating generated questions with deeper understanding of contextaware relevance. Chien-Sheng Wu, Andrea Madotto, Wenhao Liu, Pascale Fung, and Caiming Xiong. 2022. QAConv: Question answering on informative conversations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5389–5411, Dublin, Ireland. Association for Computational Linguistics. Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2019. TWEETQA: A social media focused question answering dataset. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5020– 5031, Florence, Italy. Association for Computational Linguistics. Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng. 2020. Reclor: A reading comprehension dataset requiring logical reasoning. In *International Conference on Learning Representations*. Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. In Advances in Neural Information Processing Systems, volume 34, pages 27263–27277. Curran Associates, Inc. Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. Record: Bridging the gap between human and machine commonsense reading comprehension. Shiyue Zhang and Mohit Bansal. 2019. Addressing semantic drift in question generation for semisupervised question answering. 
In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2495–2509, Hong Kong, China. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563–578, Hong Kong, China. Association for Computational Linguistics. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. ## Appendix A Evaluated Datsets Of Unifiedqav2 Model UnifiedQAv2 is evaluated on SQuAD (v1) (Rajpurkar et al., 2016), SQuAD (v2) (Rajpurkar et al., 2018), NewsQA (Trischler et al., 2016), Quoref (Dasigi et al., 2019), ROPES (Lin et al., 2019), NarrativeQA (Kociský et al. ˇ , 2018), DROP (Dua et al., 2019), NaturalQuestions (Kwiatkowski et al., 2019), MCTest (Richardson et al., 2013), RACE (Lai et al., 2017), OpenBookQA (Mihaylov et al., 2018), ARC (Clark et al., 2018), CommonsenseQA (Talmor et al., 2019), QASC (Khot et al., 2020), PhysicalIQA (Bisk et al., 2019), SocialIQA (Sap et al., 2019), Winogrande (Sakaguchi et al., 2021), BoolQ (Clark et al., 2019), MultiRC (yes/no) (Khashabi et al., 2018), and BoolQ-NP as *in-domain* datasets. Additionally, it is evaluation on AdversarialQA (Bartolo et al., 2020), ReCoRD (Zhang et al., 2018), RACE-C (Liang et al., 2019), HeadQA (Vilares and Gómez-Rodríguez, 2019), MMMLU (Hendrycks et al., 2020), ReClor (Yu et al., 2020), Quail (Rogers et al., 2020), OneStopQA (Cui et al., 2021), MCScript (Ostermann et al., 2018), MCScript 2.0 (Ostermann et al., 2019), CosmosQA (Huang et al., 2019), DREAM (Sun et al., 2019), ProcessBank (Berant et al., 2014), PROST (Aroca-Ouellette et al., 2021), StrategyQA (Geva et al., 2021), PubmedQA (Jin et al., 2019), QAConv (Wu et al., 2022), and TweetQA (Xiong et al., 2019) as *out-of-domain* evaluation sets. ## Appendix B Implementation Details B.1 Details Of Evaluated Datasets We evaluate QG metrics on three datasets, SQuAD (v1) (Rajpurkar et al., 2016) (under CC BY-SA 4.0 license), Natural Questions (Kwiatkowski et al., 2019) (under Creative Commons Share-Alike 3.0 license), and MS-MARCO (Bonifacio et al., 2021) (fully open-source, no license) datasets. Table 4 illustrates the number of samples in training and evaluation sets. | Dataset | Training Data | Evaluation Data | |-------------------|-----------------|-------------------| | SQuAD | 86,821 | 5,928 | | Natural Questions | 104,071 | 12,836 | | MS-MARCO | 502,939 | 55,578 | Table 4: Number of instances for the training and evaluation sets of SQuAD, *short-form* of NQ, and DESCRIPTION types of MS-MARCO datasets. ## B.2 Hyper-Parameters For Fine-Tuning Qg Models All models are trained on NVIDIA A100-SXM4-40GB GPUs. T5 (Raffel et al., 2020) is under Apache License 2.0. GPT2 (Radford et al., 2019) is under modified MIT License. 
We use AdamW optimiser (Loshchilov and Hutter, 2019), used in several previous works (Mohammadshahi et al., 2019; Devlin et al., 2019; Mohammadshahi and Henderson, 2021b,a, 2020). | Hyper-parameter | Specification | Hyper-parameter | Specification | |--------------------|-----------------|-------------------|-----------------| | Architecture | T5-base(220M) | | | | No. Encoder Layers | 12 | | | | No. Decoder Layers | 12 | | | | No. Epochs | 15 | | | | Dropout | 0.1 | | | | Learning rate | 3e-5 | | | | Batch size | 32 | | | | No. GPUs | 8 | Architecture | GPT2(117M) | | No. Encoder Layers | 12 | | | | No. Epochs | 12 | | | | Dropout | 0.1 | | | | Learning rate | 3e-4 | | | | Batch size | 32 | | | | No. GPUs | 8 | | | | (b) GPT2 | | | | | (a) MixQG | | | | Table 5: Hyper-parameters for fine-tuning QG models on evaluated datasets. ## Appendix C Instruction Of Human Evaluation Annotators are asked to evaluate the quality of a question, given the context and answer span. An input example is shown in Figure 7. They should provide 3 scores for grammaticality, answerability, and relevance. For grammar, the syntactic structure of the sentence is evaluated. They should score 3 for "no grammatical errors", 2 for "not grammatically acceptable but able to get the meaning", and 1 for "unacceptable" questions. For answerability, the score should express the completeness of the candidate question and its consistency with the given answer. So, annotators are required to consider two criteria while scoring; the question should contain question words (e.g. wh-words) and necessary entities, and it should not include the answer. They should score 3, if the question contains all important information, and is consistent with the answer. Score 2 is for the cases, in which some important information is missing in the question or it contains the answer. They should score 1 if all important information is missing in the question and the question is not consistent with the answer. For relevance, annotators should score the relatedness of the question to the answer, given the context. They should score 2 if the question is answerable by the context and related to the given answer. They should score 1, if the question is out-of-context, or not related to the given answer. We sample 600 examples (200 for each dataset) from the evaluation sets of SQuAD (v1), NQ, and MS-MARCO. Samples are shuffled and anonymized. All annotators are fluent English speakers. | Context: The Rhine is the longest river in Germany. It is here that the Rhine encounters some more of its main tributaries, such as the Neckar, the Main and, later, the Moselle, which contributes an average discharge of more than 300 m3/s (11,000 cu ft/s). Northeastern France drains to the Rhine via the Moselle; smaller rivers drain the Vosges and Jura Mountains uplands. Most of Luxembourg and a very small part of Belgium also drain to the Rhine via the Moselle. As it approaches the Dutch border, the Rhine has an annual mean discharge of 2,290 m3/s (81,000 cu ft/s) and an average width of 400 m (1,300 ft). Question What is the average discharge of the Moselle? 
Grammaticality(1-3): Answerability(1-3): Relevance(1-2): | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Figure 7: The input example of the human evaluation. ## Appendix D Correlation With Human Evaluation | Metric | Answerability | Relevance | | | | | |----------------------------|-----------------|-------------|-------|-------|-------|-------| | r | ρ | τ | r | ρ | τ | | | Unsupervised BLEU-4 | 0.256 | 0.291 | 0.224 | 0.213 | 0.207 | 0.168 | | ROUGE-1 | 0.317 | 0.292 | 0.230 | 0.303 | 0.281 | 0.231 | | ROUGE-L | 0.345 | 0.332 | 0.263 | 0.312 | 0.286 | 0.235 | | METEOR | 0.337 | 0.316 | 0.249 | 0.308 | 0.291 | 0.237 | | QBLEU | 0.296 | 0.300 | 0.238 | 0.272 | 0.274 | 0.223 | | MOVERScore | 0.296 | 0.317 | 0.248 | 0.258 | 0.270 | 0.218 | | BERTScore | 0.344 | 0.343 | 0.266 | 0.306 | 0.288 | 0.233 | | Regression-based BLEURT-20 | 0.340 | 0.311 | 0.242 | 0.326 | 0.306 | 0.247 | | Ranking-based COMET | 0.355 | 0.359 | 0.279 | 0.271 | 0.276 | 0.222 | | Generation-based BARTScore | 0.391 | 0.383 | 0.300 | 0.378 | 0.334 | 0.270 | | Ref-Free CTC | 0.236 | 0.130 | 0.099 | 0.271 | 0.189 | 0.152 | | QRelScore | 0.332 | 0.276 | 0.212 | 0.262 | 0.213 | 0.175 | | RQUGE | 0.688 | 0.388 | 0.303 | 0.588 | 0.404 | 0.327 | Table 6: Correlation of human judgment and evaluation metrics based on Pearson r, Spearman ρ, and Kendall τ correlation coefficients on SQuAD (v1) (Rajpurkar et al., 2016) dataset. 6861 | Metric | Answerability | Relevance | | | | | |----------------------------|-----------------|-------------|-------|-------|-------|-------| | r | ρ | τ | r | ρ | τ | | | Unsupervised BLEU-4 | 0.405 | 0.494 | 0.398 | 0.393 | 0.467 | 0.380 | | ROUGE-1 | 0.533 | 0.530 | 0.430 | 0.517 | 0.510 | 0.418 | | ROUGE-L | 0.514 | 0.523 | 0.425 | 0.491 | 0.491 | 0.403 | | METEOR | 0.533 | 0.535 | 0.430 | 0.513 | 0.505 | 0.411 | | QBLEU | 0.502 | 0.524 | 0.417 | 0.500 | 0.497 | 0.405 | | MOVERScore | 0.434 | 0.482 | 0.385 | 0.419 | 0.480 | 0.391 | | BERTScore | 0.480 | 0.490 | 0.398 | 0.488 | 0.495 | 0.406 | | Regression-based BLEURT-20 | 0.501 | 0.506 | 0.408 | 0.488 | 0.509 | 0.413 | | Ranking-based COMET | 0.405 | 0.393 | 0.310 | 0.397 | 0.403 | 0.325 | | Generation-based BARTScore | 0.421 | 0.439 | 0.351 | 0.419 | 0.437 | 0.358 | | Ref-Free CTC | 0.270 | 0.266 | 0.208 | 0.270 | 0.254 | 0.207 | | QRelScore | 0.415 | 0.309 | 0.276 | 0.394 | 0.292 | 0.264 | | RQUGE | 0.781 | 0.564 | 0.446 | 0.783 | 0.592 | 0.476 | Table 7: Correlation of human judgment and evaluation metrics based on Pearson r, Spearman ρ, and Kendall τ correlation coefficients on Natural Questions (Kwiatkowski et al., 2019) dataset. 
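The agreement scores reported in these tables (Pearson r, Spearman ρ, Kendall τ) can be reproduced with standard statistics libraries. The following is a minimal sketch, assuming `human` and `metric` are parallel per-sample lists of human ratings and automatic metric scores (hypothetical variable names, not taken from any released code):

```python
from scipy.stats import kendalltau, pearsonr, spearmanr

def correlate(human, metric):
    """Pearson r, Spearman rho and Kendall tau between human ratings
    and automatic metric scores for the same samples."""
    r, _ = pearsonr(human, metric)
    rho, _ = spearmanr(human, metric)
    tau, _ = kendalltau(human, metric)
    return r, rho, tau

# Toy example: answerability ratings (1-3) vs. a metric's scores (1-5).
human = [3, 2, 3, 1, 2, 3]
metric = [4.8, 3.1, 4.5, 1.9, 2.7, 4.9]
print(correlate(human, metric))
```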
Metric Answerability Relevance r ρ τ r ρ τ Unsupervised BLEU-4 0.222 0.272 0.211 0.096 0.109 0.089 ROUGE-1 0.107 0.086 0.070 0.174 0.199 0.165 ROUGE-L 0.146 0.131 0.106 0.180 0.200 0.165 METEOR 0.168 0.167 0.131 0.167 0.181 0.145 QBLEU 0.138 0.128 0.101 0.134 0.134 0.108 MOVERScore 0.200 0.217 0.168 0.197 0.206 0.167 BERTScore 0.201 0.205 0.159 0.153 0.165 0.134 Regression-based BLEURT-20 0.246 0.255 0.202 0.275 0.280 0.229 Ranking-based COMET 0.209 0.229 0.181 0.244 0.261 0.213 Generation-based BARTScore 0.221 0.236 0.184 0.181 0.207 0.168 Ref-Free CTC **0.401 0.376** 0.290 0.154 0.183 0.150 QRelScore 0.351 0.272 0.172 0.226 0.133 0.126 RQUGE 0.400 0.366 0.293 **0.397 0.356 0.288** ![18_image_0.png](18_image_0.png) ## Appendix E Re-Ranking With Rquge Appendix F Adversarial Evaluation Set As discussed in Section 5.3 , we create positive samples by two mechanisms: - Back-Translation. Translating the reference question to an intermediate language, then translating it back to English. We apply Marian model (Junczys-Dowmunt et al., 2018), and use Chinese and French as intermediate languages, as Marian model has reasonable performance for these language directions. - Quora Paraphrasing. We first train a T5-small (Raffel et al., 2020) model on Quora paraphrasing dataset, 25 and use it for paraphrasing the reference question. Outputs of both methods are questions that are semantically similar to the reference questions with a few lexical differences. For the negative samples, as shown in Table 2 , we apply the following methods: - Negation. We first scan the reference question to find auxiliary and modal verbs. Then, we randomly either add not to the sentence or replace the verb with its antonyms by using WordNet (Miller, 1995) inside the NLTK package (Bird et al., 2009). - Reverse Gender. The reference question is first scanned to find pronouns, and then pronouns are replaced with pronouns with the opposite gender. - **Swap Entity.** Stanza (Qi et al., 2020b) named-entity recognition model is applied to the reference question and the context. Then, we randomly select one extracted entity of the reference question and replace it with a random entity of the context with the same entity type. ## Appendix G Implementation Details Of Fine-Tuning Qa Models All models are trained on NVIDIA A100-SXM4-40GB GPUs. | Hyper-parameter | Specification | |--------------------|-----------------| | Architecture | T5-small | | No. Encoder Layers | 6 | | No. Decoder Layers | 6 | | No. Training Steps | 2K | | Dropout | 0.1 | | Learning rate | 3e-5 | | Batch size | 32 | | No. GPUs | 8 | Table 9: Hyper-parameters for fine-tuning QA models on the synthetic data of MS-MARCO. ## Appendix H Error Analysis We investigate on cases, in which there is a substantial difference between the human evaluation and RQUGE score. The errors are categorised into syntactic and knowledge-based types, as shown in Table 10. For the syntactic error, RQUGE sometimes computes unacceptable scores for sentences that either miss the question word (e.g. wh-words) or have wrong word order, as QA module of RQUGE focuses more on the semantic aspect of the candidate question to predict the answer span. For the knowledge-based mistakes, RQUGE requires further domain specific and commonsense knowledge to compute the correct score e.g. full moon is instant, not period of time as illustrated in the sample of Table 10. 
As shown in Table 11, RQUGE computes wrong values for some samples in "reversing gender" and "swapping entities" categories of evaluation set in Section 5.3. These errors shows the limitations of RQUGE metric, and lead the future work to apply larger and better QA and span scorer modules. | Error Type | Question | Answer & Context | RQUGE | Avg Human | |-----------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------| | Syntactic (1) | cost of wooden shutters | Exterior window shutters cover ... Typical costs: Wooden or vinyl exterior window shutters in stock sizes cost $20-$200 per pair of panels. | 4.81/5 | grammaticality:1.66/3 | | (2) | SAT solvers routinely | ... Similarly, | | | | handle large instances | algorithms can solve | | | | | of what? | the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem. | 4.76/5 | grammaticality:2/3 | | | Knowledge-based | how long is a full moon | A full lunar cycle lasts almost a month (about 29.5 days), and ... However, a full moon, a new moon, and a half moon (first and third quarter) are instants, not periods of time. | 4.95/5 | answerability:1/3, relevance:1/3 | | Table 10: Different categories of errors that RQUGE metric computes wrong scores. | | | | | | Corruption Type | Ref Question | Corrupted Question | Answer & Context | RQUGE | | Reversing gender | In what year was the | In what year was the | In 1929, the university's | | | university's 5th president | university's 5th president | fifth president, Robert | | | | granted his position? | granted hers position? | Maynard Hutchins, took office; the university underwent many changes during his 24-year tenure... | 2.25/5 | | | Swapping entities | The Kuznets curve says with economic development, inequality will decrease after what? | The Piketty curve says with economic development, inequality will decrease after what? | Studies on income inequality and growth have sometimes found evidence confirming the Kuznets curve hypothesis, which states that with economic development, inequality first increases, then decreases. Economist Thomas Piketty challenges this notion... | 4.65/5 | Table 11: Some samples from the adversarial subset of Section 5.3, that RQUGE metric is not sensitive to the corruption. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitation section and Appendix H. A2. Did you discuss any potential risks of your work? Not applicable. We provide an evaluation metric for QG task. limitations are provided in Conlusion and Limitations sections. ✓ A3. Do the abstract and introduction summarize the paper's main claims? We include our contributions in both abstract and introduction sections. 
Specifically, we provide them at the end of introduction section. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** For the model, we use T5 and GPT2 in our paper (section 3 and section 5). For the metric, we use different automatic evaluation metric (sections 2,4,5). For the dataset, we use SQuAD, NQ, and MS-MARCO (sections 4 and 5). We also create the human annotation for the evaluation (sections 4 and 5, Appendix C). ✓ B1. Did you cite the creators of artifacts you used? Sections 4, 2, 3.1 and 3.2. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendices B.1 and B.2 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Since our annotations are created from SQuAD, NQ, and MS-MARCO datasets, we use the same license for the distribution of our human annotations (Appendix C). For pre-trained models, we use publicly available models (Appendix B.2). B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. We use SQuAD, NQ, and MS-MARCO datasets, which organizers already checked these concerns before making them available. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendices B.1, C, and F. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendices B.1, C, and F. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** Section 5. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendices B.2 and G. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendices B.2 and G. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. For the evaluation of metrics, we use pearson, spearman, and kendall metrics to find the correlation with the human judgment. For QA experiment, we run F1 and EM metrics for the evaluation. In both cases, there were a significant different between our model and previous works. For both QG and QA models, results are single-run, as mentioned in the paper. ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 and Appendix F. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** We Collect The Data For The Evaluation Of Question Generation Metrics: Section 4 And Appendix C. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? The annotators were volunteer fluent English speaking students (mentioned in Appendix C), and we created our internal website to get the annotations. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. We inform them that the data will be used for the evaluation of question generation task (Appendix C). ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? We use the same protocol as previous work (which was approved) in the question generation task. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
aida-bollegala-2023-unsupervised
Unsupervised Semantic Variation Prediction using the Distribution of Sibling Embeddings
https://aclanthology.org/2023.findings-acl.429
Languages are dynamic entities, where the meanings associated with words constantly change with time. Detecting the semantic variation of words is an important task for various NLP applications that must make time-sensitive predictions. Existing work on semantic variation prediction has predominantly focused on comparing some form of an averaged contextualised representation of a target word computed from a given corpus. However, some of the previously associated meanings of a target word can become obsolete over time (e.g. the meaning of gay as happy), while novel usages of existing words are observed (e.g. the meaning of cell as a mobile phone). We argue that mean representations alone cannot accurately capture such semantic variations and propose a method that uses the entire cohort of the contextualised embeddings of the target word, which we refer to as the sibling distribution. Experimental results on the SemEval-2020 Task 1 benchmark dataset for semantic variation prediction show that our method outperforms prior work that considers only the mean embeddings, and is comparable to the current state-of-the-art. Moreover, a qualitative analysis shows that our method detects important semantic changes in words that are not captured by the existing methods.
# Unsupervised Semantic Variation Prediction Using The Distribution Of Sibling Embeddings Taichi Aida Tokyo Metropolitan University aida-taichi@ed.tmu.ac.jp Danushka Bollegala Amazon, University of Liverpool danushka@liverpool.ac.uk ## Abstract Languages are dynamic entities, where the meanings associated with words constantly change with time. Detecting the semantic variation of words is an important task for various NLP applications that must make time-sensitive predictions. Existing work on semantic variation prediction have predominantly focused on comparing some form of an averaged contextualised representation of a target word computed from a given corpus. However, some of the previously associated meanings of a target word can become obsolete over time (e.g. meaning of gay as *happy*), while novel usages of existing words are observed (e.g. meaning of *cell* as a mobile phone). We argue that mean representations alone cannot accurately capture such semantic variations and propose a method that uses the entire cohort of the contextualised embeddings of the target word, which we refer to as the *sibling distribution*. Experimental results on SemEval-2020 Task 1 benchmark dataset for semantic variation prediction show that our method outperforms prior work that consider only the mean embeddings, and is comparable to the current state-of-the-art. Moreover, a qualitative analysis shows that our method detects important semantic changes in words that are not captured by the existing methods. 1 ## 1 Introduction The meaning of words evolves over time, and even in everyday life, technological innovations and cultural aspects can cause a word to have a different meaning than in the past. For example, the meaning of the word gay has completely changed from happy to *homosexual* (Figure 1a), and *cell* has added *cell phone* to its previous meanings of *prison* and *biology* (Figure 1b). In the semantic change detection task, the goal is to detect the words whose meanings have changed across time-specific corpora (Kutuzov et al., 2018; Tahmasebi et al., 2021). 1Source code is available at https://github.com/ a1da4/svp-gauss . ![0_image_0.png](0_image_0.png) As illustrated in Figure 1, we can identify two types of semantic changes associated with words - (a) the word gay obtains a new meaning by **replacing** its past meaning (Figure 1a), whereas (b) the word *cell* obtains a new meaning, while **preserving** its past meanings (Figure 1b). On the other hand, much prior work have resort to a scheme where they first individually represent the meaning of a target word in a given time-specific corpora using a single embedding, such as the mean of the non-contextualised (Kim et al., 2014; Kulkarni et al., 2015; Hamilton et al., 2016; Yao et al., 2018; Dubossarsky et al., 2019; Aida et al., 2021) or contextualised (Martinc et al., 2020; Beck, 2020; Kutuzov and Giulianelli, 2020; Rosin et al., 2022; Rosin and Radinsky, 2022) embeddings of the target word taken over all of its occurring contexts in the corpus. Next, various distance measures are used to compare those embeddings to quantify the semantic variation of the target word across corpora. However, as seen from Figure 1, using the mean embedding of a target word alone for predicting semantic variations of words can be misleading especially when the variance of the embedding distribution is large. 
To address the above-mentioned limitations, we use the distribution of contextualised embeddings of a target word w in all of its occurrence contexts S(w) in a given corpus, which we refer to as the sibling distribution (Zhou et al., 2022) of w. We then approximate the sibling distribution of a word using a multivariate Gaussian, which has shown to accurately capture the uncertainty in word embedding spaces (Vilnis and McCallum, 2015; Iwamoto and Yukawa, 2020; Yuksel et al. ¨ , 2021). We can then use a broad range of distance and divergence measures defined over Gaussian distributions to quantify the semantic variation of a target word across multiple time-specific corpora. Experimental results on SemEval-2020 Task 1 benchmark dataset show that our proposed method outperforms several prior methods, and achieves comparable performance to the current state-ofthe-art (SoTA) (Rosin and Radinsky, 2022). More importantly, our proposal to model both the mean and variance of sibling embeddings consistently outperforms methods that use only the mean contextualised embedding from the same Masked Language Model (MLM) (Rosin and Radinsky, 2022). Moreover, for computational convenience, prior work had assumed the covariance matrix of sibling embeddings to be diagonal (Iwamoto and Yukawa, 2020; Yuksel et al. ¨ , 2021), but we show that further performance improvements can be obtained by using the full covariance matrix. ## 2 Related Work Historically, the diachronic semantic changes of words have been studied by linguists (Tahmasebi et al., 2021), which has also received much attention lately within the NLP community. Automatic detection of words whose meanings change over time has provided important insights for diverse fields such as linguistics, lexicology, sociology, and information retrieval (IR) (Traugott and Dasher, 2001; Cook and Stevenson, 2010; Michel et al., 2011; Kutuzov et al., 2018). For example, in IR one must know the seasonal association of keywords used in user queries to provide relevant results pertaining to a particular time period. Moreover, it has been shown that the performance of publicly available pretrained foundation models (Bommasani et al., 2021) declines over time when applied to emerging data (Loureiro et al., 2022; Lazaridou et al., 2021) because they are trained using a static snapshot. Su et al. (2022) showed that the temporal generalisation of foundation models is closely related to their ability to detect semantic variations of words. Semantic change detection is modelled in the literature as an unsupervised task of detecting words whose meanings change between two given timespecific corpora (Kutuzov et al., 2018; Tahmasebi et al., 2021). In recent years, several shared tasks have been held (Schlechtweg et al., 2020; Basile et al., 2020; Kutuzov and Pivovarova, 2021), where participants are required to predict the degree or presence of semantic changes for a given target word between two given corpora sampled from different time periods. For this purpose, much prior work have used non-contextualised or contextualised word embeddings to represent the meaning of the target word in each corpus. Unlike noncontextualised word embeddings, which represent a word by the same vector in all of its contexts, contextualised word embeddings represent the same target word with different vectors in different contexts. 
Various methods have been proposed to map vector spaces from different time periods, such as initialisation (Kim et al., 2014), alignment (Kulkarni et al., 2015; Hamilton et al., 2016), and joint learning (Yao et al., 2018; Dubossarsky et al., 2019; Aida et al., 2021). The existing methods that have been proposed for the semantic variation detection of words can be broadly categorised into two groups: (a) methods that compare word/context clusters (Hu et al., 2019; Giulianelli et al., 2020; Montariol et al., 2021), and (b) methods that compare embeddings of the target words computed from different corpora sampled at different time periods (Martinc et al., 2020; Beck, 2020; Kutuzov and Giulianelli, 2020; Rosin et al., 2022). Recently, it has been reported that adding time-specific attention mechanisms (Rosin and Radinsky, 2022) achieves SoTA performance. However, this model requires additional training of the entire MLM including the time-specific mechanisms, which is computationally costly for largescale MLMs. Despite the recent success of using word embeddings for the semantic change detection task, many of these methods struggle to detect meaning changes of words which have a wide range of usages because they use only the mean embedding to represent a target word (Kutuzov et al., 2022). Although methods that use point estimates in the embedding space, such as using non-contextualised word embeddings or comparing the average of contextualised word embeddings, are able to detect semantic variations that result in a loss of a prior meaning (e.g. gay in Figure 1a), they are inadequate when detecting semantic variations due to novel usages of words, while preserving their former meanings (e.g. *cell* in Figure 1b). To alleviate this problem, some studies have used Gaussian Embeddings (Vilnis and McCallum, 2015) for semantic change detection (Iwamoto and Yukawa, 2020; Yuksel et al. ¨ , 2021). They used the mean and the diagonal approximation of the covariance matrix computed using non-contextualised word embeddings. However, as argued previously, contextualised embeddings provide useful clues regarding the meaning of a word as used in a context. Therefore, in our proposed method, we consider the entire cohort of contextualised word embeddings of a target word taken across all of its occurring contexts (i.e. siblings) obtained from an MLM. As confirmed later by the evaluations presented in § 4.4, our proposed method consistently outperforms the methods proposed by Iwamoto and Yukawa (2020) and Yuksel et al. ¨ (2021) that use non-contextualised embeddings. ## 3 Semantic Variation Prediction Let us consider a target word w that occurs in two given corpora C1 and C2. For example, C1 and C2 could have been sampled at two distinct time slots, respectively T1 and T2, reflecting any *temporal* semantic variations of words, or alternatively sampled at similar periods in time but from distinct domains (e.g. *biology* vs. law) expressing semantic variations of words due to the differences in the domains. Our goal in this paper is to propose a method that can accurately predict whether w is used in the same meaning in both C1 and C2 (i.e. w is semantically invariant across the two corpora) or otherwise (i.e. its meaning is different in the two corpora). Although we consider two corpora in the subsequent description for simplicity of the disposition, our proposed method can be easily extended to measure the semantic variation of a word over multiple corpora. 
According to the distributional hypothesis (Firth, 1957), the context in which a word occurs provides useful clues regarding its meaning. Contextualised word embeddings such as the ones produced by MLMs have been shown to concisely and accurately encode contextual information related to a target word in a given context. For example, Zhou and Bollegala (2021) showed that contextualised word embeddings can be used to induce word-sense embeddings that represent the distinct senses of an ambiguous word with different vectors. Inspired by such prior work using contextualised word embeddings as a proxy for accessing contextual information related to a target word, we propose a method to detect the semantic variations of a target word using its multiple occurrences in a corpus.

To describe our proposed method in detail, let us denote the set of contexts containing w in corpus $C_i$ by $\mathcal{S}(w, C_i)$. The scope of the context of w could be limited to a predefined fixed token window or extended to the entire sentence containing w, as we do in our experiments. Let us denote the contextualised (token) embedding of w in a context $s \in \mathcal{S}(w, C_i)$ produced by an MLM M by $f_M(w, s) \in \mathbb{R}^d$, where d is the dimensionality of the token embeddings produced by M. Following the terminology introduced by Zhou et al. (2022), we refer to the token embedding $f_M(w, s)$ as the *sibling* embedding of w in context s. The number of siblings of w in $C_i$ is denoted by $N_i^w = |\mathcal{S}(w, C_i)|$. Moreover, let the set of sibling embeddings of w created from its occurrences in $C_i$ be $\mathcal{D}(w, C_i) = \{f_M(w, s) \mid s \in \mathcal{S}(w, C_i)\}$. As we later see, the distribution of sibling embeddings of a word w encodes information about the usage of w in a corpus, which is useful for predicting any semantic variations of w across different corpora.

We can obtain a context-independent embedding $\mu_i^w \in \mathbb{R}^d$ for w by averaging all of its sibling embeddings over the contexts, as given by (1).

$$\mu_{i}^{w}=\frac{1}{N_{i}^{w}}\sum_{s\in\mathcal{S}(w,C_{i})}f_{M}(w,s)\qquad(1)$$

Although much prior work has used $\mu_i^w$ as a proxy for the usage of w in $C_i$ for numerous tasks, such as studying the properties of contextualised embeddings (Ethayarajh, 2019) and predicting semantic variation of words (Martinc et al., 2020; Beck, 2020; Kutuzov and Giulianelli, 2020; Rosin et al., 2022; Rosin and Radinsky, 2022), the mean of the sibling embedding distribution is insensitive to the rare yet important usages of the target word. In particular, when the sibling embedding distribution is not uniformly distributed around its mean, the mean embedding can be misleading as a representation of the distribution. To overcome this limitation, in addition to $\mu_i^w$, we also use the covariance matrix $\mathbf{V}_i^w \in \mathbb{R}^{d\times d}$ computed from the sibling embedding distribution of w, as defined by (2).

$$\mathbf{V}_{i}^{w}={\frac{1}{N_{i}^{w}(N_{i}^{w}-1)}}\sum_{s\in{\mathcal{S}}(w,C_{i})}\mathbf{f}_{M}(w,s)\mathbf{f}_{M}(w,s)^{\top}\qquad(2)$$

We approximate the distribution of sibling embeddings of w using a Gaussian, $\mathcal{N}(\mu_i^w, \mathbf{V}_i^w)$, with mean and variance given respectively by (1) and (2). The Gaussian distribution is the maximum entropy distribution over the real values given a finite mean and covariance and no further information (Jaynes, 2003). Moreover, by approximating the sibling distribution as a Gaussian, we can use a broad range of distance and divergence measures for quantifying the semantic variation of w across corpora.
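As an illustration of how the sibling distribution can be estimated in practice, the following is a minimal sketch, not the authors' released implementation: it assumes a HuggingFace bert-base-uncased checkpoint and a simple exact sub-token matching heuristic (both are illustrative assumptions), collects the last-layer contextualised embeddings of a target word from its occurrence contexts, and computes the mean of Eq. (1) together with a full sample covariance matrix in the spirit of Eq. (2).

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def sibling_embeddings(word, contexts):
    """Collect last-layer contextualised embeddings of `word` from each
    context sentence in which it occurs (its sibling embeddings)."""
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    siblings = []
    for sentence in contexts:
        enc = tokenizer(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, d)
        ids = enc["input_ids"][0].tolist()
        # Exact sub-token match; mean-pool the sub-tokens of the target word.
        for i in range(len(ids) - len(word_ids) + 1):
            if ids[i:i + len(word_ids)] == word_ids:
                siblings.append(hidden[i:i + len(word_ids)].mean(dim=0).numpy())
    return np.stack(siblings)  # shape: (N_i^w, d)

def sibling_gaussian(siblings):
    """Mean vector (Eq. 1) and full sample covariance of the siblings."""
    mu = siblings.mean(axis=0)
    cov = np.cov(siblings, rowvar=False)  # d x d
    return mu, cov
```

In practice, sub-word tokenisation and inflected word forms require a more careful alignment between the target word and its sub-tokens than the exact-match heuristic used in this sketch.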
In the field of information theory, MLMs have been shown to store the information of a given sentence in a vector (Pimentel et al., 2020). There is a strong correlation between the word frequency $N_i^w$ and the rank of its covariance matrix $\mathbf{V}_i^w$ (Figure 2 in Appendix A), which indicates that the covariance matrix also retains important information regarding the sibling embedding distribution. This observation further supports our proposal to represent target words by $\mu_i^w$ and $\mathbf{V}_i^w$.

## 3.1 Quantifying Semantic Variations

Given a target word w, following the method described above, we represent w in $C_1$ and $C_2$ respectively by the two Gaussian distributions $\mathcal{N}(\mu_1^w, \mathbf{V}_1^w)$ and $\mathcal{N}(\mu_2^w, \mathbf{V}_2^w)$. We can then compute a *semantic variation score* for w that indicates how likely the meaning of w has changed from $C_1$ to $C_2$ by using different distance (or divergence) measures to quantify the differences between two Gaussians. For this purpose, we use two types of measures.

Divergence measures quantify the divergence between two distributions. We use two divergence measures in our experiments: Kullback-Leibler (KL) divergence and Jeffrey's divergence. Given that we approximate the sibling distribution of w in a corpus by a Gaussian, we can analytically compute both KL and Jeffrey's divergence measures using $\mu_1^w$, $\mu_2^w$, $\mathbf{V}_1^w$ and $\mathbf{V}_2^w$ in closed-form formulas (Appendix B).

Distance measures are defined between two points in the sibling embedding space. We use seven distance measures: Bray-Curtis, Canberra, Chebyshev, City Block, Correlation, Cosine, and Euclidean. The definitions of the distance measures used in this paper are provided in Appendix C. Given a distance measure $\psi(w_1, w_2)$ that takes two d-dimensional sibling embeddings of w, each computed from contexts selected respectively from $C_1$ and $C_2$, and returns a nonzero real number indicating the distance between $w_1$ and $w_2$, we compute the semantic variation score, $\mathrm{score}(w)$, of w between $C_1$ and $C_2$ as the average distance over all pairwise comparisons between the sibling embeddings, as given by (3).

$$\operatorname{score}(w)={\frac{1}{N_{1}^{w}N_{2}^{w}}}\sum_{\substack{w_{1}\in\mathcal{D}(w,C_{1})\\ w_{2}\in\mathcal{D}(w,C_{2})}}\psi(w_{1},w_{2})\qquad(3)$$

The number of occurrences of some target words w can be significantly different between $C_1$ and $C_2$, which can make the computation of (3) biased towards the corpus with more contexts for w. To overcome this issue, instead of using sibling embeddings of w computed from actual occurrence contexts of w, we sample equal numbers of sibling embeddings from $\mathcal{N}(\mu_1^w, \mathbf{V}_1^w)$ and $\mathcal{N}(\mu_2^w, \mathbf{V}_2^w)$. Samples can be drawn efficiently from a multidimensional Gaussian by first drawing samples from a standard normal distribution (i.e. with zero mean and unit variance) and subsequently applying an affine transformation parametrised by the $\mu_i^w$ and $\mathbf{V}_i^w$ of the associated sibling distribution (a sketch of this sampling-based scoring follows Table 1 below).

## 4 Experiments

## 4.1 Data And Metric

We use the SemEval-2020 Task 1 English dataset (Schlechtweg et al., 2020), which is licensed under a Creative Commons Attribution 4.0 International License, to evaluate the performance in detecting words whose meanings change between time periods. This task includes two subtasks, classification and ranking.

| Time Period | #Sentences | #Tokens | #Types |
|---------------|--------------|-----------|----------|
| 1810s–1860s | 254k | 6.5M | 87k |
| 1960s–2010s | 354k | 6.7M | 150k |

Table 1: Statistics of the SemEval-2020 Task 1 English dataset (Schlechtweg et al., 2020).
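The sketch below makes the sampling-based scoring of Section 3.1 concrete: it draws equal numbers of samples from the two sibling Gaussians and averages a pairwise distance over all cross-corpus pairs, following Eq. (3), and it also shows the closed-form KL divergence between two Gaussians used for the divergence measures. Function names and the ridge constant are illustrative assumptions rather than the authors' implementation, and `mu1, cov1` and `mu2, cov2` are taken to be the parameters estimated as in the earlier sketch.

```python
import numpy as np
from scipy.spatial.distance import cdist

def kl_gaussian(mu1, cov1, mu2, cov2, eps=1e-6):
    """Closed-form KL( N(mu1, cov1) || N(mu2, cov2) ) for d-dimensional
    Gaussians. A small ridge `eps` keeps the covariances invertible when
    they are estimated from few sibling embeddings."""
    d = mu1.shape[0]
    cov1 = cov1 + eps * np.eye(d)
    cov2 = cov2 + eps * np.eye(d)
    cov2_inv = np.linalg.inv(cov2)
    diff = mu2 - mu1
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    return 0.5 * (np.trace(cov2_inv @ cov1) + diff @ cov2_inv @ diff
                  - d + logdet2 - logdet1)

def semantic_variation_score(mu1, cov1, mu2, cov2,
                             n_samples=1000, metric="chebyshev", seed=0):
    """Average pairwise distance of Eq. (3), computed over equal numbers
    of samples drawn from the two sibling distributions."""
    rng = np.random.default_rng(seed)
    s1 = rng.multivariate_normal(mu1, cov1, size=n_samples)
    s2 = rng.multivariate_normal(mu2, cov2, size=n_samples)
    # cdist also supports "canberra", "cityblock", "braycurtis",
    # "correlation", "cosine" and "euclidean".
    return cdist(s1, s2, metric=metric).mean()
```

Target words can then be ranked by this score and compared against the gold ranking with Spearman's rank correlation, as in the evaluation described in this section.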
In the classification task, the words in the evaluation set must be classified as to whether they have semantically changed over time or otherwise. Classification accuracy is used as the evaluation metric for this task. On the other hand, in the ranking task, the words in the evaluation set must be sorted according to the degree of semantic change. Spearman's rank correlation coefficient between the human-rated gold scores and the induced ranking scores is used as the evaluation metric for this task. In this study, the evaluation is conducted on the ranking task using English data. We do not perform the classification task because no validation set is available for tuning a classification threshold. Statistics of the data used in our experiments are in Table 1. This data includes two corpora from different centuries extracted from CCOHA (Alatrash et al., 2020). Let us denote the early 1800s and late 1900s to early 2000s corpora respectively by C1 and C2. The test set has 37 target words that are selected for indicating whether they have undergone a semantic change between the two time periods. These words are annotated indicating whether their meaning has changed over time and the degree of their semantic change. ## 4.2 Setup We use two types of BERT-base models as the MLM in our experiments: a publicly available pretrained model3(MLMpre) and a fine-tuned model (MLM*temp*) from MLMpre (Rosin et al., 2022). The base model consists of 12 layers, which we use in two different configurations: (a) we use the last layer (MLMpre|*temp*,last), and (b) the mean-pool over the last four layers (MLMpre|*temp*,four), which has shown good performance across languages following Laicher et al. (2021). Rosin and Radinsky (2022) recommend using the mean pooling over all (12) hidden layers. However, we found no statistically significant differences between the meanpool over all layers vs. the last four layers in our preliminary experiments. In the prediction of the degree of semantic change for a given word, the set of sibling embeddings for each time period D(*w, C*1) and D(*w, C*2) is acquired from all occurrences in each corpus using the MLM described above, and the distributions across time periods N (µ w 1 , V w 1) and N (µ w 2 , V w 2) are compared. For calculating the seven distance measures, we sample 1,000 sibling embeddings from each sibling distribution. We use the covariance matrix of the sibling embedding, which defines the distribution, only for the diagonal components (*diag*(cov)) in the divergence measures,4and both diagonal and full components (full(cov)) in the distance measures. Previous studies assume that the covariance matrix is diagonal (diag(cov)) (Iwamoto and Yukawa, 2020; Yuksel ¨ et al., 2021). This assumption increases computational efficiency compared to *full*(cov), at the expense of loosing information on the non-diagonal elements. In our settings, representation of a sibling distribution N (µ w i , V w i ) in *diag*(cov) or full(cov) requires 2d or d(1 + d) parameters, respectively. ## 4.3 Result We show the results of the proposed method under various conditions in Table 2 and Table 3. As reported in previous studies (Rosin et al., 2022; Rosin and Radinsky, 2022), we find that the fine-tuned model (MLM*temp*) achieves high performance in all settings. Moreover, for the hidden layers, we have confirmed that our method, by using the last four layers (MLMpre|*temp*,four), yields even higher correlations than using only the last layer (MLMpre|*temp*,last). Prediction measures. 
Our method allows us to try a variety of measures. In the diag(cov) setting, we try two divergences and seven distance measures. Comparing within divergence measures, Table 2 shows that KL(C1||C2) achieves high performance in all MLM conditions. This result means that many existing words acquire novel meanings. On the other hand, comparing the distance measures, we find that Canberra and Chebyshev outperform the commonly used cosine distance in MLM*temp* (Table 2 and Table 3). Since the cosine distance makes underestimations in MLMs (Zhou 3https://huggingface.co/bert-base-uncased | Model | | | | | |--------------|-------------|-------------|--------------|--------------| | Measure | MLMpre,last | MLMpre,four | MLMtemp,last | MLMtemp,four | | KL(C1||C2) | 0.075 | 0.130 | 0.414 | 0.431 | | KL(C2||C1) | 0.100 | 0.117 | 0.361 | 0.411 | | Jeff(C1||C2) | 0.090 | 0.129 | 0.391 | 0.409 | | Bray-Curtis | 0.217 | 0.241 | 0.464 | 0.480 | | Canberra | 0.192 | 0.251 | 0.455 | 0.517 | | Chebyshev | 0.154 | 0.166 | 0.517 | 0.478 | | City Block | 0.198 | 0.140 | 0.461 | 0.459 | | Correlation | 0.191 | 0.266 | 0.480 | 0.463 | | Cosine | 0.190 | 0.270 | 0.478 | 0.480 | | Euclidean | 0.198 | 0.249 | 0.473 | 0.474 | | Model | | | | | |-------------|-------------|-------------|--------------|--------------| | Measure | MLMpre,last | MLMpre,four | MLMtemp,last | MLMtemp,four | | Bray-Curtis | 0.219 | 0.263 | 0.460 | 0.467 | | Canberra | 0.195 | 0.246 | 0.502 | 0.489 | | Chebyshev | 0.145 | 0.132 | 0.529 | 0.451 | | City Block | 0.192 | 0.248 | 0.414 | 0.452 | | Correlation | 0.181 | 0.286 | 0.481 | 0.468 | | Cosine | 0.189 | 0.272 | 0.479 | 0.454 | | Euclidean | 0.204 | 0.231 | 0.454 | 0.457 | et al., 2022), this result suggests that it is better to calculate the absolute distance per dimension as in Canberra and Chebyshev. Components of the covariance matrices. When applying the distance measures, the vectors can be extracted from the full or diagonal covariance matrix. From Table 3 we see that using all components of the covariance matrix (full(cov)) further improves performance obtaining a correlation coefficient of 0.529 (MLM*temp*,last, full(cov), Chebyshev). Previous studies had assumed that the covariance matrix is diagonal for computational convenience (Iwamoto and Yukawa, 2020; Yuksel ¨ et al., 2021). However, as our results show, further performance improvements can be obtained by considering all components of the covariance matrix. Here onwards, we will refer to the best setting (i.e. MLM*temp*,last, *full*(cov), Chebyshev) as the **Proposed** method. ## 4.4 Comparisons Against Prior Work In this section, we compare our proposed method against related prior work. We do not re-implement or re-run those methods, but instead compare using the published results from the original papers. Word2Gausslight (Iwamoto and Yukawa, 2020): They apply Gaussian Embeddings (Vilnis and McCallum, 2015) based architecture in each time period. For each word, they define a computationally lightweight Gaussian embedding as follows: the mean vector is the vector of the word2vec learned by the initialization method (Kim et al., 2014), and the covariance matrix is the diagonal matrix, uniformly weighted by frequency. They calculate the KL divergence of the Gaussian embeddings for the semantic variation prediction. Word2Gauss (Yuksel et al. ¨ , 2021): They apply pure Gaussian Embeddings (Vilnis and McCallum, 2015). 
For a given word, the mean vector and the covariance matrix of the Gaussian Embedding are trained using the innerproduct with the positive examples and the KL divergence with the negative examples. For computational convenience and to reduce the number of parameters, they use a diagonal covariance matrix. After training separate word embedding models for each time period, the mean vectors are aligned between time periods using a rotation matrix (Hamilton et al., 2016), and predictions are made using cosine distance or Jeffrey's divergence. They have reported the cosine distance as the best metric. MLM*temp* (Rosin et al., 2022): They fine-tuned the published BERT model to specific time periods. To adapt to specific time periods, they insert a special token indicating the time period at the beginning of the sentence in the target corpus, and fine-tuned on the corpora available for each time period. They use two measures for prediction: (a) the distance between the predicted probability of the target word in the sentence at each time period, and (b) the cosine distance of the average token vector at each time period. Their results report that the cosine distance is the best metric (MLM*temp*, Cosine). However, Kutuzov and Giulianelli (2020) have shown that the average pairwise cosine distance (3) is better than the cosine distance between average sibling embeddings. Based on this result, we only run this setting that MLM*temp* model with the average pairwise cosine distance (MLM*temp*, APD). MLMpre **w/ Temp. Att.** (Rosin and Radinsky, 2022): They propose a time-specific attention mechanism to adapt MLMs to specific time periods. They add time-specific vectors and an attention weight matrix to the published BERT as trainable parameters and perform additional training on the target corpora. During prediction, they use the cosine distance following Rosin et al. (2022). MLM*temp* **w/ Temp. Att.** (Rosin and Radinsky, 2022): It is the combination of the above two methods (MLM*temp* and MLMpre **w/ Temp.** Att.), which is considered as the current SoTA model for semantic variation prediction. They | Model | Spearman | |-----------------------|------------| | Word2Gausslight | 0.358 | | Word2Gauss | 0.399 | | MLMtemp, Cosine | 0.467 | | MLMtemp, APD | 0.479 | | MLMpre w/ Temp. Att. | 0.520 | | MLMtemp w/ Temp. Att. | 0.548 | | Proposed | 0.529 | add time-specific special tokens to the beginning of each sentence in the target corpus, and conduct additional training on the publicly available BERT model with the time-specific attention mechanism. They also use the cosine distance as used by Rosin et al. (2022). Experimental results are summarised in Table 4. This result shows that our proposed method achieves the second best performance compared to prior work. We can see that the contextualised mean embeddings based method (MLM*temp*) outperforms the non-contextualised distribution based methods (**Word2Gauss**light and **Word2Gauss**), and further improvement can be obtained by adding the time-specific attention mechanisms (MLMpre w/ Temp. Att. and MLM*temp* **w/ Temp. Att.**). Moreover, the contextualised distribution based approach (**Proposed**) can yield performance improvement similar to adding time-specific attention mechanisms. We will discuss the detailed analyses as follows. Comparison within the base model (MLM*temp*). Since our method is based on MLM*temp*, we compare performance within MLM*temp*. 
As in the previous work (Rosin et al., 2022), we discuss the results when using the cosine distance. Table 4 shows that the average pairwise cosine distance (MLM*temp*, APD) outperforms the cosine distance between average sibling embeddings (MLM*temp*, Cosine). Moreover, from Table 2 and Table 3, we can see that our distribution based method outperforms the previous method using only the mean embeddings (0.467 in Table 4) in most settings (0.478 by MLM*temp*,last, diag(cov), 0.480 by MLM*temp*,four, *diag*(cov), and 0.479 by MLM*temp*,last, full(cov)). This result indicates the importance of considering not only the mean but also the variance of the sibling embeddings. Comparison against SoTA. Although our proposed method and the SoTA MLM*temp* **w/ Temp.** Att. are based on the same model MLM*temp*, their configurations are significantly different. Specifically, MLM*temp* **w/ Temp. Att.** adds a timespecific attention mechanism to the model and learns its parameters with additional training, whereas our proposed method uses only MLM*temp* and thus does not require additional parameters or training. Although according to Table 4, MLM*temp* w/ Temp. Att. reports a correlation of 0.548 and marginally outperforms the Proposed method, which obtains a correlation of 0.529, we find no statistically significant difference between those two methods.5 ## 4.5 Ablation Study We conduct an ablation study to understand the importance of (i) predicting semantic variation with sibling distributions N (µ w i , V w i ), and (ii) constructing sibling distributions from the mean µ w iand covariance V w iof sibling embeddings. Based on our best setting **Proposed** (MLM*temp*,last, full(cov), Chebyshev), we define two variants: (i) predicting semantic variation score using mean vectors µ w 1 and µ w 2 only as previous studies, and (ii) constructing a sibling distribution with the identity matrix N (µ w i ,I) instead of the covariance matrix V w i . In the SemEval-2020 Task 1 English evaluation set, the existence of a semantic change (binary judgement) and its degree (continuous judgement) are provided. Therefore, due to the limited space, we analyse the top eight semantically changed words with the highest degrees of semantic changes and the bottom eight semantically stable words with the lowest degrees of semantic change. From Table 5, we see that our distribution-based variants (V w i = I and Proposed) eliminate overestimation or underestimation problems in using mean vectors only (w/o V w i ). The variant w/o V w i correctly detects words *plane* and *graft* that have changed meaning significantly between time periods. However, this variant also reports underestimation (*stab* and bit) and overestimation (*contemplation* and *chairman*) in other words, whose meanings are changed/stable but the mean vec-5To measure the statistical significance, we use the Fisher transformation (Fisher, 1992). 
| Word | Gold | w/o | V w i = I | Proposed | | |---------------|--------|-------|-------------|------------|----| | V w i | | | | | | | rank ∆ | rank | rank | rank | | | | plane | 1 | ✓ | 3 | 18 | 15 | | tip | 2 | ✓ | 7 | 9 | 7 | | prop | 3 | ✓ | 16 | 1 | 4 | | graft | 4 | ✓ | 2 | 36 | 36 | | record | 5 | ✓ | 15 | 12 | 14 | | stab | 7 | ✓ | 31 | 10 | 11 | | bit | 9 | ✓ | 27 | 11 | 9 | | head | 10 | ✓ | 23 | 28 | 28 | | multitude | 30 | ✗ | 24 | 35 | 35 | | savage | 31 | ✗ | 20 | 26 | 26 | | contemplation | 32 | ✗ | 1 | 37 | 37 | | tree | 33 | ✗ | 33 | 31 | 30 | | relationship | 34 | ✗ | 26 | 34 | 34 | | fiction | 35 | ✗ | 21 | 29 | 29 | | chairman | 36 | ✗ | 5 | 32 | 33 | | risk | 37 | ✗ | 10 | 19 | 21 | | Spearman | 1.000 | 0.070 | 0.503 | 0.529 | | tors are changed little/significantly. This is because it makes predictions based only on the mean of sibling embeddings. On the other side, the distribution-based variants (V w i = I and Proposed) can appropriately rank semantically changed words (∆ = ✓) that have small changes in mean vectors (*stab* and bit), and stable words (∆ = ✗) that have large changes in mean vectors (*contemplation* and *chairman*).6 Moreover, we find that even with the distribution-based variants, using covariance matrices V w icomputed from sibling embeddings yields even better performance than identity matrices (V w i = I). This result further verifies our hypothesis that considering the mean and the variance of the sibling embeddings is important for semantic change detection tasks. ## 5 Conclusion We proposed a method to detect semantic variations of words using sibling embeddings. Experimental results on SemEval-2020 Task 1 English dataset show that the proposed method consistently outperforms methods that use only the mean embedding vectors, and reports results comparable to the current SoTA. Furthermore, a qualitative analysis shows that the proposed method correctly detects semantic variation of words, which are either over/underestimated by the existing methods. ## 6 Limitations Language-related limitations. For the ease of the analysis, we conducted experiments using only the English dataset in this study. Although our proposed method can be applied to any language, its performance must be evaluated on languages other than English. For example, the SemEval2020 Task 1 dataset includes Latin, German, and Swedish language datasets, in addition to English, and can be used for this purpose. In particular, our proposed method requires only pretrained MLMs and does not require additional training data for the target languages, which makes it easily scalable to many languages. Availability of MLMs for the target language. Experimental results show that the quality of the MLM is an important factor determining the performance of the proposed method. For example, the proposed method reports good performance with vanilla BERT model in Table 2 but further gains in performance can be obtained with the fine-tuned BERT model on masked time stamps. However, since our method assumes the availability of pretrained MLMs, a problem arises when trying to adapt our method to minor languages where no pretrained MLMs are available. This limitation could be mitigated to an extent by using multilingual MLMs. For example, Arefyev and Zhikov (2020) demonstrated that satisfactory levels of accuracies can be obtained for semantic change detection by using multilingual MLMs. 
Our proposed method can further benefit from the fact that new and larger MLMs are being publicly released for many languages in the NLP community. ## 7 Ethical Considerations In this paper, we proposed a distribution based method using publicly available MLMs, and evaluated with the SemEval-2020 Task 1 English data. Although we have not published any datasets or models, Basta et al. (2019) shows that pretrained MLMs encode and even amplify unfair social biases such as gender or racial biases. Given that we obtain sibling distributions from such potentially socially biased MLMs, we must further evaluate the sensitivity of our method for such undesirable social biases. ## Acknowledgements This work was supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2139. Danushka Bollegala holds concurrent appointments as a Professor at University of Liverpool and as an Amazon Scholar. This paper describes work performed at the University of Liverpool and is not associated with Amazon. ## References Taichi Aida, Mamoru Komachi, Toshinobu Ogiso, Hiroya Takamura, and Daichi Mochihashi. 2021. A comprehensive analysis of PMI-based models for measuring semantic differences. In Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation, pages 21–31, Shanghai, China. Association for Computational Lingustics. Reem Alatrash, Dominik Schlechtweg, Jonas Kuhn, and Sabine Schulte im Walde. 2020. CCOHA: Clean corpus of historical American English. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6958–6966, Marseille, France. European Language Resources Association. Nikolay Arefyev and Vasily Zhikov. 2020. BOS at SemEval-2020 task 1: Word sense induction via lexical substitution for lexical semantic change detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 171–179, Barcelona (online). International Committee for Computational Linguistics. Pierpaolo Basile, Annalina Caputo, Tommaso Caselli, Pierluigi Cassotti, and Rossella Varvara. 2020. Diacrita @ evalita2020: Overview of the evalita2020 diachronic lexical semantics (diacr-ita) task. In *Proceedings of the Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020)*. CEUR Workshop Proceedings (CEUR-WS.org). Evaluation Campaign of Natural Language Processing and Speech Tools for Italian, EVALITA 2020 ; Conference date: 17-12-2020. Christine Basta, Marta R. Costa-jussa, and Noe Casas. ` 2019. Evaluating the underlying gender bias in contextualized word embeddings. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 33–39, Florence, Italy. Association for Computational Linguistics. Christin Beck. 2020. DiaSense at SemEval-2020 task 1: Modeling sense change via pre-trained BERT embeddings. In *Proceedings of the Fourteenth Workshop* on Semantic Evaluation, pages 50–58, Barcelona (online). International Committee for Computational Linguistics. Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. 
Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Re, Dorsa ´ Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramer, Rose E. ´ Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2021. On the Opportunities and Risks of Foundation Models. Paul Cook and Suzanne Stevenson. 2010. Automatically identifying changes in the semantic orientation of words. In *Proceedings of the Seventh International* Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA). Haim Dubossarsky, Simon Hengchen, Nina Tahmasebi, and Dominik Schlechtweg. 2019. Time-out: Temporal referencing for robust modeling of lexical semantic change. In *Proceedings of the 57th Annual* Meeting of the Association for Computational Linguistics, pages 457–470, Florence, Italy. Association for Computational Linguistics. Kawin Ethayarajh. 2019. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics. John R. Firth. 1957. A synopsis of linguistic theory 1930-55. *Studies in Linguistic Analysis*, pages 1 – 32. R. A. Fisher. 1992. *Statistical Methods for Research* Workers, pages 66–70. Springer New York, New York, NY. Mario Giulianelli, Marco Del Tredici, and Raquel Fernandez. 2020. ´ Analysing lexical semantic change with contextualised word representations. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 3960– 3973, Online. Association for Computational Linguistics. William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489–1501, Berlin, Germany. Association for Computational Linguistics. 
Renfen Hu, Shen Li, and Shichen Liang. 2019. Diachronic sense modeling with deep contextualized word embeddings: An ecological view. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3899–3908, Florence, Italy. Association for Computational Linguistics. Ran Iwamoto and Masahiro Yukawa. 2020. RIJP at SemEval-2020 task 1: Gaussian-based embeddings for semantic change detection. In *Proceedings of the* Fourteenth Workshop on Semantic Evaluation, pages 98–104, Barcelona (online). International Committee for Computational Linguistics. E. T. Jaynes. 2003. *Probability Theory*. Cambridge University Press. Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. In *Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science*, pages 61–65, Baltimore, MD, USA. Association for Computational Linguistics. Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In *WWW 2015*, pages 625– 635. Andrei Kutuzov, Erik Velldal, and Lilja Ovrelid. 2022. Contextualized embeddings for semantic change detection: Lessons learned. *Northern European Journal of Language Technology*, 8. Andrey Kutuzov and Mario Giulianelli. 2020. UiOUvA at SemEval-2020 task 1: Contextualised embeddings for lexical semantic change detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 126–134, Barcelona (online). International Committee for Computational Linguistics. Andrey Kutuzov, Lilja Ovrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1384–1397, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Andrey Kutuzov and Lidia Pivovarova. 2021. RuShiftEval: a shared task on semantic shift detection for Russian. In Computational linguistics and intellectual technologies: Papers from the annual conference Dialogue. Severin Laicher, Sinan Kurtyigit, Dominik Schlechtweg, Jonas Kuhn, and Sabine Schulte im Walde. 2021. Explaining and improving BERT performance on lexical semantic change detection. In *Proceedings of* the 16th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 192–202, Online. Association for Computational Linguistics. Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Toma´s Ko ˇ cisk ˇ y, Sebastian Ruder, Dani Yogatama, ´ Kris Cao, Susannah Young, and Phil Blunsom. 2021. Mind the gap: Assessing temporal generalization in neural language models. In Advances in Neural Information Processing Systems. Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-collados. 2022. TimeLMs: Diachronic language models from Twitter. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics:* System Demonstrations, pages 251–260, Dublin, Ireland. Association for Computational Linguistics. Matej Martinc, Petra Kralj Novak, and Senja Pollak. 2020. Leveraging contextual embeddings for detecting diachronic semantic shift. In *Proceedings of the* Twelfth Language Resources and Evaluation Conference, pages 4811–4819, Marseille, France. European Language Resources Association. 
Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, null null, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden. 2011. Quantitative analysis of culture using millions of digitized books. *Science*, 331(6014):176–182. Syrielle Montariol, Matej Martinc, and Lidia Pivovarova. 2021. Scalable and interpretable semantic change detection. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4642–4652, Online. Association for Computational Linguistics. Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020. Information-theoretic probing for linguistic structure. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4609–4622, Online. Association for Computational Linguistics. Guy D. Rosin, Ido Guy, and Kira Radinsky. 2022. Time masking for temporal language models. In *Proceedings of the Fifteenth ACM International Conference* on Web Search and Data Mining, WSDM '22, pages 833–841, New York, NY, USA. Association for Computing Machinery. Guy D. Rosin and Kira Radinsky. 2022. Temporal attention for language models. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1498–1508, Seattle, United States. Association for Computational Linguistics. Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. SemEval-2020 task 1: Unsupervised lexical semantic change detection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1–23, Barcelona (online). International Committee for Computational Linguistics. Zhaochen Su, Zecheng Tang, Xinyan Guan, Lijun Wu, Min Zhang, and Juntao Li. 2022. Improving temporal generalization of pre-trained language models with lexical semantic change. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6380–6393, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Nina Tahmasebi, Lars Borina, and Adam Jatowtb. 2021. Survey of computational approaches to lexical semantic change detection. Computational approaches to semantic change, 6:1. Elizabeth Closs Traugott and Richard B. Dasher. 2001. Prior and current work on semantic change, Cambridge Studies in Linguistics, page 51–104. Cambridge University Press. Luke Vilnis and Andrew McCallum. 2015. Word representations via gaussian embedding. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA. Zijun Yao, Yifan Sun, Weicong Ding, Nikhil Rao, and Hui Xiong. 2018. Dynamic word embeddings for evolving semantic discovery. In *WSDM 2018*, page 673–681. Arda Yuksel, Berke U ¨ gurlu, and Aykut Ko ˘ c¸. 2021. Semantic change detection with gaussian word embeddings. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3349–3361. Kaitlyn Zhou, Kawin Ethayarajh, Dallas Card, and Dan Jurafsky. 2022. Problems with cosine as a measure of embedding similarity for high frequency words. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 2: Short Papers), pages 401–423, Dublin, Ireland. Association for Computational Linguistics. Yi Zhou and Danushka Bollegala. 2021. Learning sensespecific static embeddings using contextualised word embeddings as a proxy. 
In *Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation*, pages 493–502, Shanghai, China. Association for Computational Linguistics.

Yi Zhou and Danushka Bollegala. 2022. On the curious case of l2 norm of sense embeddings. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 2593–2602, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

## A Information Of Sibling Distribution

In semantic variation prediction, prior work has applied the mean embedding $\mu_i^w$ of the sibling distribution $D(w, C_i)$ for each word $w$. However, since these methods compress the multiple vectors of $D(w, C_i)$ into a single vector $\mu_i^w$, there is a risk of losing the information contained in each vector (Pimentel et al., 2020). To discuss the amount of information a sibling distribution holds, we analyse the relationship between the size of a sibling distribution $D(w, C_i)$ (i.e., the word frequency $N_i^w$) and the rank of the covariance matrix $V_i^w$ calculated from $D(w, C_i)$.

Figure 2 shows the relationship between the frequency of 1,000 randomly sampled words and the rank of their covariance matrices. For each word, we construct a covariance matrix from sibling embeddings as in (2). These matrices have $d \times d$ dimensions (BERT base models have a hidden size of $d = 768$), and we use their full components (full(cov)) for computing their ranks. We see that there is a strong correlation between the frequency and the rank of the covariance matrix, and when the frequency exceeds the dimension size, the rank remains constant at the dimensionality of the contextualised embedding space. This result implies that, up to the dimensionality of the contextualised embedding space, the covariance matrix computed from the sibling distribution $D(w, C_i)$ retains information about the individual occurrences of a word. Given that contextualised embeddings are often high dimensional (e.g., 768, 1024, etc.), the covariance matrix $V_i^w$ computed from the sibling distribution $D(w, C_i)$ preserves sufficient information about $w$ for the semantic variations related to $w$.

![11_image_0.png](11_image_0.png)

In this analysis, we showed an interesting trend between the word frequency and the rank of the covariance matrix. We speculate that this result may be related to the trend between sense frequency and the length of sense representations reported in a previous study (Zhou and Bollegala, 2022). However, we leave the investigation of this interesting trend to future research.

## B List Of Divergence Measures

We describe the divergence measures as detailed next. For simplicity, we denote the two Gaussian distributions $\mathcal{N}(\mu_1^w, V_1^w)$ and $\mathcal{N}(\mu_2^w, V_2^w)$ as $\mathcal{N}_1^w$ and $\mathcal{N}_2^w$, respectively.

## Kullback-Leibler

$$\mathrm{KL}(\mathcal{N}_{1}^{w}\,\|\,\mathcal{N}_{2}^{w})=\frac{1}{2}\Big(\mathrm{tr}(V_{2}^{w\,-1}V_{1}^{w})-d-\log\frac{\det(V_{1}^{w})}{\det(V_{2}^{w})}+(\mu_{2}^{w}-\mu_{1}^{w})^{\top}V_{2}^{w\,-1}(\mu_{2}^{w}-\mu_{1}^{w})\Big)\tag{4}$$

## Jeffrey's

$$\begin{aligned}\mathrm{Jeff}(\mathcal{N}_{1}^{w}\,\|\,\mathcal{N}_{2}^{w})&=\frac{1}{2}\mathrm{KL}(\mathcal{N}_{1}^{w}\,\|\,\mathcal{N}_{2}^{w})+\frac{1}{2}\mathrm{KL}(\mathcal{N}_{2}^{w}\,\|\,\mathcal{N}_{1}^{w})\\&=\frac{1}{4}\Big(\mathrm{tr}(V_{2}^{w\,-1}V_{1}^{w})+\mathrm{tr}(V_{1}^{w\,-1}V_{2}^{w})-2d\\&\quad+(\mu_{2}^{w}-\mu_{1}^{w})^{\top}V_{2}^{w\,-1}(\mu_{2}^{w}-\mu_{1}^{w})+(\mu_{1}^{w}-\mu_{2}^{w})^{\top}V_{1}^{w\,-1}(\mu_{1}^{w}-\mu_{2}^{w})\Big)\end{aligned}\tag{5}$$

## C List Of Distance Measures

We describe the distance measures as detailed next.
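Before listing the individual measures, the following sketch (our illustration, not the released implementation; the function names, the random stand-in embeddings, the 16-dimensional toy hidden size, and the small regularisation constant are our own choices) shows how a sibling distribution is built from a word's contextualised embeddings in two corpora, scored with the KL divergence of Appendix B, and compared against the Chebyshev distance between the mean vectors.

```python
# Illustrative sketch: score the semantic change of one word from two sets of
# sibling embeddings by comparing the Gaussians N(mu_1, V_1) and N(mu_2, V_2).
import numpy as np
from scipy.spatial.distance import chebyshev

def gaussian_kl(mu1, cov1, mu2, cov2):
    """KL( N(mu1, cov1) || N(mu2, cov2) ) for full covariance matrices."""
    d = mu1.shape[0]
    cov2_inv = np.linalg.inv(cov2)
    diff = mu2 - mu1
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    return 0.5 * (np.trace(cov2_inv @ cov1) - d
                  - (logdet1 - logdet2)
                  + diff @ cov2_inv @ diff)

def sibling_distribution(embeddings, diag_only=True, eps=1e-6):
    """Mean and (regularised) covariance of a word's sibling embeddings."""
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False)
    if diag_only:                              # the diag(cov) variant
        cov = np.diag(np.diag(cov))
    return mu, cov + eps * np.eye(mu.shape[0])  # eps keeps cov invertible

# e1, e2: (num_occurrences, hidden) contextualised embeddings of one word drawn
# from corpora C1 and C2 (e.g. an MLM's last layer); random stand-ins here.
rng = np.random.default_rng(0)
e1 = rng.normal(size=(200, 16))
e2 = rng.normal(loc=0.3, size=(150, 16))
mu1, V1 = sibling_distribution(e1)
mu2, V2 = sibling_distribution(e2)
print("KL score:", gaussian_kl(mu1, V1, mu2, V2))
print("Chebyshev between means:", chebyshev(mu1, mu2))
```

In practice, the resulting per-word scores are ranked and correlated (Spearman) against the gold degrees of semantic change, as in the main experiments.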
w(i) denotes the i-th value of a word vector w and w denotes a subtracted vector from the average of all dimension values. ## Bray-Curtis $$\psi(\mathbf{w}_{1},\mathbf{w}_{2})={\frac{\sum_{i\in d}|\mathbf{w}_{1}(i)-\mathbf{w}_{2}(i)|}{\sum_{i\in d}|\mathbf{w}_{1}(i)+\mathbf{w}_{2}(i)|}}$$ $$(6)$$ Canberra #### berra $$\psi(\pmb{w_1},\pmb{w_2})=\sum_{i\in d}\frac{|\pmb{w_1}(i)-\pmb{w_2}(i)|}{|\pmb{w_1}(i)|+|\pmb{w_2}(i)|}$$ $$({\boldsymbol{\delta}})$$ $$\quad(7)$$ Chebyshev $$\psi(\mathbf{w}_{1},\mathbf{w}_{2})=\operatorname*{max}_{i}|\mathbf{w}_{1}(i)-\mathbf{w}_{2}(i)|$$ City Block $$\psi(\mathbf{w}_{1},\mathbf{w}_{2})=\sum_{i\in d}|\mathbf{w}_{1}(i)-\mathbf{w}_{2}(i)|\quad{\mathrm{~(9)~}}$$ Correlation #### Hallsof $$\psi(\color{blue}{w_1},\color{blue}{w_2})=1-\frac{\overline{\color{blue}{w}}_1\cdot\overline{\color{blue}{w}}_2}{||\overline{\color{blue}{w}}_1||_2\ ||\overline{\color{blue}{w}}_2||_2}$$ #9. $$(10)$$ Cosine $$\psi(\mathbf{w}_{1},\mathbf{w}_{2})=1-{\frac{\mathbf{w}_{1}\cdot\mathbf{w}_{2}}{\|\mathbf{w}_{1}\|_{2}\,\|\mathbf{w}_{2}\|_{2}}}$$ $$(11)$$ Euclidean $$\psi(\mathbf{w}_{1},\mathbf{w}_{2})=||\mathbf{w}_{1}-\mathbf{w}_{2}||_{2}$$ $$(12)$$ ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✓ A2. Did you discuss any potential risks of your work? 7 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 4 ✓ B1. Did you cite the creators of artifacts you used? 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 4 ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We have experimented with the same usage as in the shared task for which the data was provided. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We were unable to find the information for the dataset we used. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4 ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. 
Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
li-etal-2023-transformer
TranSFormer: Slow-Fast Transformer for Machine Translation
https://aclanthology.org/2023.findings-acl.430
Learning multiscale Transformer models has been evidenced as a viable approach to augmenting machine translation systems. Prior research has primarily focused on treating subwords as basic units in developing such systems. However, the incorporation of fine-grained character-level features into multiscale Transformer has not yet been explored. In this work, we present a Slow-Fast two-stream learning model, referred to as TranSFormer, which utilizes a "slow" branch to deal with subword sequences and a "fast" branch to deal with longer character sequences. This model is efficient since the fast branch is very lightweight by reducing the model width, and yet provides useful fine-grained features for the slow branch. Our TranSFormer shows consistent BLEU improvements (larger than 1 BLEU point) on several machine translation benchmarks.
# Transformer: Slow-Fast Transformer For Machine Translation Bei Li1∗ , Yi Jing1, Xu Tan2, Zhen Xing3, Tong Xiao1,4†**and Jingbo Zhu**1,4 1School of Computer Science and Engineering, Northeastern University, Shenyang, China 2Microsoft Research Asia, 3Fudan University 4NiuTrans Research, Shenyang, China {libei_neu,jingyi_neu}@outlook.com, xuta@microsoft.com zxing20@fudan.edu.cn, {xiaotong,zhujingbo}@mail.neu.edu.cn ## Abstract Learning multiscale Transformer models has been evidenced as a viable approach to augmenting machine translation systems. Prior research has primarily focused on treating subwords as basic units in developing such systems. However, the incorporation of finegrained character-level features into multiscale Transformer has not yet been explored. In this work, we present a Slow-Fast two-stream learning model, referred to as TranSFormer, which utilizes a "slow" branch to deal with subword sequences and a "fast" branch to deal with longer character sequences. This model is efficient since the fast branch is very lightweight by reducing the model width, and yet provides useful fine-grained features for the slow branch. Our TranSFormer shows consistent BLEU improvements (larger than 1 BLEU point) on several machine translation benchmarks. ## 1 Introduction Transformer (Vaswani et al., 2017) has demonstrated strong performance across a range of natural language processing (NLP) tasks. Recently, learning multiscale Transformer models has been evidenced as a promising approach to improving standard Transformer. Previous research on this line can be broadly categorized into two streams: one learns local fine-grained features by using a fixed-length window (Yang et al., 2019; Hao et al., 2019; Guo et al., 2020), linguistic-inspired local patterns (Li et al., 2022b), and a hybrid approach that combines convolution and self-attention models (Gulati et al., 2020) or run in parallel (Zhao et al., 2019); the other learns sequence representations by considering multiple subword segmentation/merging schemas (Wu et al., 2020). Despite the attractiveness of these approaches, previous work is based on an assumption that sub- ∗The work was done when the first author was an intern at Microsoft Research Asia. †Corresponding author. ![0_image_0.png](0_image_0.png) words are the basic units in sequence modeling, and therefore ignores smaller, more fine-grained character-level features. In fact, the benefits of using characters have long been appreciated, and character-based models have been discussed in several sub-fields of NLP, such as language modeling (Xue et al., 2022) and machine translation (Lee et al., 2017; Li et al., 2021; Gao et al., 2020). But there are still important problems one needs to address in multi-scale Transformer. The first of these is the computational challenge of dealing with long sequences. For example, when we represent an English text as a character sequence, the length of this sequence is in general 5× longer than that of the subword sequence. We therefore need to consider this length difference in model design. The second problem is that, from a multiscale learning perspective, learning text representations with features at different levels is not just making use of the syntactic hierarchy of language. To better model the problem, we need some mechanism to describe the interactions among these different linguistic units. 
In this study, we aim to exploit the potential of character-level representations in multiscale sequence models while maintaining computational efficiency. Drawing inspiration from the SlowFast convolutional models for video classification (Feichtenhofer et al., 2019), we propose the Slow-Fast Transformer (TranSFormer) model, which utilizes a fast, thin branch to learn fine-grained character-level features and a slow, wide branch to capture correlations among subword features. A cross-granularity attention layer is placed between the self-attention and feedforward sublayers to exchange cross-granularity information. This enables the slow branch to be aware of fine-grained features while providing optimized high-level representations of the input sequence to the fast branch. We also make use of character-to-word boundary information to model the interactions among neighboring characters in a word. Additionally, we develop a boundary-wise positional encoding method to better encode the positional information within words for the fast branch. Through a series of extensive experiments on the WMT'14 English-German, WMT'17 Chinese-English and WMT'16 English-Romanian tasks, we demonstrate that TranSFormer yields consistent performance gains while having a negligible increase in the number of parameters and computational cost. As a bonus, our TranSFormer is robust to errors caused by suboptimal tokenization or subword segmentation.

## 2 Related Work

Multiscale Transformer Learning multiscale Transformer models is a promising way to achieve further improvements on the machine translation task. A feasible way is to model global and local patterns to enhance Transformer models (Shaw et al., 2018; Yang et al., 2018, 2019; Zhao et al., 2019). These works mainly modeled the localness within a fixed window size over subword input features. Apart from these, Wu et al. (2018) partitioned the input sequence according to phrase-level prior knowledge and built an attention mechanism upon phrases. Similarly, Hao et al. (2019) proposed a multi-granularity self-attention mechanism, designed to allocate different attention heads to phrases of varying hierarchical structures. Perhaps the most related work to ours is UMST (Li et al., 2022b). They re-defined the sub-word, word and phrase scales specific to sequence generation, and modeled the correlations among the scales. However, the more fine-grained character-level scale has not been explored in previous work, due to the severe challenge of encoding long character sequences.

Character-level NMT Fully character-level neural machine translation originates from the recurrent machine translation system of Lee et al. (2017). They built a fully character-level encoder-decoder model and utilized convolution layers to integrate information among nearby characters. Cherry et al. (2018) show the potential of character-level models, which can outperform subword-level models when fully optimized. This is attributed to their greater flexibility in processing and segmenting the input and output sequences, though modeling such long sequences is time-consuming. More recently, several studies analyze the benefits of character-level systems in multilingual translation scenarios (Gao et al., 2020), in low-resource translation, and when translating to typologically diverse languages (Li et al., 2021). But these methods all simply view characters as basic units in the language hierarchy, and it is still rare to see the effective use of multi-scale learning on character-based language features. 
Multi-Branch Transformer The utilization of multi-branch architectures has been extensively studied in Transformer models. Early efforts in this area include the Weighted Transformer (Ahmed et al., 2017), which replaced the vanilla self-attention by multiple self-attention branches. Subsequently, the Multi-attentive Transformer (Fan et al., 2020) and Multi-Unit Transformer (Yan et al., 2020) have advanced this design schema by incorporating branch-dropout and switching noise inputs, respectively. Additionally, Wu et al. (2020) investigated the potential advantages of utilizing dual cross-attention mechanisms to simultaneously attend to both Sentencepiece (Kudo and Richardson, 2018) and subword (Sennrich et al., 2016). In this work, we take a forward step to exploit the potential of character features. We argue that a lightweight branch is sufficient to encode useful fine-grained features, an aspect that has not been previously investigated. ## 3 Method The proposed TranSFormer follows a encoderdecoder paradigm (see Figure 1) which involves two encoder branches operating at different input granularities. The original subword encoder, which has a large model capacity for fully learning correlations among input individuals, is defined as the slow branch. The other branch, designed to handle Character: A _ s w i s s _ b i c y c l e **Subword**: A swi@@ ss bicy@@ cle ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png) character-level representations using a thin encoder to efficiently capture correlations among characters, is referred to as the fast branch. Our goal is to use the fast branch to learn a fine-grained but less precise representation to complement the slow branch. In the following sections, we will elaborate the core design of Slow branch, Fast branch and the cross-granularity attention, respectively. ## 3.1 The Slow Branch For Subwords We use the standard Transformer as the slow branch due to its strong ability to model global interactions among input sequences. The input of the slow branch is the mixture of subwords and words since some high-frequency words have not been further divided into subwords. Following the suggestions in Li et al. (2022b), we adopt a graph convolutional network to model the inner correlations among words through the adjacency matrix As. To this end, the Slow branch then encodes the enhanced representation via the selfattention mechanism, SAN = Softmax( Q·KT √dk )·V , where Q, K, V are obtained through three independent projection matrix, such as Wq, Wk, Wv. A point-wise feed-forward network is followed, FFN = max(xW1 + b1, 0)W2 + b2, where W1 and W2 are transformation matrices and b1 and b2 are bias matrices. To bridge the gap between two granularities, we sandwich a new sublayer between the self-attention and the feed-forward network, to accomplish the feature interaction between the slow and fast branches. A straightforward idea is to employ a cross-attention similar with encoder-decoder attention in the decoder side. We will discuss more details in the Section 3.3. ## 3.2 The Fast Branch For Characters To enhance the efficiency of modeling long character-level inputs, we propose the use of a fast branch with a tiny hidden size. The hidden size is a critical factor in the computation of the selfattention network (Vaswani et al., 2017), and by reducing it, we can achieve faster computation. 
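To make the sublayer ordering concrete, below is a minimal sketch (ours, not the authors' released code) of one encoder block per branch: self-attention, followed by a cross-granularity attention whose keys and values come from the other branch, followed by the feed-forward network. The class name, post-norm residual wiring, head counts, and sequence lengths are our own assumptions; the slow/fast widths of 512/32 and FFN sizes of 2048/128 follow the configuration reported later in the paper.

```python
# Minimal sketch of a TranSFormer-style encoder block (illustrative only).
import torch
import torch.nn as nn

class SlowFastEncoderBlock(nn.Module):
    """Self-attention -> cross-granularity attention -> feed-forward,
    with keys/values of the cross attention taken from the other branch."""
    def __init__(self, hidden, other_hidden, ffn_dim, heads):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # kdim/vdim absorb the width mismatch, e.g. a 32-dim fast branch
        # attending over a 512-dim slow branch, and vice versa.
        self.cross_attn = nn.MultiheadAttention(hidden, heads,
                                                kdim=other_hidden,
                                                vdim=other_hidden,
                                                batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(hidden, ffn_dim), nn.ReLU(),
                                 nn.Linear(ffn_dim, hidden))
        self.norms = nn.ModuleList(nn.LayerNorm(hidden) for _ in range(3))

    def forward(self, x, other):
        x = self.norms[0](x + self.self_attn(x, x, x)[0])
        x = self.norms[1](x + self.cross_attn(x, other, other)[0])
        return self.norms[2](x + self.ffn(x))

# Slow branch: subwords, width 512; fast branch: characters, width 32.
slow = SlowFastEncoderBlock(hidden=512, other_hidden=32, ffn_dim=2048, heads=8)
fast = SlowFastEncoderBlock(hidden=32, other_hidden=512, ffn_dim=128, heads=4)

xs = torch.randn(2, 20, 512)   # (batch, subword length Ls, Hs)
xf = torch.randn(2, 90, 32)    # (batch, character length Lf, Hf)
ys, yf = slow(xs, xf), fast(xf, xs)   # per-block cross-granularity exchange
print(ys.shape, yf.shape)      # torch.Size([2, 20, 512]) torch.Size([2, 90, 32])
```

Here the `kdim`/`vdim` arguments play the role of the reduced-size key/value projections described in Section 3.3, so the narrow fast branch never has to be widened to the slow branch's dimensionality.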
To the best of our knowledge, this is the first attempt to design multiscale Transformer models that considers character-level features, as the long input sequence has previously hindered such exploration. While the fast branch may not be as powerful as the slow branch, it is still effective in learning fine-grained features. Our initial experiments have yielded two notable findings: 1) a slow branch with hidden size of 32 is sufficient for transferring fine-grained knowledge to the slow branch, and 2) cross-granularity fusion is crucial for the slow branch, while removing the reversed fusion in the fast branch has only a moderate effect on performance. We would ablate this settings in the Section 4.2. To further improve the modeling ability, we introduce several techniques as follows: Char Boundary Information The use of wordboundary information has been shown to effectively reduce the redundant correlations among subwords, as demonstrated in (Li et al., 2022b). This leads to the consideration of character-level modeling, which poses a more challenging problem ![3_image_0.png](3_image_0.png) due to the greater number of characters typically present within a word in comparison to subwords. The statistical analysis in Figure 2c further evidences it that a significant proportion of words contain more than 5 characters, while a much smaller number are divided into subwords. Thus, model may be unable to discern the distinction between the same character that belongs to the same word and that of distinct words. To address this issue, we propose the use of a character-level graph convolution network (GCN) to learn local, fine-grained features while also allowing each character to be aware of its proximity to other characters. GCN(Kipf and Welling, 2017) is a suitable choice for this task as it aggregates feature information from the neighbors of each node to encapsulate the hidden representation of that node. The computation can be described as: $$\mathrm{GCN}_{\mathrm{Fast}}=\sigma(\tilde{D}_{f}^{-\frac{1}{2}}\tilde{\mathcal{A}}_{f}\tilde{D}_{f}^{-\frac{1}{2}}\cdot x W_{f}^{g}),\quad(1)$$ A˜f = Af + IL denotes the adjacency matrix of the undirected graph with self-connections. Here IL denotes the identity matrix. D˜f is the degree matrix of the adjacency matrix A˜f . W g f is a linear transformation which is a trainable parameter. The character-level encoder architecture is illustrated in Figure 3. Boundary-wised Positional Encoding To further enhance the relative positional representation among characters, we design a boundary-wised positional encoding (PE) method. Our intuition is to provide each character with the ability to recognize which characters belong to the same word. Thus we restrict the relative window within each word, as illustrated in Figure 4. Vanilla relative positional encoding (Shaw et al., 2018) models the correlations in a fixed window size 2k + 1. Here we set k = 3, positions exceed k would be masked. Differently, the proposed boundary-wised PE is utilized to enhance the inner relative positional information among characters within each word. In our preliminary experiments, boundary-wised PE is helpful for stable training. ## 3.3 Cross-Granularity Fusion As depicted in Figure 3, the computation of the two branches in our TranSFormer architecture is separated, with each branch operating independently of the other's representation. 
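Before turning to the fusion mechanism, a small sketch (ours, not the authors' code) of the boundary-aware character GCN in Eq. (1): characters belonging to the same word are connected in the adjacency, which is symmetrically normalised and followed by a single linear transform. The example sentence, the per-character word indices, the omission of separator symbols, and the choice of ReLU as the activation σ are all illustrative assumptions.

```python
# Illustrative boundary-aware character GCN (cf. Eq. 1), not the released code.
import torch

def char_gcn(x, word_ids, weight, activation=torch.relu):
    """x: (L_f, H_f) character states; word_ids: (L_f,) word index per character."""
    # Characters of the same word are connected; the diagonal is included,
    # which corresponds to the self-loops of A_f + I.
    a_tilde = (word_ids.unsqueeze(0) == word_ids.unsqueeze(1)).float()
    d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5)                     # D^{-1/2}
    norm_adj = d_inv_sqrt.unsqueeze(1) * a_tilde * d_inv_sqrt.unsqueeze(0)
    return activation(norm_adj @ x @ weight)                      # sigma(D^-1/2 A D^-1/2 x W)

# "A swiss bicycle": word index of each character (separators omitted here).
word_ids = torch.tensor([0] + [1] * 5 + [2] * 7)   # A | swiss | bicycle
x = torch.randn(len(word_ids), 32)                 # fast-branch width H_f = 32
w = torch.randn(32, 32)
print(char_gcn(x, word_ids, w).shape)              # torch.Size([13, 32])
```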
To facilitate communication between the branches, we propose the utilization of a cross-granularity information fusion method within each encoder block. This method can be implemented through various options. Given the lengths of the slow and fast branches as Ls and Lf , and the hidden sizes as Hs and Hf , respectively, the goal is to seamlessly ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) integrate cross-scale information within each encoder block between the two branches. In the field of machine translation, it is straightforward to employ cross-attention mechanisms, such as encoderdecoder attention (Vaswani et al., 2017) or contextaware cross-attention (Voita et al., 2018), to capture correlations between the representations. Our default strategy is to employ a crossgranularity attention mechanism (namely CGA) sandwiched between the self-attention and feedforward network. The architecture is plotted in Figure 3. xf and xs denote the representation of the fast and slow branches, respectively. The challenge remains here is the mismatched feature shape between xs and xf Here, we take the Fast branch as an instance, we first normalize xs via xˆf = LN(xs). LN(·) denotes the layer normalization for stable optimization. Then xˆf is fed into CGA of the fast branch, the formulation is as follows: $$\begin{array}{r c l}{{\mathrm{ATTN}_{f}}}&{{=}}&{{\mathrm{Softmax}(\frac{x_{f}W_{f}^{q}\cdot({\hat{x}}_{f}W_{f}^{k})^{\mathsf{T}}}{\sqrt{d_{f}^{k}}}),}}\\ {{}}&{{}}&{{\mathrm{CGA}}}&{{=}}&{{\mathrm{ATTN}_{f}\cdot{\hat{x}}_{f}W_{f}^{v},}}\end{array}\qquad(2)$$ where the query is derived from the residual output of SAN in the Fast branch via xs ·W q f . The key and value are derived from the Slow branch via xˆfWk f and xˆfWv f , respectively. It is worthy to note that, the shape of Wk f and Wv f ∈ R Hs×Hf , to reduce the hidden size. Detailed transformation could be found in the left part of Figure 3. It is important to note that our proposed method of cross-granularity fusion is bidirectional, as opposed to the lateral connections used in the SlowFast (Feichtenhofer et al., 2019). Other alternative methods would be discussed in Section 4.3. ## 3.4 Interactions Between Encoder And Decoder In vanilla Transformer, the key and value of the encoder-decoder attention on the decoder side derives from the encoder output, however, there are two branches in our TranSFormer (See Figure 1). It is worthy to investigate how to effectively leverage the multi-granularity representations. Our default strategy is to regard the fast branch as an auxiliary to provide fine-grained features for the slow branch, thus only the output of the slow branch is exposed to the decoder. Besides this, there are also several feasible options. For example, we can fuse the outputs of two branches as the final encoder output, or building a double-branch encoder-decoder attention to attend two branches independently. We compares this options in our experiments. ## 4 Experiments 4.1 Experimental Setups Datasets The present study examines the performance of our proposed TranSFormer on several machine translation datasets: the WMT'14 EnglishGerman (En-De), WMT'16 English-Romanian (En-Ro) and WMT'17 Chinese-English (Zh-En) datasets. The En-De dataset comprises approximately 4.5 million tokenized sentence pairs, which were preprocessed following the same procedure as in Ott et al. (2018) to yield a high-quality bilingual training dataset. 
For validation, we use the *newstest2016* set, while the *newstest2014* set served as the test data. The En-Ro dataset consists of 610K bilingual sentence pairs, and we adopt the same preprocessing scripts as in Lee et al. (2018); Kasai et al. (2020), using a joint source and target BPE factorization with a vocabulary size of 40K. The newsdev2016 set is used for validation, while the newstest2016 set served as the test set. For the ZhEn task, we collect all the available parallel data for the WMT17 Chinese-English translation task, consisting 15.8M sentence pairs from the UN Parallel Corpus, 9M sentence pairs from the CWMT Corpus and about 332K sentence pairs from the News Commentary corpus. After carefully data filtering setups in Hassan et al. (2018), there are left 18M bilingual pairs. *newsdev2017* and *newstest2017* are served as the validation and test sets, respectively. Setups For the machine translation task, we mainly evaluate the proposed TranSFormer on base and big configurations. The hidden size of slow | Model | Enc. | Dec. | Base | Big | | | | |--------------------------------------|------------------------------------|----------|--------|-------------|-------------|-------------|------------| | Param | BLEU | Param | BLEU | | | | | | Transformer (Vaswani et al., 2017) | Sub | Sub | 65M | 27.30 | 213M | 28.40 | | | Transformer | Char | Sub | 63M | 26.56 | 208M | 28.05 | | | RPR (Shaw et al., 2018) | Sub | Sub | 65M | 27.60 | 213M | 29.20 | | | CSAN (Yang et al., 2019) | Sub | Sub | 88M | 28.18 | - | 28.74 | | | Localness (Yang et al., 2018) | Sub | Sub | 89M | 28.11 | 267M | 29.18 | | | MG-SA (Hao et al., 2019) | Sub | Sub | 89M | 28.28 | 271M | 29.01 | | | UMST (Li et al., 2022b) | Sub | Sub | 70M | 28.51 | 242M | 29.75 | | | Multiscale | Muse (Zhao et al., 2019) | Sub | Sub | - | - | 233M | 29.90 | | Double-Branch | Multi-Attentive (Fan et al., 2020) | Sub | Sub | - | - | 325M | 29.80 | | Multi-Unit (Yan et al., 2020) | Sub | Sub | 130M | 29.30 | - | - | | | ConvTransformer (Gao et al., 2020) † | Char | Char | 65M | 23.47 | - | - | | | Character-level | Fast Only (Hidden=32, L=6) | Char | Sub | 42M | 17.90(16.9) | - | - | | Fast Only (Hidden=512, L=6) | Char | Sub | 64M | 27.11(26.1) | 211M | 28.65(27.6) | | | Slow Only | Sub | Sub | 63M | 27.40(26.4) | 211M | 28.80(27.8) | | | Slow-Fast | TranSFormer (Hidden=32, L=6) | Char/Sub | Sub | 66M | 28.56(27.6) | 231M | 29.85(28.9 | | TranSFormer + ODE (Li et al., 2022a) | Char/Sub | Sub | 66M | 29.30(28.3) | - | - | | Table 1: Comparison with previous studies on the WMT En-De task. Models with † denote the re-implementing results based on our codebase within the same hyperparameters. BLEU at the right corner denotes the SacreBLEU. | (a) Previous work based on Big models | | | |-----------------------------------------|-------------|-------| | System | Params BLEU | | | Transformer-Big(Hassan et al., 2018) | - | 24.20 | | CSAN (Yang et al., 2019) | - | 25.01 | | Localness (Yang et al., 2018) | 307M | 25.03 | | UMST (Li et al., 2022b) | 307M | 25.23 | | (b) Our Big models | | | | subword-level Transformer-Big | 261M | 24.41 | | character-level Transformer-Big | 258M | 23.80 | | TranSFormer (Hidden=64) | 283M | 25.55 | Table 2: Results on WMT Zh-En. We compare several prior work of learning local patterns. branch is 512/1024 for base and big, respectively. And the filter size in FFN is 2048/4096. 
In our default setting, a width of 32 slow branch is enough to learn fine-grained features, and the corresponding filter size is set to 128. We both employ residual dropout, attention dropout and activation dropout. All values are 0.1, except the residual dropout 0.3 for big counterparts. Training and Evaluations The codebase is developed upon *Fairseq* (Ott et al., 2019). All experiments are conducted on 8 Tesla V100 GPUs. We use Adam (Kingma and Ba, 2015) optimizer with (0.9, 0.997), and the default learning rate schedule with 0.002 max value, 16, 000 warmup steps. For machine translation tasks, BLEU scores are computed by *mult-bleu.perl*, and we also provide the SacreBLEU1for En-De. The beam size is 4 for En-De and 8 for Zh-en, and the length penalty is 0.6 and 1.3, respectively. 1BLEU+case.mixed+numrefs.1+smooth.exp+ tok.13a+version.1.2.12 | Model | Param | BLEU | |----------------------------------------|---------|--------| | DELIGHT (Mehta et al., 2020) | 53M | 34.70 | | Baseline in MBART (Liu et al., 2020) | - | 34.30 | | Baseline in DISCO (Kasai et al., 2020) | - | 34.16 | | Transformer † (Vaswani et al., 2017) | 54M | 34.21 | | TNT† (Han et al., 2021) | 73M | 34.00 | | UMST (Li et al., 2022b) | 60M | 34.81 | | ODE Transformer (Li et al., 2022a) | 69M | 34.94 | | TranSFormer (Hidden=32) | 59M | 35.40 | Table 3: Results on the WMT En-Ro task. Results of En-De The results of the WMT EnDe task under both base and big configurations are summarized in Table 1. As evidenced by the results, our TranSFormer model demonstrates significant improvements in BLEU when compared to the Slow only model, with gains of up to 1.16/1.05 BLEU scores under the base/big configurations. Conversely, the Fast only baseline, which has a hidden size of 32, only attains a BLEU score of 17.90, leading to a considerable performance gap due to its limited capacity. However, it still contributes up to a 1.14 BLEU-point benefit to the Slow branch, indicating that the fine-grained correlations modeled by the Fast branch are complementary. Additionally, we present the results of prior works, which have employed both character-level and subword-level systems, and categorize them in terms of various aspects. TranSFormer can beat or on par with prior works with less parameters. This indicates the investigation of character-level mutliscale models is meaningful. Note that TranSFormer is computationally efficient, only requiring additional 15% training cost and negligible inference latency. And TranSFormer can also benefit from ![6_image_1.png](6_image_1.png) ![6_image_0.png](6_image_0.png) | Model | Input Granularity | BLEU | | |--------------------------------------------------------------------------------------------|---------------------|---------------|-------| | Slow-Branch | Fast-Branch | | | | TranSFormer | Subword | Character | 28.56 | | TranSFormer | Subword | Subword | 27.50 | | TranSFormer | Subword | Sentencepiece | 28.27 | | TranSFormer | Sentencepiece | Character | 28.60 | | (d) Input: Figuring out the impact of various input granularites for Slow and Fast branch. | | | | | Model | Enc. Output | BLEU | | | TranSFormer | Slow Branch | 28.56 | | | TranSFormer | Fast Branch | 23.11 | | | TranSFormer | Both Slow and Fast | 28.25 | | | (c) Interactions: Figuring out the impact of various encoder-decoder interaction manners on performance. | | | | advanced design, *e.g.*, another 0.74 improvement with ODE method (Li et al., 2022a). 
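For reference, the optimization setup in "Training and Evaluations" above can be made concrete with the inverse-square-root schedule that Fairseq commonly uses for Transformer recipes; the sketch below is our reconstruction under that assumption (the warmup initial value is our own choice), showing the learning rate implied by a 0.002 peak and 16,000 warmup steps.

```python
# Sketch of an inverse-square-root learning-rate schedule with linear warmup.
def inverse_sqrt_lr(step, peak_lr=0.002, warmup_steps=16000, init_lr=1e-7):
    """Learning rate at a given update step (step >= 1)."""
    if step < warmup_steps:
        # Linear warmup from init_lr up to peak_lr.
        return init_lr + (peak_lr - init_lr) * step / warmup_steps
    # After warmup, decay proportionally to the inverse square root of the step.
    return peak_lr * (warmup_steps ** 0.5) * (step ** -0.5)

for s in (1000, 16000, 64000, 100000):
    print(s, round(inverse_sqrt_lr(s), 6))
```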
## 4.2 Results Results of Zh-En The WMT'17 Zh-En task poses a significant challenge due to the linguistic differences between Chinese and English. Additionally, the Chinese language owns less characters per word than English. Table 2 shows the results of our comparison of the TranSFormer with prior works. We observe TranSFormer yields a 1 BLEU point improvements than the subword-level systems. Our TranSFormer model demonstrates superior performance compared to previous work that models local patterns, while maintaining efficient computational requirements. We will exploit whether TranSFormer can gain more benefits when incorporating these techniques on the slow branch. Results of En-Ro Furthermore, our empirical evaluations of the proposed TranSFormer architecture on the smaller WMT En-Ro dataset also demonstrate consistent improvements in BLEU scores as a result of the utilization of interactions among granularities. Notably, the TranSFormer model even outperforms the ODE Transformer (Li et al., 2022a), an advanced variant that leverages the advantages of high-order ordinary differential equations (ODE) solutions, by a substantial margin while incurring less computational cost. ## 4.3 Analysis This section provides ablation studies of TranSFormer in terms of several core techniques. Effect of width on Fast branch We first aim to explore TranSFormer under various widths of the Fast branch, including 16, 32, 64, 128 and 256. Results in Table 4a show that even a hidden size of 16 can provide helpful fine-grained features for the slow branch, and yielding almost 1 BLEU-point gains by bringing modest parameters. Empirically, a hidden of 32 and 64 deliver the best performance on base (En-De) and big (Zh-En) configurations, respectively. Further increasing the hidden layer dimension of the model results in no more gains, while requiring more computational cost. Fusion methods between branches In addition to our proposed fusion method CGA, there are several alternative techniques that can be considered. The most straightfoward one is to transform the hidden with a linear projection and then use a standard cross-attention. It delivers similar performance but consumes more parameters. Another option is to downsample the character-level representation from Lf to Ls, and then concatenate (namely DS + Concat) or sum (DS + Sum) the two representations. Although both of these methods have been found to outperform the Slow only baseline, they have not been found to be on par with CGA method. This may be due to the fact that downsampling may impede optimization due to the low compression ratio of text compared with images. Various interaction methods In Table 4c, we present a summary of various promising options for interactions between the encoder and decoder. Empirical results indicate that utilizing the Slow ![7_image_1.png](7_image_1.png) ![7_image_2.png](7_image_2.png) Table 5: Ablations on the fast branch in terms of several core design schemas. \begin{tabular}{c|c} Model & BLEU \\ \hline TransFormer (default: all blocks) & 28.56 \\ $+$ at the last encoder block (e.g., 6) & 27.99 \\ $+$ at bottom 3 blocks (e.g., 1/2/3) & 28.10 \\ $+$ at top 3 blocks (e.g., 4/5/6) & 28.40 \\ $+$ every 2 blocks (e.g., 1/3/5) & 28.20 \\ \end{tabular} ![7_image_4.png](7_image_4.png) ![7_image_5.png](7_image_5.png) branch as the output yields the highest performance. 
This can be attributed to the Fast branch's ability to provide fine-grained features and low-level semantic knowledge as auxiliary information to the Slow branch. Additionally, while utilizing the Fast branch as the encoder output results in inferior performance compared to the baseline, it still yields a significant improvement over the Slow only baseline (17.90). This highlights the effectiveness of the TranSFormer model in leveraging interactions between different granularities. Furthermore, we also evaluated a two-stream approach in the decoder, in which one stream attends to the Slow branch and the other attends to the Fast branch, with a gated mechanism being used to fuse the features. However, this method was not sufficient to further improve performance. We attribute this to the negative interactions brought by the Fast branch, increasing the optimization difficulty. Effect of various input granularities To ascertain whether the observed performance gains can be attributed to the complementary information provided by fine-grained character-level representations, we replaced the input of the fast branch with subword-level sequences, identical to that of the slow branch. The results presented in Table 4d demonstrate a degradation of up to 1 BLEU point. This can be attributed to the lack of distinct or complementary features provided by the fast branch and the limited capacity of the model in fully optimizing subword-level features. This observation further supports the hypothesis that the Slow-Fast design can learn complementary features for each granularity. Furthermore, we found that the TranS- ![7_image_0.png](7_image_0.png) Table 7: Comparison of different low-resource settings, including 50K, 500K, and 1000K training subsets sampled from the WMT En-De dataset. ![7_image_3.png](7_image_3.png) Former architecture with sentencepiece (Kudo and Richardson, 2018) as the fast branch input can also benefit from the two-branch design, due to the different segmentations. Additionally, our TranSFormer is a general design that can work well with a character-level fast branch and a sentencepiecelevel slow branch, yielding a BLEU score of 28.60, even slightly better than the subword-level one. Ablations on fast branch designs It is hard to directly learn the tedious character sequence. The proposed character-boundary injection serves as a crucial component in addressing this challenge. Without this injection, the TranSFormer model suffers from a significant decrease in BLEU (\#3). Furthermore, the situation is exacerbated when the boundary is replaced with a randomly initialized one (\#4), emphasizing the importance of the proposed character-boundary injection. Also, both removing the boundary-wised positional encoding (\#5) or replacing the vanilla attention by linear attention (Wang et al., 2020) (\#6) lead to modest BLEU degradation. While, there is no significant impact when using unidirectional CGA (\# 2, from Fast to Slow). Ablations on interactions in different levels Our default configuration permits the model to allocate interactions at each encoder layer. It is beneficial to determine how interaction frequency impacts the performance. Table 6 compares various interaction frequencies at different levels, including exclusively at the final encoder block, the bottom three blocks, the top three blocks and every two blocks. The experiments were conducted on ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) WMT En-De. 
It is evident that the default configuration delivers optimal performance. Interactions conducted at the top three blocks demonstrate superior results compared to those at the bottom three blocks. Furthermore, performing fusion solely at the last encoder block proves insufficient for the model to learn multiscale interactions effectively. BLEU *v.s.* **Depth and BPE mergings** Figure 5 plots the performance against model depths and BPE merging operations. The proposed TranSFormer architecture demonstrates consistent performance improvements as a result of increased encoder depth. Furthermore, an empirical evaluation of the TranSFormer against various byte-pair encoding (BPE) operations (Sennrich et al., 2016), on the slow branch of the model yields a statistically significant average gain of 1.1 BLEU scores over the Slow only baseline. Low resource setting and morphological evaluation Li et al. (2021) has shown that characterlevel systems are better at handling morphological phenomena and show strong performance in low-resource scenarios than subword-level systems. Consequently, we evaluate how TranSFormer behaves at these scenarios. For the low-resource setting, we randomly select subsets of 50K, 500K, and 1000K from the WMT En-De training corpus. TranSFormer achieves respective BLEU scores of 11.87, 22.75, and 25.30, while the character-only and subword-only Transformers yield approximate scores of 10.50/7.00, 20.50/22.00, and 22.50/23.50. This empirical evidence demonstrates that TranSFormer effectively amalgamates the benefits of both character-level and subword-level features. Moreover, Figure 6 plots the performance on MorphEval(Burlot and Yvon, 2017) benchmark. TranSFormer behaves better than subword solely in terms of Negation, Past, P-Gender and P-Number metrics. Comparisons in Efficiency Table 8 compares the FLOPs between baseline and our TranSFormer both in base and big configurations. Due to the light computation cost of the fast branch, TranSFormer only brings additional 0.3G/1.1G FLOPS in base/big configurations, respectively. Note that the bulk of the additional computational cost is associated with the upsampling/downsampling operations within the cross-granularity attention mechanism. This process aligns the hidden size between the two representations. ## 5 Conclusions In this work, we comprehensively leverage the potential of character-level features in multiscale sequence models while preserving high computational efficiency. To accomplish this, we propose a Slow-Fast Transformer architecture consisting of two branches in the encoder. The slow branch, akin to the vanilla Transformer, handles subwordlevel features, while the fast branch captures finegrained correlations among characters. By leveraging the complementary features provided by the fast branch, our TranSFormer demonstrates consistent improvements in BLEU scores on three widelyused machine translation benchmarks. Further indepth analyses demonstrate the effectiveness of the TranSFormer and its potential as a universal multiscale learning framework. ## Acknowledgments This work was supported in part by the National Science Foundation of China (No. 62276056), the National Key R&D Program of China, the China HTRD Center Project (No. 2020AAA0107904), the Natural Science Foundation of Liaoning Province of China (2022-KF-16-01), the Yunnan Provincial Major Science and Technology Special Plan Projects (No. 202103AA080015), the Fundamental Research Funds for the Central Universities (Nos. 
N2216016, N2216001, and N2216002), and the Program of Introducing Talents of Discipline to Universities, Plan 111 (No. B16009). ## Limitations The proposed TranSFormer architecture employs a two-branch design, which separately encodes character-level and subword-level features. Our original design of the proposed cross-granularity attention is to acknowledge the correlation between subwords and characters that belong to the same word. For example, a cross-granularity Gaussian distribution to let subwords pay more attention to the corresponding characters. However, the variability of word boundary information across sentences presents a challenge in effectively batching them and achieving high computational efficiency. This is an area of ongoing research, and will be the focus of future work. On the other hand, our current evaluation of the TranSFormer architecture is limited to machine translation tasks. It is worth exploring the potential of TranSFormer in optimizing character sequences on natural language understanding tasks and other sequence generation tasks, such as abstractive summarization. These tasks are more challenging in terms of encoding longer sequences, but we believe that TranSFormer can serve as a versatile backbone. We aim to verify its effectiveness on these tasks in the future. ## References Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. 2017. Weighted transformer network for machine translation. *CoRR*, abs/1711.02132. Franck Burlot and François Yvon. 2017. Evaluating the morphological competence of machine translation systems. In Proceedings of the Second Conference on Machine Translation, pages 43–55, Copenhagen, Denmark. Association for Computational Linguistics. Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting characterbased neural machine translation with capacity and compression. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4295–4305, Brussels, Belgium. Association for Computational Linguistics. Yang Fan, Shufang Xie, Yingce Xia, Lijun Wu, Tao Qin, Xiang-Yang Li, and Tie-Yan Liu. 2020. Multi-branch attentive transformer. *CoRR*, abs/2006.10270. Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. 2019. Slowfast networks for video recognition. In *2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019*, pages 6201–6210. IEEE. Yingqiang Gao, Nikola I. Nikolov, Yuhuang Hu, and Richard H.R. Hahnloser. 2020. Character-level translation with self-attention. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1591–1604, Online. Association for Computational Linguistics. Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang. 2020. Conformer: Convolution-augmented transformer for speech recognition. In *Interspeech 2020,* 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pages 5036–5040. ISCA. Qipeng Guo, Xipeng Qiu, Pengfei Liu, Xiangyang Xue, and Zheng Zhang. 2020. Multi-scale self-attention for text classification. 
In *The Thirty-Fourth AAAI* Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7847–7854. AAAI Press. Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. 2021. Transformer in transformer. *arXiv preprint arXiv:2103.00112*. Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, and Zhaopeng Tu. 2019. Multi-granularity selfattention for neural machine translation. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 887–897, Hong Kong, China. Association for Computational Linguistics. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving human parity on automatic chinese to english news translation. arXiv preprint arXiv:1803.05567. Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. 2020. Non-autoregressive machine translation with disentangled context transformer. In *Proceedings of the 37th International Conference on* Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings of Machine* Learning Research, pages 5144–5155. PMLR. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In *5th International Conference on Learning Representations, ICLR 2017, Toulon, France,* April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. *Transactions of* the Association for Computational Linguistics, 5:365– 378. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, pages 1173–1182, Brussels, Belgium. Association for Computational Linguistics. Bei Li, Quan Du, Tao Zhou, Yi Jing, Shuhan Zhou, Xin Zeng, Tong Xiao, JingBo Zhu, Xuebo Liu, and Min Zhang. 2022a. ODE transformer: An ordinary differential equation-inspired model for sequence generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8335–8351, Dublin, Ireland. Association for Computational Linguistics. Bei Li, Tong Zheng, Yi Jing, Chengbo Jiao, Tong Xiao, and Jingbo Zhu. 2022b. Learning multiscale transformer models for sequence generation. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, pages 13225–13241. PMLR. Jiahuan Li, Yutong Shen, Shujian Huang, Xinyu Dai, and Jiajun Chen. 2021. 
When is char better than subword: A systematic study of segmentation algorithms for neural machine translation. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 543–549, Online. Association for Computational Linguistics. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2020. Delight: Very deep and light-weight transformer. ArXiv preprint, abs/2008.00623. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Brussels, Belgium. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, New Orleans, Louisiana. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264–1274, Melbourne, Australia. Association for Computational Linguistics. Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity. *arXiv preprint arXiv:2006.04768*. Lijun Wu, Shufang Xie, Yingce Xia, Yang Fan, JianHuang Lai, Tao Qin, and Tie-Yan Liu. 2020. Sequence generation with mixed representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings of Machine* Learning Research, pages 10388–10398. PMLR. Wei Wu, Houfeng Wang, Tianyu Liu, and Shuming Ma. 2018. Phrase-level self-attention networks for universal sentence encoding. 
In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, pages 3729–3738, Brussels, Belgium. Association for Computational Linguistics. Linting Xue, Aditya Barua, Noah Constant, Rami AlRfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2022. ByT5: Towards a token-free future with pre-trained byte-to-byte models. *Transactions of the Association for Computational Linguistics*, 10:291–306. Jianhao Yan, Fandong Meng, and Jie Zhou. 2020. Multiunit transformers for neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1047–1059, Online. Association for Computational Linguistics. Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling localness for self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4449–4458, Brussels, Belgium. Association for Computational Linguistics. Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu. 2019. Convolutional selfattention networks. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 4040–4045, Minneapolis, Minnesota. Association for Computational Linguistics. Guangxiang Zhao, Xu Sun, Jingjing Xu, Zhiyuan Zhang, and Liangchen Luo. 2019. MUSE: parallel multiscale attention for sequence to sequence learning. CoRR, abs/1911.09483. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitation ✓ A2. Did you discuss any potential risks of your work? ✓ ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract ✗ A4. Have you used AI writing assistants when working on this paper? ✗ ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 
Section 4 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
guan-huang-2023-mitigating
Mitigating the Learning Bias towards Repetition by Self-Contrastive Training for Open-Ended Generation
https://aclanthology.org/2023.findings-acl.431
Despite the huge progress in myriad generation tasks, pretrained language models (LMs) such as GPT2 still tend to generate repetitive texts with maximization-based decoding algorithms for open-ended generation. We attribute their overestimation of token-level repetition probabilities to the learning bias: LMs capture simple repetitive patterns faster with the MLE loss. We propose self-contrastive training to penalize the output of a premature checkpoint of the same model when it incorrectly predicts repetition, which is shown to mitigate repetition effectively while maintaining fluency on two datasets. Furthermore, we find that LMs use longer-range dependencies to predict repetitive tokens than non-repetitive ones, which may be the cause of sentence-level repetition loops.
# Mitigating The Learning Bias Towards Repetition By Self-Contrastive Training For Open-Ended Generation Jian Guan, Minlie Huang∗ The CoAI Group, DCST, Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China j-guan19@mails.tsinghua.edu.cn, aihuang@tsinghua.edu.cn ## Abstract Despite the huge progress in myriad generation tasks, pretrained language models (LMs) such as GPT2 still tend to generate repetitive texts with maximization-based decoding algorithms for open-ended generation. We attribute their overestimation of token-level repetition probabilities to the learning bias: LMs capture simple repetitive patterns faster with the MLE loss. We propose self-contrastive training to penalize the output of a premature checkpoint of the same model when it incorrectly predicts repetition, which is shown to mitigate repetition effectively while maintaining fluency on two datasets. Furthermore, we find that LMs use longer-range dependencies to predict repetitive tokens than non-repetitive ones, which may be the cause of sentence-level repetition loops1. ## 1 Introduction Existing LMs prefer to generate repetitive texts for open-ended generation with greedy decoding or beam search (Welleck et al., 2020a). Even largescale pretrained LMs such as GPT3 (Brown et al., 2020) still generate redundant sentences (Dou et al., 2022). Despite many solutions proposed from the perspective of both training (Welleck et al., 2020b) and decoding (Holtzman et al., 2020), the cause of preference for repetition still needs to be clarified. By analyzing the training dynamics of LMs regarding (non-)repetitive tokens, we reveal the learning bias towards repetition: LMs capture simple repetitive patterns first, which dominate the output distribution throughout the input space, and then learn more non-repetitive patterns during training. We show that the repetition problem can be mitigated by only training more steps (i.e., allowing over-fitting), although the coherence with inputs will be impacted. Conversely, when trained insuf- ∗Corresponding author 1The code is available at https://github.com/ thu-coai/SelfCont ficiently, LMs will overestimate repetition probabilities even for golden prefixes. We propose selfcontrastive training (SELFCONT), which exploits the contrast with a premature checkpoint of the same model by penalizing its output when it incorrectly predicts repetition. Experiments on two datasets show that SELFCONT effectively alleviates repetition while maintaining fluency by factoring out the undesired repetition behaviors highlighted by the premature checkpoint. Besides the above analysis about overestimating token-level repetition probabilities during training, we also find that LMs use longer-range dependencies to predict repetitive tokens than non-repetitive ones. It may explain why LMs tend to fall into repetition loops (Xu et al., 2022). The problem may be solved by improving the modeling of long-range dependencies (e.g., increasing model sizes), which are left to future work. ## 2 Related Work Regarding the cause of the repetition problem, Fu et al. (2021) theoretically derived bounds of repetition probabilities of the first-order Markov LM, although it is difficult to extend the bounds to general LMs. 
Another line of work attributed repetition to error accumulation during generation (Welleck et al., 2020b; Arora et al., 2022), whereas LMs still prefer repetition even given golden prefixes. We divide recent works that alleviate repetition into training- and decoding-based methods: **(1) Training-based Methods.** Welleck et al. (2020b) proposed unlikelihood training (UL) to reduce the probabilities of repetitive generations. Lin et al. (2021) and Xu et al. (2022) further extended the framework at the token and sequence level, respectively. SELFCONT focuses on token-level modeling, which is orthogonal to sequence-level methods. Xi et al. (2021) adopted additional modules to learn repetition patterns and control repetition explicitly. **(2) Decoding-based Methods.** One straightforward solution to repetition is blocking the generation of repetitive n-grams (Paulus et al., 2018) or penalizing the probabilities of repetitive candidates (Keskar et al., 2019). Li et al. (2022) selected candidates that maximize the probability difference between different-sized models. Sampling-based decoding methods have also been shown to be effective in avoiding repetition, such as temperature sampling (Ficler and Goldberg, 2017), Top-k sampling (Fan et al., 2018), nucleus sampling (Holtzman et al., 2020), and typical sampling (Meister et al., 2022). Although these methods reduce superficial repetition, it is unclear whether they utilize the underlying long-range dependencies to maintain coherence.

## 3 Empirical Analysis

Neural networks (NNs) are expressive enough to approximate arbitrary input-output mappings. Using Fourier analysis, Rahaman et al. (2019) showed the *spectral bias* of NNs: they learn low-frequency components faster during training, which are less complex and vary globally without local fluctuation. Our key hypothesis is that simple repetitive patterns may be such low-frequency components and are learned by LMs early. In this section, we first formulate LMs (§3.1), and then investigate the training dynamics of LMs (§3.2) and their ability to model long-range dependencies (§3.3).

## 3.1 Language Models

LMs aim to fit the mapping xt = f(x1:t−1) defined by a training corpus, where x1:t is a sequence from the corpus. To this end, they are usually trained by minimizing the following cross-entropy loss:

$$\mathcal{L}=-\mathbf{x}_{t}^{\mathsf{T}}\cdot\log\left[\mathrm{softmax}\left(f_{\theta}(x_{1:t-1})\right)\right],\tag{1}$$

where xt ∈ {0, 1}|V| is the one-hot representation of xt indicating its index in the vocabulary V, and fθ(x1:t−1) ∈ R|V| is the output logits of the LM parameterized by θ. Predictably, with more training steps, argmax(fθ) is closer to the target function f. Early stopping (Morgan and Bourlard, 1989) is a commonly used regularization technique to avoid over-fitting, e.g., stopping training when the validation loss reaches the minimum. Since NNs prioritize learning low-complexity components, early stopping may result in unexpected generations. This motivates us to investigate whether simple repetitive patterns in human-written texts are learned first, thus dominating the generations.

## 3.2 Training Dynamics

We randomly sample 1k sequences containing 512 tokens from the Wikitext-103 dataset (Merity et al., 2016) and train GPT2base from scratch for 100 epochs2. Given a golden prefix x1:t−1, we regard the model prediction x̂t = argmax fθ(x1:t−1) as correct if x̂t = xt. We call xt or x̂t repetitive if it is included in x1:t−1, and non-repetitive otherwise.
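To make this probe concrete, the following is a minimal sketch (our illustration, not the authors' released code) of how the statistics above can be collected for one golden sequence. It assumes a Hugging Face-style GPT-2 interface; all function and variable names are ours.

```python
# Minimal sketch of the Section 3.2 probe: for every position t, take the greedy
# prediction x̂_t = argmax f_θ(x_{1:t-1}) under the golden prefix, then record
# (i) whether x̂_t already occurs in the prefix ("repetitive" prediction) and
# (ii) whether it matches the golden token x_t.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def repetition_stats(text: str, min_prefix: int = 1):
    ids = tokenizer(text, return_tensors="pt").input_ids[0]   # (T,)
    logits = model(ids.unsqueeze(0)).logits[0]                # (T, |V|), causal
    stats = {"repetitive_pred": 0, "correct": 0, "total": 0}
    for t in range(min_prefix, len(ids)):
        pred = logits[t - 1].argmax().item()                  # x̂_t given the golden prefix
        stats["repetitive_pred"] += int(pred in ids[:t].tolist())
        stats["correct"] += int(pred == ids[t].item())
        stats["total"] += 1
    return stats

print(repetition_stats("The cat sat on the mat because the cat was tired ."))
```

Tracking these counts over training checkpoints yields curves of the kind discussed next.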
Figure 1 plots the training curves, revealing the learning bias of the LM: (1) The initially learned components prefer to copy input tokens throughout the input space, as indicated by predicting repetitive tokens at ∼90% of positions for both golden and generated prefixes. (2) With golden prefixes, at those positions where xt is repetitive, the LM predicts repetition almost constantly during training. When xt is non-repetitive, the LM predicts more non-repetitive tokens with more training steps. The repetition ratio also gradually decreases in model-generated texts. (3) The token prediction accuracy improves faster when xt is repetitive, indicating that the LM learns repetitive patterns more easily. Moreover, we notice that the validation loss rises at the 1,500th step, where the LM predicts far more repetitive tokens than the ground truth. At the end of training, the generated text has a token repetition ratio closer to the ground truth, but manual inspection finds that the coherence with inputs is poor due to over-fitting. Appendix A.1 shows several generation cases.

2We use only 1k samples because we expect to over-fit these samples to observe how repetition in generated texts changes with the fitting degree, considering that it would be very time-consuming to fit the whole Wikitext-103 dataset.

## 3.3 Modeling Long-Range Dependencies

Figure 1 (Top) shows that LMs are still able to predict non-repetitive tokens conditioned on golden prefixes. However, it is still unclear why they get into repetition loops during generation and do not generate any non-repetitive tokens. To shed light on this behavior, we further investigate how LMs learn and utilize long-range dependencies. We fine-tune GPT2base on the training set of Wikitext-103, and examine the effect of the prefix length on the perplexity of tokens that have (or have not) appeared in the previous 250 tokens (called *repetitive* or *non-repetitive*), on both the original test set and model-generated texts. Figure 2 indicates: **(1) The LM only learns dependencies within ∼100 tokens overall.** When the prefix length is larger than 100, the perplexity on golden tokens no longer drops significantly (p ⩾ 0.05). **(2) The LM learns and utilizes longer-range dependencies to predict repetitive tokens than non-repetitive ones.** The perplexity on golden repetitive/non-repetitive tokens plateaus when the prefix length is larger than 160/50, respectively. The case is similar for generated texts. **(3) The LM uses short-range contexts to predict non-repetitive tokens regardless of the decoding algorithm.** Contexts beyond 100 tokens hardly help predict non-repetitive tokens, implying that sampling-based decoding reduces repetition through randomness instead of using long-range dependencies. Based on the above observations, we conjecture that LMs keep repeating the same sentence with maximization-based decoding (Xu et al., 2022) because they rarely learn long-range non-repetitive patterns beyond the sentence level. When generating long texts, LMs may struggle to maintain non-repetitive generation over a long range.
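Before turning to the verification experiments below, the prefix-length probe described in this subsection can be sketched as follows. This is an illustrative implementation under our own assumptions (a Hugging Face-style GPT-2 interface, one forward pass per position), not the authors' code; only the 250-token repetition window follows the text.

```python
# Rough sketch of the Section 3.3 probe: for a fixed prefix length, measure the
# perplexity the model assigns to target tokens, split by whether the target has
# appeared in the preceding 250 tokens ("repetitive") or not.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def ppl_by_repetition(ids: torch.Tensor, prefix_len: int, window: int = 250):
    """ids: 1-D tensor of token ids. Returns (ppl_repetitive, ppl_non_repetitive)."""
    nll = {"rep": [], "nonrep": []}
    for t in range(prefix_len, len(ids)):
        prefix = ids[t - prefix_len:t]                        # truncated context
        logits = model(prefix.unsqueeze(0)).logits[0, -1]     # prediction for token t
        logp = torch.log_softmax(logits, dim=-1)[ids[t]]
        key = "rep" if ids[t].item() in ids[max(0, t - window):t].tolist() else "nonrep"
        nll[key].append(-logp.item())
    to_ppl = lambda xs: math.exp(sum(xs) / len(xs)) if xs else float("nan")
    return to_ppl(nll["rep"]), to_ppl(nll["nonrep"])

# Toy usage; in practice ids would come from the Wikitext-103 test set.
ids = tokenizer("The cat sat on the mat . The cat sat on the mat again while the dog slept .",
                return_tensors="pt").input_ids[0]
print(ppl_by_repetition(ids, prefix_len=10))
```

Sweeping `prefix_len` over, e.g., 25 to 250 and plotting the two perplexities reproduces the kind of curves discussed above; recomputing one forward pass per position is slow but keeps the sketch simple.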
To test the idea, we train GPT2base from scratch on three datasets constructed from the training set of Wikitext-103: (1) Doriginal, where examples are directly sampled from the original training set; (2) Drandom, where each example contains 30 randomly sampled sentences; (3) Dnorept, where each example also contains 30 random sentences, but there is at most one token overlapping between any adjacent 5 sentences (generally the period "."). Each dataset consists of 20k examples. We then generate texts using greedy decoding conditioned on the first 50 tokens in the original test set and compute the ratio of texts which fall into loops (Holtzman et al., 2020).

| Training Sets | Doriginal | Drandom | Dnorept |
|---|---|---|---|
| Ratios (%) ↓ | 60.42 | 96.04 | 1.67 |

Table 1: Ratios of texts which get stuck into loops generated by LMs trained on different training sets.

As shown in Table 1, compared to Doriginal, the LM trained on Drandom has higher repetition ratios because it learns shorter-range non-repetitive patterns only within one sentence. Besides, although sentences in each Drandom example are unrelated, they can contain repetitive tokens3, leading the LM to learn spurious long-range repetitive patterns and fall into repetition loops. In contrast, the LM trained on Dnorept rarely gets into loops since it learns both repetitive and non-repetitive patterns almost within one sentence. Specifically, any adjacent five sentences in each Dnorept example are unrelated and hardly share tokens. These findings empirically support our hypothesis. Appendix A.2 shows more details.

3The ratios of tokens that have appeared in the previous 128 tokens are 12.52% and 32.05% for the training sets of Doriginal and Drandom, respectively. Drandom has even more repetition than Doriginal possibly because random sentences repeat high-frequency words more often than human-written sentences do.

| Models | PPL | MAUVE | R-16 | R-32 | R-128 | D-3 | D-4 | PPL | MAUVE | R-16 | R-32 | R-128 | D-3 | D-4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Greedy | Dataset: Wikitext-103 | | | | | | | Dataset: WritingPrompts | | | | | | |
| MLE | 2.55 | 3.29 | 41.23 | 70.18 | 83.28 | 19.27 | 23.95 | 1.76 | 0.61 | 71.08 | 87.20 | 89.43 | 9.61 | 11.40 |
| UL | 3.20 | 7.16 | 33.91 | 61.90 | 76.89 | 25.13 | 31.90 | 2.01 | 1.63 | 59.43 | 81.63 | 85.89 | 11.66 | 14.30 |
| ScaleGrad | 4.61 | 7.66 | 29.82 | 50.69 | 66.14 | 36.96 | 47.34 | 2.87 | 11.17 | 52.29 | 69.53 | 76.16 | 18.16 | 24.40 |
| SELFCONT | 6.47 | 17.34 | 23.29 | 39.41 | 62.46 | 46.71 | 57.66 | 3.30 | 20.05 | 35.13 | 53.69 | 74.09 | 23.30 | 31.52 |
| Nucleus | Dataset: Wikitext-103 | | | | | | | Dataset: WritingPrompts | | | | | | |
| MLE | 20.66 | 21.09 | 19.40 | 30.22 | 48.11 | 71.92 | 84.75 | 18.68 | 88.54 | 20.95 | 32.53 | 48.87 | 60.38 | 81.55 |
| UL | 15.54 | 21.78 | 18.45 | 29.57 | 46.69 | 69.63 | 82.87 | 19.39 | 81.49 | 18.36 | 27.98 | 42.65 | 63.92 | 82.93 |
| ScaleGrad | 12.41 | 25.69 | 18.59 | 29.24 | 45.19 | 66.35 | 80.23 | 14.14 | 77.82 | 18.62 | 27.80 | 41.22 | 56.74 | 77.27 |
| SELFCONT | 19.02 | 34.37 | 16.45 | 26.47 | 45.10 | 72.02 | 84.78 | 19.86 | 89.84 | 17.56 | 26.98 | 43.39 | 63.33 | 83.51 |
| Ground Truth | 18.31 | 100 | 17.38 | 27.92 | 46.29 | 72.34 | 84.20 | 24.01 | 100 | 16.36 | 26.47 | 42.30 | 74.49 | 90.01 |

Table 2: Results on Wikitext-103 (left block of columns) and WritingPrompts (right block) under greedy decoding (top) and nucleus decoding (bottom).

## 4 Self-Contrastive Training

We denote the premature checkpoint as fθ0, which frequently predicts repetitive tokens.
Formally, the SELFCONT algorithm is formulated as follows:

$$f_{\theta}=f_{\theta_{1}}+\mathrm{sg}(w\,f_{\theta_{0}}),\tag{2}$$
$$w=\lambda\,\mathbb{1}(x_{t}\not\in x_{1:t-1})\,\mathbb{1}(\hat{x}_{t}\in x_{1:t-1}),\tag{3}$$
$$\hat{x}_{t}=\operatorname{argmax}\left(f_{\theta_{0}}(x_{1:t-1})\right),\tag{4}$$

where sg(·) means stopping the back-propagation of gradients, λ is a tunable hyper-parameter that controls the extent of the repetition penalty, and 1(·) is the indicator function. fθ1 is the target LM initialized from fθ0, and we optimize fθ using Eq. 1 until the validation loss converges to the minimum. The gradient for each token u ∈ V then becomes:

$$\nabla_{u}\mathcal{L}=\frac{\exp(f_{\theta_{1}}|_{u})}{\sum_{v\in\mathcal{V}}w_{v,u}\exp(f_{\theta_{1}}|_{v})}-\mathbb{1}(u=x_{t}),\tag{5}$$
$$w_{v,u}=\exp\left(w\,(f_{\theta_{0}}|_{v}-f_{\theta_{0}}|_{u})\right),\tag{6}$$

where fθ1|u is the output of fθ1 at the u-th dimension. If w is 0, wv,u is always 1 and ∇uL degenerates to that of the vanilla LM. If w is not 0 and u is not xt, tokens with high logits under fθ0 will receive larger gradients than under the vanilla LM, since wv,u is mostly smaller than 1 for different v. As for u = xt (w ̸= 0), it may also be penalized with a positive gradient if fθ0|u is large enough, which usually indicates a dull token. By penalizing components that excessively prefer the repetitive or dull tokens highlighted by fθ0, fθ1 can utilize the more complex patterns learned later to generate texts.

## 5 Experiments

**Datasets** We conduct experiments on Wikitext-103 (Merity et al., 2016) and WritingPrompts (Fan et al., 2018). The prompt and story in each WritingPrompts example are concatenated as a sequence. We set the maximum sequence length to 512 and take the first 50 tokens as input to generate the rest. Table 3 presents the detailed statistics.

| Datasets | #Train | #Validation | #Test | Avg. Len |
|---|---|---|---|---|
| Wikitext-103 | 201,632 | 448 | 480 | 512 |
| WritingPrompts | 272,600 | 15,620 | 15,138 | 439 |

Table 3: Statistics of the datasets.

**Baselines** We compare SELFCONT to three baselines: MLE, token-level UL (Welleck et al., 2020b), and ScaleGrad (Lin et al., 2021). Since SELFCONT focuses on token-level modeling, we do not compare it to sentence-level methods that directly penalize repetition loops, e.g., DITTO (Xu et al., 2022).

**Implementation** All baselines are implemented based on GPT2base. We set the batch size to 16, the learning rate to 1e-4, and λ in Eq. 3 to 4.0. For SELFCONT, we fine-tune GPT2base for one epoch using MLE and take the checkpoint as fθ0 for both datasets. We use different p for different models based on the performance on the validation set. Appendix B shows more details.

**Metrics** We use perplexity (PPL) under GPT2xl to evaluate fluency, MAUVE (Pillutla et al., 2021) to measure the similarity between the golden and generated distributions, the token repetition ratio (R-l) to measure the ratio of tokens that appear in the previous l tokens (Welleck et al., 2020b), and distinct (D-n) (Li et al., 2016) to evaluate n-gram diversity. For all metrics, scores closer to the ground truth indicate better quality.

**Results** As shown in Table 2, SELFCONT outperforms the baselines on all metrics with greedy decoding. However, the high R-128 score shows that it can still generate repetition loops, due to the inability of small-scale LMs to model long-range dependencies.
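To make the training objective in Eqs. 2-4 concrete, the sketch below shows one way the token-level loss could be written in PyTorch. It is our illustration under stated assumptions (batched logits from the frozen premature checkpoint fθ0 and the trainable model fθ1, gold next-token labels aligned with the logits; λ = 4.0 follows the implementation details above), not the authors' released code.

```python
# Rough sketch (not the released SelfCont code) of the objective in Eqs. 2-4:
# the frozen premature checkpoint f_theta0 is added to the trainable logits
# f_theta1 with gradients stopped, but only at positions where the gold token is
# NOT a repetition while the premature checkpoint predicts one.
import torch
import torch.nn.functional as F

def selfcont_loss(logits1, logits0, prefix_ids, labels, lam=4.0):
    """
    logits1:    (B, T, V) trainable model f_theta1
    logits0:    (B, T, V) frozen premature checkpoint f_theta0
    prefix_ids: (B, T) tokens preceding each prediction (aligned with the logits)
    labels:     (B, T) gold next tokens x_t
    """
    _, T, V = logits1.shape
    with torch.no_grad():
        pred0 = logits0.argmax(dim=-1)                        # \hat{x}_t under f_theta0
        causal = torch.tril(torch.ones(T, T, dtype=torch.bool, device=labels.device))
        def in_prefix(tok):                                   # does tok[b, t] occur in the prefix at step t?
            hits = prefix_ids.unsqueeze(1) == tok.unsqueeze(-1)   # (B, T, T)
            return (hits & causal).any(dim=-1)                # (B, T)
        w = lam * (~in_prefix(labels) & in_prefix(pred0)).float()     # Eq. 3
    fused = logits1 + w.unsqueeze(-1) * logits0.detach()      # Eq. 2, sg(.) = detach
    return F.cross_entropy(fused.view(-1, V), labels.view(-1))        # Eq. 1 on f_theta
```

Since w depends on the gold token xt, the fused term only shapes the training signal; a padding mask and the usual label shifting would also be needed in practice.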
Using nucleus decoding, we see that different baselines can achieve similar repetition ratios and diversity to the truth by tuning p, while SELFCONT has better fluency and higher MAUVE scores. ## 6 Conclusion We present empirical studies on LMs' preference for repetition by analyzing the training dynamics, which highlights their learning bias towards simple repetitive patterns. We propose penalizing outputs of a premature checkpoint during training, which effectively mitigates repetition while maintaining fluency. We also provide insight into why LMs easily fall into repetition loops by showing their disability to model long-range dependencies. Sampling-based decoding reduces repetition through randomness but not utilizing long-range dependencies. We believe that maximization-based decoding can also generate coherent texts without repetition by improving the modeling of long-range dependencies, which is left to future work. ## Acknowledgments This work was supported by the National Science Foundation for Distinguished Young Scholars (with No. 62125604) and the NSFC projects (Key project with No. 61936010). This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2020GQG0005. ## 7 Limitations The limitations of this paper mainly lie in the following folds: (1) We do not provide any theoretical analysis for the correlation between long-range dependencies and repetition loops, as well as solutions to avoid repetition loops with maximizationbased decoding. (2) We do not discuss the source of LMs' learning bias, which may be caused by multiple factors, such as the Transformer architecture (Vaswani et al., 2017), the MLE loss, or the auto-regressive generation manner. (3) We conduct experiments based on GPT2 due to resource limitations. The conclusions may differ for extra-large LMs (such as GPT3). (4) We do not experiment with RNN-based models, which are also shown to prefer repetition (Elman, 1990). (5) We do not perform the manual evaluation to compare SELFCONT with baselines since we focus on repetition in this paper, which can be automatically evaluated reliably. Perplexity and mauve scores are also shown to correlate highly with manual evaluation for evaluating fluency and overall quality, respectively. ## References Kushal Arora, Layla El Asri, Hareesh Bahuleyan, and Jackie Chi Kit Cheung. 2022. Why exposure bias matters: An imitation learning perspective of error accumulation in language generation. In *Findings of* the Association for Computational Linguistics: ACL 2022, pages 700–710. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A Smith, and Yejin Choi. 2022. Is gpt-3 text indistinguishable from human text? scarecrow: A framework for scrutinizing machine text. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7250–7274. Jeffrey L Elman. 1990. Finding structure in time. *Cognitive science*, 14(2):179–211. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. 
Hierarchical neural story generation. In *Proceedings* of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898. Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In *Proceedings of the Workshop on Stylistic Variation*, pages 94–104. Zihao Fu, Wai Lam, Anthony Man-Cho So, and Bei Shi. 2021. A theoretical analysis of the repetition problem in text generation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 12848–12856. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. *arXiv preprint arXiv:1909.05858*. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 110–119. The Association for Computational Linguistics. Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and Mike Lewis. 2022. Contrastive decoding: Open-ended text generation as optimization. *arXiv* preprint arXiv:2210.15097. Xiang Lin, Simeng Han, and Shafiq Joty. 2021. Straight to the gradient: Learning to use novel tokens for neural text generation. In *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pages 6642–6653. PMLR. Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell. 2022. Typical decoding for natural language generation. *arXiv preprint arXiv:2202.00666*. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. *arXiv preprint arXiv:1609.07843*. Nelson Morgan and Hervé Bourlard. 1989. Generalization and parameter estimation in feedforward nets: Some experiments. *Advances in neural information* processing systems, 2. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations. Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. Mauve: Measuring the gap between neural text and human text using divergence frontiers. In *Advances in Neural Information Processing Systems*, volume 34, pages 4816–4828. Curran Associates, Inc. Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. 2019. On the spectral bias of neural networks. In *International Conference on* Machine Learning, pages 5301–5310. PMLR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in neural information processing systems*, pages 5998–6008. Sean Welleck, Ilia Kulikov, Jaedeok Kim, Richard Yuanzhe Pang, and Kyunghyun Cho. 2020a. Consistency of a recurrent language model with respect to incomplete decoding. 
In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5553–5568, Online. Association for Computational Linguistics. Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020b. Neural text generation with unlikelihood training. In International Conference on Learning Representations. Yadong Xi, Jiashu Pu, and Xiaoxi Mao. 2021. Taming repetition in dialogue generation. *CoRR*, abs/2112.08657. Jin Xu, Xiaojiang Liu, Jianhao Yan, Deng Cai, Huayang Li, and Jian Li. 2022. Learning to break the loop: Analyzing and mitigating repetitions for neural text generation. In *Advances in Neural Information Processing Systems*. ## A Details For Empirical Analysis A.1 Training Dynamics Table 4 shows several cases generated by the LM with greedy decoding at different training steps. We summarize the findings as follows: (1) In the beginning, the LM keeps repeating the high-frequency word "<eos>," indicating that it does not capture phrase-level dependencies yet. (2) At the 1500th step, the LM first generates a few fluent sentences and then gets stuck into the repetition of "the building," showing that it learns long-range dependencies conditioned on the golden prefix while the repetitive patterns dominate the probability distributions conditioned on the generated prefix. This case suggests the global tendency towards repetition for out-of-distribution inputs. (3) At the 6000th step, the LM can generate long, fluent texts without repetition. However, it is difficult for the LM to maintain coherence with inputs due to over-fitting. For example, in the generated first sentence, "she had begun in 1962," "she" conflicts with "he" in the input. ## A.2 Long-Range Dependencies Observation For the experiment in Figure 2, we generate texts with three decoding algorithms conditioned on the first 50 tokens on the test set. Ancestral decoding means directly sampling tokens from the original probability distribution. For nucleus decoding, we set p to 0.9. Figure 3 shows the performance of GPT2large, which shows similar results with GPT2base in Figure 2. ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) Verification For the experiment in Table 1, we use the same approach to construct the corresponding validation sets of 480 examples for Doriginal, Drandom and Dnorept, and train three LMs until the best validation performance. Table 5 shows several generation cases with greedy decoding. The LMs trained on Doriginal and Drandom fall into repetition loops. Although the LM trained on Dnorept also generates sentences that have previously appeared, it does not get stuck into loops. We further investigate whether the three LMs show the selfreinforcement effect: the more times a sentence is repeated in the context, the higher the probability of continuing to generate that sentence (Holtzman et al., 2020; Xu et al., 2022). Figure 4 indicates that the LMs trained on Doriginal and Drandom show the above effect, while the LM trained on Dnorept does not. The results suggest that longer-range repetitive patterns biased LMs to fall into repetition loops through the self-reinforcement effect whether such patterns are true or spurious. The LM trained on Dnorept always generate sentences in a limited set due to greedy decoding which aims to find the global maxima of probability distributions, instead of the preference for repetition loops. ## B Hyper-Parameters We decide the hyper-parameters λ in Eq. 
3 and p for nucleus sampling by searching for the value that makes the R-64 score of generated texts closest to the ground truth on the validation set. We search λ in the range {1.0, 2.0, 3.0, 4.0, 5.0, 6.0}, and p in the range {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. Table 6 shows the settings of p for different models. Training Set **Cases** ![7_image_0.png](7_image_0.png) | Doriginal | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . He has appeared in several films , including the television series The Bill , The Bill Goes to Washington , and The Bill Goes to Washington . He has also appeared in several films , including The Bill Goes to Washington , The Bill Goes to Washington , and The Bill Goes to Washington . He has also appeared in several films , including The Bill Goes to Washington , The Bill Goes to Washington , and The Bill Goes to Washington . <eos> Boulter was born in London , England , on 23 May 1986 . He is the third child of actor and actress Robert Boulter and his wife , Susan . He is the third of five children born to his wife Susan and their three children , Robert , Roberta , and Roberta . Robert Boulter 's father , Robert Boulter , was a film director and producer . He was the first actor to be cast in a film role , and the first to be cast in a television series . He was also the first actor to be cast in a television series . <eos> Boulter 's father , Robert Boulter , was a film director and producer . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . He was the first actor to be cast in a television series . 
He was the first actor to be cast in a television series | |-------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Drandom | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . 
The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a small , rectangular structure that was built in the late 19th century . The first of these was the <unk> , a smal | | Dnorept | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . The first two were built by the British Royal Navy . It was also released on the iTunes Store on September 28 , 2010 . It is also possible that he was a member of the royal family . He also said that he would not be returning to the team . @ 5 m ) wide and 2 feet ( 0 @. The song was written by producer and songwriter David Gilmour . It was also released on the iTunes Store on September 28 , 2010 . It was also released on the iTunes Store on September 28 , 2010 . It was also released on the iTunes Store on September 28 , 2010 . @ 5 million ( US $ 2 @,@ 000 ) . The song was written by producer and songwriter David Gilmour . He also said that he would not be returning to the team . It was also released on the iTunes Store on September 28 , 2010 . It is also possible that he was a member of the royal family . @ 5 m ) wide and 2 feet ( 0 @. The two ships were to be joined by two smaller ships . It was also released on the iTunes Store on September 28 , 2010 . He also said that he would not be returning to the team . It was also released on the iTunes Store on September 28 , 2010 . @ 5 million ( US $ 2 @,@ 000 ) worldwide . The song was written by David Gilmour and directed by David Gilmour . It was also released on the iTunes Store on September 28 , 2010 . It is also possible that he was a member of the royal family . He also said that he would not be returning to the team . @ 5 m ) wide and 2 feet ( 0 @. The two ships were protected by armour plates of 100 millimeters ( 3 @. It was also released on the iTunes Store on September 28 , 2010 . It was also released on the iTunes Store on September 28 , 2010 . | ![7_image_1.png](7_image_1.png) | Models | Wikitext-103 | WritingPrompts | |-----------|----------------|------------------| | MLE | 0.9 | 0.9 | | UL | 0.7 | 0.8 | | ScaleGrad | 0.5 | 0.6 | | SELFCONT | 0.6 | 0.7 | Table 6: Settings of p for nucleus sampling. As for baselines, we follow the original papers to set α to 1.0 for UL and γ to 0.2 for ScaleGrad. As for the choice of fθ0 , we empirically choose the checkpoint after training for one epoch, which allows enough training steps for self-contrastive training. We use the premature checkpoint of the same model instead of other models since different models may have different biases. 
It costs about 24 hours to train SELFCONT on Wikitext-103 (∼10 epochs) or CNN News (∼6 epochs). The results are based on one NVIDIA Tesla V100 (32GB memory) with a random single run. ## C Modeling Token-Level Repetition We compare SELFCONT with baselines in terms of the performance for modeling token-level repetition. As shown in Table 7, SELFCONT achieves higher overall accuracy, higher F1 score on nonrepetitive tokens, and comparable F1 score on repetitive tokens. ## D Case Study Table 8 and Table 9 show the cases generated by different models on Wikitext-103 with greedy decoding and nucleus decoding, respectively. We see that SELFCONT can still get stuck into loops with greedy decoding since it hardly learns longer-range dependencies than standard LMs. Although sam- | Models | Acc | Repetitive | Non-Repetitive | | | | | |-----------|-------|--------------|------------------|------|------|------|------| | P | R | F1 | P | R | F1 | | | | MLE | 78.9 | 67.4 | 87.0 | 75.9 | 90.2 | 73.9 | 81.2 | | UL | 80.5 | 70.9 | 83.0 | 76.5 | 88.2 | 78.9 | 83.3 | | ScaleGrad | 81.6 | 75.9 | 76.1 | 76.0 | 85.2 | 85.0 | 85.1 | | SELFCONT | 82.3 | 78.5 | 73.9 | 76.1 | 84.4 | 87.4 | 85.9 | pling helps reduce superficial repetition, it does not utilize underlying long-range dependencies to maintain long-range coherence. Therefore, it is important to improve the modeling of long-range dependencies to essentially solve the repetition problem in future work. | Models | Cases (Greedy Decoding) | |------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | MLE | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . He has also appeared in the television series The Bill , The Bill 2 , The Bill and The Bill 3 . <eos> Boulter was born in London , England , and grew up in the United States . He studied at the University of California , Berkeley , and the University of California , Berkeley . He was a member of the British Academy of Film and Television Arts ( BAFTA ) in 2001 . He has appeared in several television series , including The Bill , The Bill 2 , The Bill 3 , The Bill 4 , The Bill 5 , The Bill 6 , The Bill 7 , The Bill 8 , The Bill 9 , The Bill 10 , The Bill 11 , The Bill 12 , The Bill 13 , The Bill 14 , The Bill 15 , The Bill 16 , The Bill 17 , The Bill 18 , The Bill 19 , The Bill 20 , The Bill 21 , The Bill 22 , The Bill 23 , The Bill 24 , The Bill 25 , The Bill 26 , The Bill 27 , The Bill 28 , The Bill 29 , The Bill 30 , The Bill 31 , The Bill 32 , The Bill 33 , The Bill 34 , The Bill 34 , The Bill 35 , The Bill 36 , The Bill 37 , The Bill 39 , The Bill 40 , The Bill 41 , The Bill 42 , The Bill 43 , The Bill 44 , The Bill 45 , The Bill 46 , The Bill 47 , The Bill 48 , The Bill 49 , The Bill 50 , The Bill 51 , The Bill 52 , The Bill 53 , The Bill 54 , The Bill 55 , The Bill 56 , The Bill 57 , The Bill 58 , The Bill 59 , The Bill 60 , The Bill 61 , The Bill 62 , The Bill 63 , The Bill 64 , The Bill 65 , The Bill 66 , The Bill 67 , The Bill 68 , The Bill 69 , The Bill 70 , The Bill 71 , The Bill 72 , The Bill 73 , The Bill 74 , The Bill 75 , The Bill 76 , The Bill 77 , The Bill 78 , The Bill 79 , The Bill 80 , The Bill 81 , The Bill 82 , The Bill 83 , The Bill 84 , The Bill 85 , The Bill 86 , The Bill 87 , The Bill 88 , The Bill 89 , The Bill 90 , The Bill 91 , The Bill 92 , The Bill 93 | | UL | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . 
He has also appeared in several films , including the comedy @-@ drama The Man Who Would Be King ( 2002 ) , the science fiction film The Man Who Would Be Super ( 2004 ) , the science fiction film The Man Who Would Be Super ( 2006 ) , the science fiction film The Man Who Would Be Super ( 2008 ) , the science fiction film The Man Who Would Be Super ( 2012 ) , the science fiction film The Day After Tomorrow ( 2013 ) , the science fiction film The Day After Tomorrow ( 2014 ) , the science fiction film The Day After Tomorrow ( 2015 ) , the science fiction film The Day After Tomorrow ( 2016 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow ( 2017 ) , the science fiction film The Day After Tomorrow | | ScaleGrad | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . In 2002 he appeared as a character in the BBC 's crime drama series The Secret Service . 
He has also worked as a consultant for several films including The Man Who Would Be King ( 2004 ) , The Man Who Would Never Die ( 2007 ) , The Man Who Would Never Be King 2 ( 2009 ) , The Man Who Would Never Be King 3 ( 2011 ) , The Man Who Would Never Be King 4 ( 2013 ) , The Man Who Would Never Be King 5 ( 2014 ) , The Man Who Would Never Be King 6 ( 2015 ) , The Man Who Would Never Be King 7 ( 2016 ) , The Man Who Would Never Be King 8 ( 2017 ) , The Man Who Would Never Be King 9 ( 2017 ) , The Man Who Would Never Be King 10 ( 2017 ) , The Man Who Would Never Be King 11 ( 2017 ) , The Man Who Would Never Be King 12 ( 2017 ) , The Man Who Would Never Be King 13 ( 2017 ) , The Man Who Would Never Be King 14 ( 2017 ) , The Man Who Would Never Be King 15 ( 2017 ) , The Man Who Would Never Be King 16 ( 2017 ) , The Man Who Would Never Be King 17 ( 2017 ) , The Man Who Would Never Be King 18 ( 2017 ) , The Man Who Would Never Be King 19 ( 2017 ) , The Man Who Would Never Be King 20 ( 2017 ) , The Man Who Would Never Be King 21 ( 2017 ) , The Man Who Would Never Be King 22 ( 2017 ) , The Man Who Would Never Be King 23 ( 2017 ) , The Man Who Would Never Be King 24 ( 2017 ) , The Man Who Would Never Be King 25 ( 2017 ) , The Man Who Would Never Be King 26 ( 2017 ) , The Man Who Would Never Be King 27 ( 2017 ) , The Man Who Would Never Be King 28 ( 2017 ) , The Man Who Would Never Be King 29 ( 2017 ) , The Man Who Would Never Be King 30 ( 2017 ) , The Man Who Would Never Be King 31 ( 2017 ) , The Man Who Would Never Be King 32 ( 2017 ) , The Man Who Would Never Be King 33 ( 2017 ) , The Man Who Would Never Be King 34 ( 2017 ) , The Man Who Would Never Be King 35 ( 2017 ) , The Man Who Woul | | SELFCONT | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . Boulter appeared in two episodes of the British comedy @-@ drama Oz in 2001 , playing an American businessman who helps an Italian woman ( Joanna Cassidy ) escape from her abusive husband ( Michael Sheen ) . He also played a character in the 2004 film Heat , which was directed by Michael Medvedev . Boulter appeared in two episodes of the British comedy @-@ drama Body of Proof in 2005 , playing a retired police officer who is accused of murdering his wife ( Christine Fay ) . He appeared in two episodes of the British television drama Body of Proof in 2006 , playing a retired judge who defends his client , Simon Ashdown ( Stephen Fry ) . Boulter also appeared in the Australian television series The X Factor in 2007 , playing a retired judge who defends his client , Kiernan Straker ( Simon Ashdown ) . <eos> Boulter was nominated for a Golden Globe Award for Best Actor - Motion Picture Musical or Comedy , but lost to Hugh Grant for the role of Seamus Heaney . In 2010 , he appeared in the British comedy @-@ drama The X Factor , playing a retired judge who is accused of murdering his wife ( Christine Fay ) . He also appeared in the Australian television series The X Factor in 2010 , playing a retired judge who defends his client , Simon Ashdown . In 2012 , he appeared in the British comedy @-@ drama Body of Proof , playing a retired judge who is accused of murdering his wife ( Joanna Cassidy ) . He also appeared in the American television series The X Factor in 2013 , playing a retired judge who is accused of murdering his wife ( Christine Fay ) . 
<eos> Boulter has appeared in a number of films , including the 2005 film Heat , and the 2008 comedy Body of Proof . He appeared in the British television series Heat 2 , and the 2009 comedy Body of Proof . In 2011 , he appeared in the Australian television series The X Factor . In 2012 , he appeared in the British television series The X Factor . In 2013 , he appeared in the British television series The X Factor . In 2014 , he appeared in the Australian television series The X Factor . In 2015 , he appeared in the British television series The X Factor . In 2016 , he appeared in the Australian television series The X Factor | | Table 8: Cases generated by different models with greedy decoding on Wikitext-103. The inputs are highlighted in | | Table 8: Cases generated by different models with greedy decoding on Wikitext-103. The inputs are highlighted in bold. | Models | Cases (Nucleus Decoding) | |-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | MLE | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . Since 1994 , he has appeared in a wide range of productions . He has played Crocodile Hunter , a domestic terrorist in A Change Is Gonna Come . 
His first feature film role was in a 1983 film , The Perfect Numbers , written by Ridley Scott . A science fiction parody of the British mafia , his next film role was the horror comedy The Imperfect People . The film was a box office hit and has been named one of the greatest movies ever made . <eos> Boulter portrayed a divorcee living in Chicago , Illinois , from the start of its development in 1986 , when he met Hollywood agent Kevin De Matos . The two became very close and married on August 29 , 1988 . He has become a great fan of James Cameron 's 1999 film Avatar . In December 2011 , he starred as a roadie who stands in a line at a restaurant . <eos> <eos> = = Early life = = <eos> <eos> Robert Boulter was born in Dundee , Scotland , on November 6 , 1961 , and raised in Dundee , Scotland . He attended Eales College , Dundee . He received an MBE for his work at Eales as a schoolteacher in 1973 , and graduated from Dundee University in 1974 . After teaching history to three young teenagers , he acted in numerous short films . <eos> <eos> = = Career = = <eos> <eos> He made his acting debut in 1976 with the short film The Quarryman , playing the character Andre Taylor in a variety of movies and television shows . Boulter has appeared on television and cinema advertisements as well . <eos> In 1982 , Boulter co @-@ starred in the video game Dr. No . The following year he made his film debut in the video game E1 Story , played by Terry Gilliam . He also appeared in the 1984 video game Doctor Who , starring Matthew Broderick . <eos> Boulter met producer Shane Bobbitt in 1982 , and the two became friends , and Broderick made Boulter his stand @-@ up comedy partner . On the strength of their relationship , Boulter starred in several feature film projects for the American television | | UL | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . He starred as the title character in 2008 's A Charlie Brown Christmas , directed by Adam Mickiewicz . Boulter has also played a variety of other roles over the years , including as the titular character on <unk> USA in 2002 , in the <unk> anthology series Blood 's Wild in 2007 , as the protagonist on the murder mystery A Beautiful Guy in 2009 , and as the love interest for the titular character on Queen 's College in 2012 . Boulter has also played the supporting roles of Richard VIII and Queen Elizabeth in various media including television , film , and video games . He won the Academy Award for Best Actor for his performance in King George V. <eos> Born in Carlisle , Lancashire , Boulter grew up in Manchester . He has been involved in theatre since he was eight years old . His only film role came in 2000 , when he played Jack Leppard in Harry Potter and the Goblet of Fire . A second appearance came in 2001 , when he played the young Severus Snape in Romeo and Juliet . He has worked on television shows including , Steven Soderbergh 's From Russia , with Love , Watchmen , Dawson 's Creek , Ealing Studios and the British Channel 7 documentary series Ghost Hunters . <eos> Boulter has made four appearances on television : seven as Colin Rudge in Star Trek : Voyager ( 1995 - 1997 ) ; 13 as Ian Frazier in The Sarah Jane Adventures ( 1997 - 1999 ) ; and 16 as Scott Reid in Michael Bay 's Robin Hood : How the Legend Was Won ( 1999 ) . He was nominated for a Laurence Olivier Award in 2001 . 
<eos> <eos> = = Early life = = <eos> <eos> Boulter was born on 12 April 1979 in Carlisle , Lancashire , England . His father is a retired pilot . He attended Elgin Grammar School , where he earned an academic degree . After leaving school , he worked in retail at an engineering firm in Manchester . <eos> Boulter attended <unk> High School , Carlisle before transferring to King 's College , Cambridge . During this time , he became involved in theatre and became a student at Queen 's College , Cambridge . | | ScaleGrad | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . In 2004 he starred as the character Sherlock Holmes in the BBC medical drama Holby City . Boulter made his screen debut with the comedy series Chitty Chitty Bang Bang , which was broadcast in 2006 . He then played Lord Stanley in The Lord of the Rings trilogy and in Pirates of the Caribbean : At World 's End . In 2007 he portrayed David Marmion in the film adaptation of A Midsummer Night 's Dream . <eos> He appeared in the British film Downton Abbey ( 2008 ) , which won him a Golden Globe Award for Best Actor . He also starred in Steven Spielberg 's fantasy film , The Wrestler , as the title character . <eos> In 2010 , Boulter portrayed Leonardo DiCaprio in the directorial debut of Woody Allen 's The Wrestler . The film tells the story of two men who live in Los Angeles , California , who have been feuding over money and power . The film received mixed reviews from critics . <eos> <eos> = = Early life = = <eos> <eos> Boulter was born in Leicester , England , on 3 October 1963 . His father , John , is a chemical engineer and designer . His mother , Mary ( née <unk> ) , is a member of the Scottish Episcopal Church . He grew up in Loughborough , Leicester . He was educated at Eton College and then St Thomas 's School , Northamptonshire . His older brother , Liam , plays football for Leicester City . <eos> Boulter started playing football when he was four years old . After a few years , he joined Brentford Town , where he played alongside Ronnie Brown . Boulter enjoyed the game and liked the fact that his father had taught him how to play football . When he turned twelve , he moved to Leicester City , but he left the club after one season because of disciplinary problems . Boulter did not attend Brentford 's youth team , but took up playing football for them . During his time at Brentford , he played for several clubs including Chesterfield , Rochdale , Oldham Athletic , St James ' Park , Scarborough and Lewes . He made his debut for the club aged 15 in 1971 , an | | SELFCONT | <eos> = Robert Boulter = <eos> <eos> Robert Boulter is an English film , television and theatre actor . He had a guest @-@ starring role on the television series The Bill in 2000 . He also appeared in a 2000 episode of Syfy 's Geneva Live . <eos> Boulter is an accomplished box @-@ office actor and one of the best known box @-@ office draws in the history of British television . Boulter appeared in over 20 films and received many awards , including four Academy Awards , including Best Actor and Best Supporting Actor , and the BAFTA Award for Best British Actor . He was nominated for six other BAFTAs , winning three , for his work on the television series and the 1997 film . He starred in The Bill in 2001 and again in 2002 . 
In 2005 , he appeared in The Gleason Room , the 2005 science fiction film about rediscovery of woolly alien relics , and in the 2006 biographical drama Brand New Eyes . In 2010 , he starred in the stage production of Minor Threat and the 2007 psychological thriller Victoria 's Secret . <eos> Boulter 's stage and film career began with his performance in the 1997 romantic comedy Hamlet . In 2000 , he was cast as Jonathan Simeone in the German @-@ language dramatisation of French novelist Raymond Lebowski 's epic play , The Professionals . He took on the role of " Troy " , an obsessive person who attempts to prove himself to a courtiers . Although he enjoyed playing Troy , he took " enormous risks " , in the words of theatre critic Graham McCann , who wrote that " there was nothing to lose in playing a man like Troy . " He co @-@ starred in The Professionals with Julianne Moore and Kim Novak . He portrayed the criminal Tammi Martineau in the 2004 biographical film Asterisk and appeared in several films and television shows . In 2005 , he starred as Garth Snow in the Fox crime drama Dangerous Liaisons . <eos> Boulter is known for his film work in Hungary and abroad . He has also worked with Brandon Thomas and Sacha Baron Cohen . In 2011 , he was nominated for a Laurence Olivier Award for Best Actor , with Olivier in the role of General Herculaneum . In 2012 , he starred in The Phantom of the Opera , which opened at the BBC2 Leicester Square Theatre , with much of the stage cast from his earlier work | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And Section 5. ✓ B1. Did you cite the creators of artifacts you used? Section 3 and Section 5. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 3 and Section 5. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3 and Section 5. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5. 
## C ✓ **Did You Run Computational Experiments?** Section 3 And Section 5.
✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 3, Section 5, Appendix Section B
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3, Section 5, Appendix Section B
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section B
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
wen-etal-2023-digging
Digging out Discrimination Information from Generated Samples for Robust Visual Question Answering
https://aclanthology.org/2023.findings-acl.432
Visual Question Answering (VQA) aims to answer a textual question based on a given image. Nevertheless, recent studies have shown that VQA models tend to capture biases to answer questions instead of using reasoning, resulting in poor generalisation. To alleviate this issue, some existing methods consider the natural distribution of the data and construct samples to balance the dataset, achieving remarkable performance. However, these methods have several limitations: 1) they rely on additional annotations, 2) the generated samples may be inaccurate, e.g., assigned wrong answers, and 3) they ignore the value of positive samples. In this paper, we propose a method to Dig out Discrimination information from Generated samples (DDG) to address the above limitations. Specifically, we first construct positive and negative samples in the vision and language modalities without using additional annotations. Then, we introduce a knowledge distillation mechanism in which the positive samples guide the learning of the original samples. Moreover, we use the negative samples to encourage the VQA models to attend to both the vision and language modalities. Experimental results on the VQA-CP v2 and VQA v2 datasets show the effectiveness of our DDG.
# Digging Out Discrimination Information From Generated Samples For Robust Visual Question Answering Zhiquan Wen1,2, Yaowei Wang2∗, Mingkui Tan1,3∗ , Qingyao Wu1**, Qi Wu**4 1School of Software Engineering, South China University of Technology, China 2PengCheng Laboratory, China 3Key Laboratory of Big Data and Intelligent Robot (South China University of Technology), Ministry of Education 4School of Computer Science, University of Adelaide sewenzhiquan@mail.scut.edu.cn, wangyw@pcl.ac.cn ## Abstract Visual Question Answering (VQA) aims to answer a textual question based on a given image. Nevertheless, recent studies have shown that VQA models tend to capture the biases to answer the question, instead of using the reasoning ability, resulting in poor generalisation ability. To alleviate the issue, some existing methods consider the natural distribution of the data, and construct samples to balance the dataset, achieving remarkable performance. However, these methods may encounter some limitations: 1) rely on additional annotations, 2) the generated samples may be inaccurate, e.g., assigned wrong answers, and 3) ignore the power of positive samples. In this paper, we propose a method to Dig out Discrimination information from Generated samples (DDG) to address the above limitations. Specifically, we first construct positive and negative samples in vision and language modalities, without using additional annotations. Then, we introduce a knowledge distillation mechanism to promote the learning of the original samples by the positive samples. Moreover, we impel the VQA models to focus on vision and language modalities using the negative samples. Experimental results on the VQA-CP v2 and VQA v2 datasets show the effectiveness of our DDG. ## 1 Introduction With the vigorous development of computer vision and natural language processing fields, it has promoted the vision-and-language (Gu et al., 2022; Wen et al., 2023b) field to take a forward step. As a typical task of the vision-and-language field, Visual Question Answering (VQA) (Anderson et al., 2018; Cadène et al., 2019a) requires an agent to fully comprehend the information of the questions and images, and then correctly answer the textual question according to the image. Although recent advances (Cadène et al., 2019a) have achieved impressive performance on the benchmark datasets ∗Corresponding author (*e.g.,* VQA v2 (Goyal et al., 2017)), numerous studies (Agrawal et al., 2018; Kafle and Kanan, 2017) have shown that some VQA models tend to excessively rely on the superficial correlations (*i.e.,* biases) between the questions and answers, instead of adopting reasoning ability to answer the questions. For example, the VQA models can easily answer "2" and "tennis" for the questions "How many ... " and "What sports ... ", respectively, would obtain higher accuracy, since the corresponding answers "2" and "tennis" are occupied the most in the dataset. However, memorising the biases to answer the questions would signify flawed reasoning ability, resulting in poor generalisation ability. To mitigate the bias issues, many methods have been proposed, which can be roughly categorised into three types: 1) enhance visual attention (Selvaraju et al., 2019; Wu and Mooney, 2019), 2) directly weaken the biases (Cadène et al., 2019b; Niu et al., 2021), and 3) balance the dataset (Chen et al., 2020; Zhu et al., 2020). 
Previous studies have shown the methods that balance the dataset usually outperform other types of methods, since they dig out the natural distribution of the data, and then devise a suitable strategy to overcome the biases. Specifically, CSS (Chen et al., 2020) and Mutant (Gokhale et al., 2020) methods generate counterfactual samples by masking the critical objects or words in the images and questions, respectively. However, these methods require additional annotations that are hard to obtain. To get rid of the dependence on the additional annotations, MMBS (Si et al., 2022) constructs the positive questions by randomly shuffling the question words or removing the words of question types, which destroys the grammar and semantics of the original questions. Moreover, SimpleAug (Kil et al., 2021) and KDDAug (Chen et al., 2022) build the new samples by re-combining the existing questions and images, which may be difficult to assign correct answers for the generated samples. SSL-VQA (Zhu et al., 2020) and D-VQA (Wen et al., 2021) construct the negative samples by randomly sampling the images or questions in a mini-batch data. Nevertheless, these methods consider the negative samples only, but ignoring the generated positive samples would improve the diversity of the dataset and further promote the robustness of the VQA models. To overcome the above issues, we propose a method to Dig out Discrimination information from Generated samples (DDG). As pointed out by (Wen et al., 2021), the bias issues exist in both vision and language modalities, we thus construct positive and negative samples in vision and language modalities and devise corresponding training objectives to achieve unbiased learning. Concretely, we feed the samples to the UpDn (Anderson et al., 2018) model pre-trained on the VQA-CP (Agrawal et al., 2018) v2 training set, and select k objects based on the top-k image attention weights of the UpDn as the positive images. The positive questions can be constructed by using the translate-and-back-translate mechanism, *e.g.,* English → French → English. We then combine the positive images and positive questions with original questions and original images, respectively, as positive image and question samples. Based on the positive samples, we adopt a knowledge distillation mechanism (Hinton et al., 2015) to help the learning of the original samples. Moreover, inspired by (Wen et al., 2021), we construct mismatched image-question pairs as negative samples. Generally speaking, one cannot answer the question correctly given the mismatched image-question pairs, since missing the supporting modality information. To promote the VQA models to focus on the vision and language modalities, we devise a training objective that aims to minimise the likelihood of predicting the ground-truth answers of the original samples when given the corresponding negative samples. Besides, we further introduce the corresponding positive samples to assist the training. Based on the above debiased techniques, our DDG achieves impressive performance on the VQA-CP v2 (Agrawal et al., 2018) and VQA v2 (Goyal et al., 2017) datasets, which demonstrates the effectiveness of our DDG. Our contributions can be summarised as follows: 1) We devise a novel positive image samples generation strategy that uses the image attention weights of the pre-trained UpDn model to guide the selection of the target objects. 
2) We introduce the knowledge distillation mechanism to promote the learning of the original samples by the positive samples. 3) We adopt the positive and negative samples to impel the VQA models to focus on the vision and language modalities, to mitigate the biases. ## 2 Related Work 2.1 Overcoming Biases In Vqa Recently, researchers have proposed vast debiased techniques (Selvaraju et al., 2019; Niu et al., 2021; Zhu et al., 2020; Wen et al., 2023a) to alleviate the bias issues in VQA, which can be roughly categorised into three types: 1) enhance the visual attention, 2) directly weaken the biases, 3) balance the dataset. Methods that enhance visual attention. These methods seek to adopt human-annotated information to strengthen the visual attention of the VQA models. Specifically, Selvaraju *et al.* (Selvaraju et al., 2019) aligned the important image regions identified based on the gradient with the human attention maps to enhance the visual attention in the VQA models. Wu *et al.* (Wu and Mooney, 2019) introduced a self-critical training objective that matches the ground-truth answer with the most important image region recognised by human explanations. However, these methods require human annotations that are hard to obtain. Methods that weaken the biases. Ramakrishnan *et al.* (Ramakrishnan et al., 2018) adopted adversarial learning to inhibit the VQA models capture the language biases. Inspired by (Ramakrishnan et al., 2018), Cadene *et al.* (Cadène et al., 2019b) devised a question-only model to generate weight to re-weight the samples. Moreover, Han et al. (Han et al., 2021) forced the biased models to capture different types of biases, and removed them step by step. Different from the above, Niu *et al.* (Niu et al., 2021; Niu and Zhang, 2021) introduced the idea of cause-effect to help alleviate the biases. Nevertheless, these methods introduce additional parameters in training or inference phrases. Methods that balance the dataset. CSS (Chen et al., 2020) and Mutant (Gokhale et al., 2020) methods generated massive counterfactual samples by masking the critical objects and words in the images and questions, respectively. However, these methods require additional annotations to assign the answers for the generated samples. To get rid of the dependence on the annotations, KDDAug (Chen et al., 2022) and SimpleAug (Kil et al., 2021) constructed the samples by re-composing the existing questions and images, which however, is hard to assign the correct answers for the generated samples. SSL-VQA (Zhu et al., 2020) and D-VQA (Wen et al., 2021) constructed the negative samples by randomly sampling the images or questions in a mini-batch data. Nevertheless, these methods ignored the positive samples could improve the diversity of the dataset, which was helpful for improving the robustness of the VQA models. Moreover, Si *et al.* (Si et al., 2022) constructed the positive question samples by randomly shuffling the words or removing the words of question types, which destroys the semantics of the original questions. Different from the above methods, we seek to construct positive samples and negative samples in both vision and language modalities, and devise corresponding debiased strategies to achieve unbiased learning. ## 2.2 Knowledge Distillation Knowledge Distillation (KD) (Hinton et al., 2015) is a universal model compression method that seeks to train a small student model guided by a large teacher model. 
Due to the effectiveness of KD, the idea has been applied to other tasks, *e.g.,* longtail classification (He et al., 2021; Xiang et al., 2020), object detection (Chen et al., 2017; Wang et al., 2019), and video captioning (Pan et al., 2020; Zhang et al., 2020). Recently, some debiased VQA methods (Niu and Zhang, 2021; Chen et al., 2022) introduced KD to alleviate the bias issues. Specifically, Niu *et al.* (Niu and Zhang, 2021) devised two teachers (*i.e.,* ID-teacher and OOD-teacher) to generate "soft" labels to guide the training of the student model (*i.e.,* the baseline model) with the KD mechanism. Inspired by IntroD, Chen et al. (Chen et al., 2022) adopted a multi-teacher KD mechanism to help generate robust pseudo labels for all newly composed image-question pairs. In our DDG, we seek to improve the reasoning ability of the VQA models with the help of the generated positive samples, via the KD mechanism. ## 3 **Digging Out Discrimination Information** From Generated Samples As shown by (Agrawal et al., 2018; Kafle and Kanan, 2017), VQA models tend to capture the biases in a dataset to answer questions, instead of adopting the reasoning ability, resulting in poor generalisation ability. Moreover, bias issues exist in both vision and language modalities (Wen et al., 2021). To address the above issues, we seek to construct both positive and negative samples in vision and language modalities, and devise corresponding debiased strategies to achieve unbiased learning. The overall framework is shown in Figure. 1. ## 3.1 Preliminary Visual question answering (VQA) requires an agent to answer a textual question given a corresponding image. Traditional VQA methods (Anderson et al., 2018; Kim et al., 2018; Ben-younes et al., 2019; Cadène et al., 2019a) regard the VQA task as a multi-class classification problem, where each class corresponds to a unique answer. To be specific, given a VQA dataset D = {(vi, qi, ai)}Ni=1 with N samples, where vi ∈ V (image set), qi ∈ Q (question set) are the i-th sample in D, and ai ∈ A (answer set) is a corresponding ground-truth answer, VQA methods seek to learn a multimodal mapping: V×Q→ [0, 1]|A| to generate an answer distribution over the answer set A. Generally speaking, most VQA models usually contain four parts, namely, vision feature encoder ev(·), language feature encoder eq(·), multimodal feature fusion module f(·, ·), and classifier c(·). These modules can be formed as a traditional VQA model: $$P({\mathcal{A}}|v_{i},q_{i})=c(f(e_{v}(v_{i}),e_{q}(q_{i}))).\quad\quad(1)$$ Formally, since regarding the VQA task as a multiclass classification problem, the VQA models can be optimised by a binary cross-entropy loss Lvqa, which can be formulated as: $$\begin{array}{c}{{{\mathcal L}_{v q a}=-\frac{1}{N}\sum_{i=1}^{N}{\bf a}_{i}{\log(\sigma(P({\mathcal A}|v_{i},q_{i})))}+}}\\ {{(1-{\bf a}_{i}){\log(1-\sigma(P({\mathcal A}|v_{i},q_{i})))},}}\end{array}\tag{2}$$ where σ denotes the sigmoid activation function, and ai is the target score obtained based on the answer ai that humans annotated for (vi, qi). ## 3.2 Sample Generation Our method aims to adopt the generated samples to achieve unbiased learning. Hence we present how to generate the positive and negative samples at first. As pointed out by (Wen et al., 2021), biases exist in both language and vision modalities. To overcome the bias issue, we seek to generate positive and negative samples regarding the vision and language modalities for each original sample, to assist the training process. 
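As a concrete reference point for the sample-generation and training objectives that follow, the base formulation in Eqs. (1)-(2) can be sketched in a few lines of PyTorch; the module names, hidden sizes, and the 3,129-answer vocabulary below are our own illustrative assumptions, not details taken from the released implementation.

```python
import torch.nn as nn
import torch.nn.functional as F


class SimpleVQAModel(nn.Module):
    """Generic VQA model of Eq. (1): c(f(e_v(v), e_q(q))).

    The encoders below are placeholders; an UpDn-style model would use
    Faster R-CNN object features and a GRU question encoder with top-down
    attention instead of the mean pooling used here.
    """

    def __init__(self, obj_dim=2048, vocab_size=20000, hid=1024, num_answers=3129):
        super().__init__()
        self.e_v = nn.Linear(obj_dim, hid)             # vision feature encoder e_v
        self.embed = nn.Embedding(vocab_size, 300)
        self.e_q = nn.GRU(300, hid, batch_first=True)  # language feature encoder e_q
        self.classifier = nn.Linear(hid, num_answers)  # classifier c over the answer set A

    def forward(self, obj_feats, q_tokens):
        # obj_feats: [B, K, obj_dim] object features, q_tokens: [B, T] token ids
        v = self.e_v(obj_feats).mean(dim=1)            # pool over the K objects
        _, h = self.e_q(self.embed(q_tokens))
        joint = v * h.squeeze(0)                       # fusion module f: element-wise product
        return self.classifier(joint)                  # answer logits for P(A | v, q)


def vqa_bce_loss(logits, target_scores):
    """Eq. (2): multi-label binary cross-entropy against the soft target
    scores a_i derived from the human-annotated answers."""
    return F.binary_cross_entropy_with_logits(logits, target_scores)
```

All the additional objectives introduced below operate on answer predictions of this form.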
![3_image_0.png](3_image_0.png) Positive samples generation. To mitigate the bias issue over vision and language modalities in VQA, we build two types of positive samples, *i.e.,* a positive question sample, and a positive image sample. Specifically, to generate the **positive image samples**, we seek to draw support from the image attention weights in a pre-trained baseline VQA model. We have empirically found that although the baseline models (*e.g.,* UpDn (Anderson et al., 2018)) achieve unsatisfactory performance in the out-of-distributions (OOD) test set (*e.g.,* VQA-CP v2 (Agrawal et al., 2018) dataset), they still obtain promising performance in the independent and identically distributed (IID) dataset (*e.g.,* VQA v2 dataset (Goyal et al., 2017)). In other words, the baseline models can identify the target objects in the images referred to in the questions to accomplish answering during the training process, regardless of whether capturing the biases. Hence, the image attention weights of the pre-trained UpDn (Anderson et al., 2018) model can help find target objects as positive image samples, which can exclude the background information of the images. Given the sample (vi, qi) from the VQA- CP v2 training set, we first feed it to the UpDn model pretrained on the VQA-CP v2 training set and would obtain the image attention weights of the UpDn model regarding the objects in image vi. Note that we select k objects based on the top-k image attention weights as the positive image samples (v+i , qi), where k is a hyper-parameters. To generate the **positive question samples**, previous methods (Si et al., 2022) seek to adopt some data augmentation methods to expand the data, *e.g.,* randomly shuffle the question words or remove question category words. However, these methods would severely destroy the grammar and semantics of the original question, resulting in changing the semantic information of the questions. To mitigate this issue, inspired by (Tang et al., 2020), we adopt the translate-and-back-translate mechanism to generate the positive question samples. Specifically, we first use pre-trained English-toFrench and English-to-German translation models to translate the original question to French and German, respectively. Then we use corresponding pretrained back-translation models to translate them back into English. 1 Moreover, we further adopt a pre-trained sentence similarity model to choose a back-translated question sample that has the highest similarity score with the original question as the positive question sample. Note that for some sim1All pre-trained translation models are obtained from the Hugging Face repository. ple questions, they would still keep the same even feeding them to the translate-and-back-translate process. To generate positive question samples for these questions, we substitute the words in the question with synonyms based on the pre-trained synonym word substitution model. 2 In this way, we obtain the positive question samples (vi, q+i ). Based on the above, we would obtain two types of positive samples (i.e., (v+i , qi) and (vi, q+i )) for each sample (vi, qi), in which the positive image samples have foreground information in the image and the positive question samples are semantic equivalent to the original question. Negative samples generation. Inspired by (Wen et al., 2021), we construct the negative samples over language and vision modalities by randomly sampling one question and one image in a minibatch data for each sample. 
Specifically, given a mini-batch data {(vb, qb)}Bb=1, for each sample (vi, qi), we randomly sample one image v−i and one question q−i from {(vb, qb)}Bb=1 to form the negative samples, namely, negative question sample (vi, q−i ) and negative image sample (v−i , qi). ## 3.3 Generated Samples Driven Robust Vqa Positive samples driven robust VQA. We attempt to achieve robust VQA with the help of the generated positive samples. Specifically, given an original sample (vi, qi) and its counterpart positive samples (v+i , qi) and (vi, q+i ), we first feed them into the VQA models to obtain the predictions P(A|vi, qi), P(A|v+i , qi), and P(A|vi, q+i ). Generally speaking, the ensemble predictions usually perform better than the predictions before the ensemble. We thus adopt a simple ensemble strategy (*i.e.,* averaging these predictions) to obtain ensemble predictions Pens = (P(A|vi, qi) + P(A|v+i , qi) + P(A|vi, q+i ))/3. One intuitive way to make the VQA models achieve better performance with the help of the positive samples is to adopt a knowledge distillation mechanism (Hinton et al., 2015). Concretely, we regard the ensemble prediction Pens and the original prediction P(A|vi, qi) as a teacher and a student, respectively, and then introduce a Kullback-Leibler (KL) Divergence Ldis as the objective to optimise the VQA models, which can be formulated as: $$\mathcal{L}_{dis}=\sum_{i=1}^{N}P_{ens}\log\frac{P_{ens}}{P(\mathcal{A}|v_{i},q_{i})}.\tag{3}$$ By minimising the KL divergence, the VQA models can extract discrimination information from the positive samples to help better answer the original questions qi correctly based on the images vi. To guarantee the teacher (*i.e.,* the ensemble prediction Pens) performs better than the student (*i.e.,* the original prediction P(A|vi, qi)), we still use the binary cross-entropy loss Lens on Pens to further optimise the VQA models. Negative samples driven robust VQA. Besides adopting the positive samples to assist the training process, we also introduce the debiased strategy on negative samples to alleviate the bias issues. As shown by (Agrawal et al., 2018), the bias issue usually denotes the VQA models tend to capture the superficial correlations between one modality and the answers to make a prediction on the questions. To mitigate this issue, one direct solution is to improve the attention on both language and vision modalities information when the VQA models answer the questions. We thus consider adopting the negative samples to achieve this aim. Intuitively, given a mismatched image-question pair, the VQA models even the human being cannot make a correct prediction. Drawing from this insight, when given original samples and the counterpart negative samples, we can alleviate the biases by giving contrary training objectives to the negative samples. This encourages the VQA models to answer the questions by paying more attention to the information of each modality. Concretely, inspired by (Wen et al., 2021), given an original sample (vi, qi, ai) and its counterpart negative samples (v−i , qi) and (vi, q−i ), the VQA models cannot answer correctly when feeding the negative samples, which can be achieved by minimising the possibility of predicting the ground-truth answer: $${\cal L}_{neg}=\delta(P({\cal A}|v_{i}^{-},q_{i}))[x]+\delta(P({\cal A}|v_{i},q_{i}^{-}))[x],\tag{4}$$ (4) where x is the index of ground-truth answer ai in the answer set A, and δ is the softmax activation function. 
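To make the interplay between the positive and negative samples concrete, the following minimal PyTorch-style sketch implements L_ens, L_dis (Eq. 3), and L_neg (Eq. 4). The in-batch shuffling helper, the model interface, the use of argmax over the soft target scores to recover the ground-truth index x, and the detached teacher are our assumptions for illustration rather than the authors' released code.

```python
import torch
import torch.nn.functional as F


def build_inbatch_negatives(v, q):
    """Mismatched image-question pairs from one mini-batch (Sec. 3.2).

    The paper randomly samples another image / question from the same batch;
    rolling by one position is a simple variant that guarantees every sample
    is paired with a different sample's inputs.
    """
    return torch.roll(v, shifts=1, dims=0), torch.roll(q, shifts=1, dims=0)


def ddg_losses(model, v, q, v_pos, q_pos, v_neg, q_neg, target_scores):
    """Sketch of L_ens, L_dis (Eq. 3), and L_neg (Eq. 4)."""
    p_ori = model(v, q)          # logits for the original pair (v, q)
    p_vpos = model(v_pos, q)     # positive image sample (v+, q)
    p_qpos = model(v, q_pos)     # positive question sample (v, q+)

    # Ensemble teacher: average of the three predictions, trained with BCE (L_ens).
    p_ens = (p_ori + p_vpos + p_qpos) / 3.0
    l_ens = F.binary_cross_entropy_with_logits(p_ens, target_scores)

    # L_dis (Eq. 3): KL(P_ens || P(A|v, q)); detaching the teacher (our choice)
    # lets the gradient flow only into the student, i.e., the original prediction.
    teacher = F.softmax(p_ens.detach(), dim=-1)
    student_log = F.log_softmax(p_ori, dim=-1)
    l_dis = F.kl_div(student_log, teacher, reduction="batchmean")

    # L_neg (Eq. 4): probability assigned to the ground-truth answer x when the
    # model sees mismatched pairs; minimising it discourages one-modality shortcuts.
    gt_idx = target_scores.argmax(dim=-1, keepdim=True)  # index x (argmax of soft scores)
    p_vneg = F.softmax(model(v_neg, q), dim=-1).gather(1, gt_idx)
    p_qneg = F.softmax(model(v, q_neg), dim=-1).gather(1, gt_idx)
    l_neg = (p_vneg + p_qneg).mean()

    return l_ens, l_dis, l_neg
```

Detaching the ensemble teacher keeps the distillation signal one-directional, in line with the usual teacher-student setup of knowledge distillation (Hinton et al., 2015), although the paper does not state this detail explicitly.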
Minimising the training objective Lneg encourages the VQA models not to give the ground-truth answer when feeding the mismatched image-question pairs. Thus, the VQA models are able to consider both image and question information before making a prediction, which implicitly alleviates the bias issue. Moreover, to further enhance the attention of VQA models towards both vision and language modalities, we introduce positive samples into the training process. Specifically, given positive image samples (v+i , qi, ai) and negative image samples (v−i , qi), when feeding them to the VQA models, one hopes the VQA models can answer correctly with high confidence on the positive samples, while having low prediction confidence to the groundtruth answer with the negative samples. This can be formulated as: $$\operatorname*{max}\;\delta(P({\mathcal{A}}|v_{i}^{+},q_{i}))[x]-\delta(P({\mathcal{A}}|v_{i}^{-},q_{i}))[x].$$ Maximising the objective encourages the VQA models to make accurate predictions for the matched image-question pairs, while discouraging the models from generating the ground-truth answer when provided with negative image samples. This impels the VQA models to allocate more attention to the vision modality. By leveraging the monotonicity property of the logarithmic function log, we convert the maximisation problem into an equivalent minimisation problem. This transformation can be mathematically formulated as follows: $$\begin{array}{c}{{{\mathcal L}_{i m g}=-\log(\sigma(\delta(P({\mathcal A}|v_{i}^{+},q_{i}))[x]-}}\\ {{\delta(P({\mathcal A}|v_{i}^{-},q_{i}))[x]))}}\end{array}\tag{5}$$ Moreover, with regard to the negative samples, the prediction scores associated with the groundtruth answer of the corresponding positive samples can serve as an indicator of the extent to which VQA models capture biases. The higher the prediction score δ(P(A|v−i , qi))[x]), the greater the degree of bias it represents. Therefore, it should be subject to a higher penalty in the loss Limg. Inspired by the focal loss (Lin et al., 2017), we consider the prediction score δ(P(A|v−i , qi))[x]) as a measure of the degree of bias, and thus the loss L*weight* img can be reformulated as: $$\mathcal{L}_{img}^{weight}=-\delta(P(\mathcal{A}|v_{i}^{-},q_{i}))[x]*\mathcal{L}_{img}\tag{6}$$ $\mathcal{L}_{img}^{weight}=-\delta(P(\mathcal{A}|v_{i}^{-},q_{i}))[x]*\mathcal{L}_{img}$ (6) We would obtain L*weight* que in the same way. Thus the weighted loss is formulated as follows: $${\mathcal{L}}^{w e i g h t}={\mathcal{L}}_{i m g}^{w e i g h t}+{\mathcal{L}}_{q u e}^{w e i g h t}.$$ que . (7) ## 3.4 Overall Training Objective In total, our overall training objective can be formulated as: $$+\,\mathbb{L}n e g+\lambda\,\ast\,\mathbb{L}\qquad\mathbb{L}\quad,\ (\infty)$$ L = Lvqa+Lens+Ldis+Lneg+λ∗L*weight*, (8) where λ is a hyper-parameter. ## 4 Experiments 4.1 Datasets We evaluate our DDG on the OOD dataset VQACP v2 (Agrawal et al., 2018) and IID dataset VQA v2 (Goyal et al., 2017) validation set based on the standard evaluation metric (Antol et al., 2015). Due to the page limitation, we put the implementation details and compared methods into the Appendix. ## 4.2 Quantitative Results We report the experimental results on the VQA-CP v2 and VQA v2 datasets in Table 1. From these results, we have the following observations: 1) On the whole, the methods that balance the datasets outperform the other two types of methods *i.e.,* enhance visual attention and directly weaken the biases. 
This demonstrates that alleviating the biases by paying more attention to the natural distribution of the data would obtain higher performance. 2) Our DDG outperforms most compared methods. Specifically, our DDG surpasses SCR (Wu and Mooney, 2019), GGE-DQ (Han et al., 2021), SSL-VQA (Zhu et al., 2020), and KDDAug (Chen et al., 2022) by approximately 12%, 3%, 3%, and 1%, respectively. These results demonstrate the effectiveness of our DDG. 3) Although our method performs slightly worse than the Mutant (Gokhale et al., 2020) and D-VQA (Wen et al., 2021), our DDG achieves higher performance on the VQA v2 dataset. Moreover, Mutant constructed the counterfactual samples highly relying on the additional annotations, while our method build the samples without introducing additional annotations. Meanwhile, compared to the D-VQA method, our method performs better when the data is limited, which can be shown in Table 2. These results further demonstrate the effectiveness of our DDG. Benefiting from the training process based on the positive samples, our DDG performs better than all the compared methods on the VQA v2 dataset. Specifically, our DDG outperforms SSLVQA (Zhu et al., 2020) and D-VQA (Wen et al., 2021) by around 1.8% and 0.6%, respectively, which demonstrates our DDG is able to improve the model performance on both IID (*i.e.,* VQA v2 dataset) and OOD (*i.e.,* VQA-CP v2) datasets, further implying the superiority of our DDG. $$\left(7\right)$$ ## 4.3 Qualitative Results. To further demonstrate the effectiveness of our DDG on alleviating the biases, we provide the 6915 | VQA-CP v2 test (%) | VQA v2 val (%) | | | | | | | | | |------------------------------------|-------------------------------|-------|--------|-------|-------|-------|--------|-------|-------| | Case | Model | All | Yes/No | Num | Other | All | Yes/No | Num | Other | | SAN (Yang et al., 2016) | 24.96 | 38.35 | 11.14 | 21.74 | 52.41 | 70.06 | 39.28 | 47.84 | | | - | GVQA (Agrawal et al., 2018) | 31.30 | 57.99 | 13.68 | 22.14 | 48.24 | 72.03 | 31.17 | 34.65 | | UpDn (Anderson et al., 2018) | 39.74 | 42.27 | 11.93 | 46.05 | 63.48 | 81.18 | 42.14 | 55.66 | | | AttAlign (Selvaraju et al., 2019) | 39.37 | 43.02 | 11.89 | 45.00 | 63.24 | 80.99 | 42.55 | 55.22 | | | I | HINT (Selvaraju et al., 2019) | 46.73 | 67.27 | 10.61 | 45.88 | 63.38 | 81.18 | 42.99 | 55.56 | | SCR (Wu and Mooney, 2019) | 48.47 | 70.41 | 10.42 | 47.29 | 62.30 | 77.40 | 40.90 | 56.50 | | | AdvReg (Ramakrishnan et al., 2018) | 41.17 | 65.49 | 15.48 | 35.48 | 62.75 | 79.84 | 42.35 | 55.16 | | | RUBi (Cadène et al., 2019b) | 44.23 | 67.05 | 17.48 | 39.61 | - | - | - | - | | | Re-Scaling (Guo et al., 2022) | 47.09 | 68.42 | 21.71 | 42.88 | 55.50 | 64.22 | 39.61 | 53.09 | | | DLR (Jing et al., 2020) | 48.87 | 70.99 | 18.72 | 45.57 | 57.96 | 76.82 | 39.33 | 48.54 | | | VGQE (KV and Mittal, 2020) | 48.75 | - | - | - | 64.04 | - | - | - | | | LMH (Clark et al., 2019) | 52.01 | 72.58 | 31.12 | 46.97 | 56.35 | 65.06 | 37.63 | 54.69 | | | IntroD (Niu and Zhang, 2021) | 51.31 | 71.39 | 27.13 | 47.41 | 62.05 | 77.65 | 40.25 | 55.97 | | | CF-VQA (Niu et al., 2021) | 53.55 | 91.15 | 13.03 | 44.97 | 63.54 | 82.51 | 43.96 | 54.30 | | | RMFE (Gat et al., 2020) | 54.55 | 74.03 | 49.16 | 45.82 | - | - | - | - | | | CKCL (Pan et al., 2022) | 55.05 | 90.33 | 18.99 | 46.46 | 62.55 | 79.17 | 41.94 | 55.38 | | | LPF (Liang et al., 2021) | 55.34 | 88.61 | 23.78 | 46.57 | 55.01 | 64.87 | 37.45 | 52.08 | | | GGE-DQ (Han et al., 2021) | 57.32 | 87.04 | 27.75 | 49.59 | 59.11 | 73.27 | 39.99 | 
54.39 | | | D-VQA (Wen et al., 2021) | 61.91 | 88.93 | 52.32 | 50.39 | 64.96 | 82.18 | 44.05 | 57.54 | | | II | CSS (Chen et al., 2020) | 58.95 | 84.37 | 49.42 | 48.21 | 59.91 | 73.25 | 39.77 | 55.11 | | CSS+CL (Liang et al., 2020) | 59.18 | 86.99 | 49.89 | 47.16 | 57.29 | 67.27 | 38.40 | 54.71 | | | CSS+ (Chen et al., 2021) | 59.54 | 83.37 | 52.57 | 48.97 | 59.96 | 73.69 | 40.18 | 54.77 | | | ECD (Kolling et al., 2022) | 59.92 | 83.23 | 52.29 | 49.71 | 57.38 | 69.06 | 35.74 | 54.25 | | | Mutant (Gokhale et al., 2020) | 61.72 | 88.90 | 49.68 | 50.78 | 62.56 | 82.07 | 42.52 | 53.28 | | | III | CVL (Abbasnejad et al., 2020) | 42.12 | 45.72 | 12.45 | 48.34 | - | - | - | - | | Unshuffling (Teney et al., 2021) | 42.39 | 47.72 | 14.43 | 47.24 | 61.08 | 78.32 | 42.16 | 52.71 | | | MMBS (Si et al., 2022) | 48.19 | 65.00 | 14.05 | 48.75 | 63.84 | 79.61 | 44.23 | 57.05 | | | SimpleAug (Kil et al., 2021) | 52.65 | 66.40 | 43.43 | 47.98 | 64.34 | 81.97 | 43.91 | 56.35 | | | RandImg (Teney et al., 2020) | 55.37 | 83.89 | 41.60 | 44.20 | 57.24 | 76.53 | 33.87 | 48.57 | | | SSL-VQA (Zhu et al., 2020) | 57.59 | 86.53 | 29.87 | 50.03 | 63.73 | - | - | - | | | KDDAug (Chen et al., 2022) | 60.24 | 86.13 | 55.08 | 48.08 | 62.86 | 80.55 | 41.05 | 55.18 | | | DDG (Ours) | 61.14 | 88.77 | 49.33 | 49.90 | 65.54 | 82.92 | 44.80 | 57.80 | | | IV | | | | | | | | | | qualitative results on the VQA-CP v2 dataset in Figures. 2 and 3. From the results in Figure 2, UpDn (Anderson et al., 2018) and SSL-VQA (Zhu et al., 2020) fail to find the target objects mentioned in the question within the image, leading to erroneous predictions. In contrast, our DDG demonstrates a remarkable ability to accurately localize the target objects with a high degree of confidence, resulting in precise answers to the posed questions. These visualisation results demonstrate the effectiveness of our DDG. Moreover, in Figure. 3, we provide visualisations of the answer distributions obtained by various approaches for different question types, namely "How many ... ", "Is this ... ", and "How many people are in ... ". From the results, we have the following observations: 1) the training answer distribution is different from that in the test set, which is very challenging. 2) The UpDn model excessively fits the biases in the training set, and thus outputs a similar answer distribution with the training set given the test set, resulting in poor performance. 3) SSL-VQA seeks to alleviate the bias issue, which however is limited. Our DDG is able to alleviate the biases effectively, and thus achieves similar answer distributions with the test set, embodying the better generalisation ability. ## 4.4 Ablation Studies Effect of the scale of the training set. To demonstrate the effectiveness of our method in the datalimited scenario, we conduct experiments on different scales of the training data. Specifically, on the VQA-CP v2 dataset, we manually split the training set into different proportions (*i.e.,* from 20% to 80% of the original training data), while the test set is unchanged. 
From the experimental results in Table 2, ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) | Proportion of Training Set | | | | | | |------------------------------|-------|-------|-------|-------|-------| | Model | 20% | 40% | 60% | 80% | 100% | | UpDn† | 36.22 | 38.90 | 39.40 | 40.61 | 41.53 | | SSL-VQA | 52.71 | 54.42 | 56.83 | 57.31 | 57.59 | | D-VQA | 52.94 | 56.74 | 58.31 | 59.05 | 61.91 | | Ours | 55.74 | 57.42 | 58.99 | 59.69 | 61.14 | we find that our method performs better when the training data is limited, which is more practical and suitable for the real world. Specifically, our method performs better than SSL-VQA (Zhu et al., 2020) and D-VQA (Wen et al., 2021) on any proportions of the training data, especially when only remains 20% of the training data, our DDG outperforms SSL-VQA and D-VQA by around 3%. These results demonstrate the superiority of our DDG in the data-limited scenarios. Effect of each component of our DDG. We conduct ablation studies on the VQA-CP v2 dataset to evaluate each component in our DDG, and show the experimental results in Table 3. From these results, we have the following observations: 1) when introducing the ensemble binary cross-entropy loss Lens with the positive samples, the model performance improves by around 5% compared with the UpDn (Anderson et al., 2018) model (*i.e.,* 41.53% vs. 46.63%), which demonstrates the positive samples are able to assist the training process to alleviate the bias issue. 2) By incorporating the KL loss Ldis, the performance would be further improved (*i.e.,* 46.63% vs. 47.77%), which highlights the ensemble prediction is able to guide the training of the original prediction. 3) Upon introducing Lneg and L*weight*, which leverage negative samples, the performance would improve substantially (*i.e.,* 47.77% vs. 61.14%). This significant enhancement underscores the significance of promoting the attention of VQA models towards both vision and language modalities when answering the questions. | Model | k | VQA-CP v2 test set (%) | | | |---------|--------|--------------------------|-------|-------| | All | Yes/No | Number | Other | | | 3 | 59.73 | 86.98 | 42.36 | 50.22 | | 6 | 60.24 | 88.28 | 41.77 | 50.62 | | 8 | 60.74 | 88.41 | 45.54 | 50.41 | | 10 | 61.14 | 88.77 | 49.33 | 49.90 | | 12 | 60.65 | 88.79 | 45.92 | 49.95 | | 14 | 60.38 | 88.88 | 41.55 | 50.61 | | DDG | | | | | | Lens | Ldis | Lneg | Lweight | VQA-CP v2 (%) | |--------|--------|--------|-----------|-----------------| | √ | 41.53 | | | | | √ | √ | 46.63 | | | | √ | √ | √ | 47.77 | | | √ | √ | √ | √ | 60.85 61.14 | These results further demonstrate the effectiveness of each component in our DDG. Effect of k. k denotes the number of target objects that are selected as the positive samples, which can be referred to in Section 3.2. To demonstrate the effect of the k on the model performance, we conduct experiments on the VQA-CP v2 dataset regarding different k. From the results in Table 4, we have the following observations: 1) with the increase of k (*e.g.,* from 3 to 10), the model performance exhibits a gradual improvement.The results indicate that a higher value of k increases the likelihood that the positive image samples indeed encompass the target objects mentioned in the corresponding questions. 2) Once k exceeds 10, the model performance starts to drop, thereby illustrating that an excessively high value of k introduces extraneous background information that adversely affects the model's performance. 
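For reference, the snippet below gives a minimal sketch of how the top-k selection could be realised, following the description in the appendix: the k objects with the largest image attention weights from a pre-trained UpDn model are kept as the positive image sample, and the remaining objects are masked out. The function name, and the choice to zero out rather than drop non-selected objects, are illustrative assumptions on our part rather than the released implementation.

```python
import torch

def build_positive_image(obj_feats: torch.Tensor,
                         attn_weights: torch.Tensor,
                         k: int = 10) -> torch.Tensor:
    """Keep the top-k attended object features as the positive image sample.

    obj_feats:    [num_objs, feat_dim] region features (e.g., 36 x 2048 from Faster R-CNN).
    attn_weights: [num_objs] image attention weights from a pre-trained UpDn model.
    Returns a tensor of the same shape in which non-selected objects are zeroed,
    so the positive sample can be fed to the backbone unchanged (an assumption).
    """
    topk_idx = attn_weights.topk(k).indices      # indices of the k most-attended objects
    mask = torch.zeros_like(attn_weights)        # [num_objs]
    mask[topk_idx] = 1.0
    return obj_feats * mask.unsqueeze(-1)        # zero out likely background objects

# Toy usage with random tensors standing in for real features.
feats = torch.randn(36, 2048)
attn = torch.rand(36)
v_pos = build_positive_image(feats, attn, k=10)
```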
These results demonstrate that an appropriate k helps to obtain the best performance on our DDG. Evaluation of λ. λ is the weight of the loss L*weight*. We conduct ablation studies about different λ on the model performance, and show the experimental results in Table 5. From the results, the best performance is obtained in our DDG when λ = 0.05, and the performance is drop whenever λ is higher or lower than 0.05. These results demonstrate that a suitable weight of loss L*weight* helps to obtain better performance. Table 5: Effect of different λ. We report the experimental results in terms of Accuracy (%). ## Evaluation Of Different Backbones. Our Ddg is model-agnostic. To demonstrate the effectiveness of our DDG on different backbones (*i.e.,* SAN (Yang et al., 2016) and UpDn (Anderson et al., 2018)), we conduct experiments on the VQA-CP v2 dataset, and show the results in Table 6. From the results, our DDG consistently achieves a substantial improvement in model performance, regardless of which backbone it is. These results further embody the superiority of our DDG. | Model | λ | VQA-CP v2 test set (%) | | | |---------|--------|--------------------------|-------|-------| | All | Yes/No | Number | Other | | | 0.01 | 60.86 | 88.89 | 47.21 | 49.93 | | 0.03 | 60.93 | 88.93 | 47.59 | 49.92 | | 0.05 | 61.14 | 88.77 | 49.33 | 49.90 | | 0.07 | 60.74 | 88.97 | 44.96 | 50.27 | | 0.1 | 60.58 | 88.89 | 43.11 | 50.53 | | DDG | | | | | ## 5 Conclusion | Model | Yes/No | Number | Other | Overall | GapΔ ↑ | |---------|----------|----------|---------|-----------|----------| | SAN† | 38.44 | 12.91 | 46.65 | 39.11 | +16.41 | | + DDG | 85.59 | 24.62 | 48.24 | 55.52 | | | UpDn† | 43.45 | 13.64 | 48.18 | 41.53 | +19.61 | | + DDG | 88.77 | 49.33 | 49.90 | 61.14 | | In this paper, we have proposed a novel method named DDG to alleviate the bias issues in VQA from vision and language modalities. Specifically, we construct both positive and negative samples in vision and language modalities without using additional annotations, in which the positive questions have similar semantics to the original questions, while the positive images contain foreground information. Based on the positive samples, we heuristically introduce the knowledge distillation mechanism to facilitate the training of the original samples through guidance from positive samples. Moreover, we put forth a strategy that encourages VQA models to focus more on the vision and language modalities when answering the questions, aided by the negative samples. Extensive experiments on the VQA-CP v2 and VQA v2 datasets show the effectiveness of our DDG. ## Acknowledgements We would like to thank all the anonymous reviewers for their constructive comments and suggestions. This work was partially supported by STI 2030—Major Projects (2022ZD0208900), Key Realm R&D Program of Guangzhou 202007030007, Program for Guangdong Introducing Innovative and Enterpreneurial Teams 2017ZT07X183, National Natural Science Foundation of China (NSFC) 62072190. ## Limitations The paper focuses on the VQA task only, we will extend our method to other multimodal tasks in future works, *e.g.,* referring expression comprehension (REC). Moreover, although our DDG outperforms most of the state-of-the-art methods, the performance is still a long way from humans. ## Ethics Statement The authors declare that they have no conflict of interest. This paper introduces a novel method named DDG to overcome the bias issue in Visual Question Answering (VQA). 
Mitigating the biases can impel the VQA models to adopt real reasoning ability to answer the questions, instead of using the captured biases. Hence, this research can promote the development of the AI robot, *e.g.,* dialogue robots, and facilitate people's daily lives. The failure of the debiased technique may result in the collapse of the VQA system in environments that have seen less or even never seen. Moreover, we evaluate our DDG on the benchmark out-of-distribution (OOD) dataset and demonstrate the remarkable debiased ability. ## References Ehsan Abbasnejad, Damien Teney, Amin Parvaneh, Javen Shi, and Anton van den Hengel. 2020. Counterfactual vision and language learning. In *IEEE* Conference on Computer Vision and Pattern Recognition (CVPR), pages 10041–10051. Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Don't just assume; look and answer: Overcoming priors for visual question answering. In *IEEE Conference on Computer Vision* and Pattern Recognition (CVPR), pages 4971–4980. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In *IEEE* Conference on Computer Vision and Pattern Recognition (CVPR), pages 6077–6086. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: visual question answering. In *IEEE International Conference on Computer* Vision (ICCV), pages 2425–2433. Hedi Ben-younes, Rémi Cadène, Nicolas Thome, and Matthieu Cord. 2019. BLOCK: bilinear superdiagonal fusion for visual question answering and visual relationship detection. In *AAAI Conference on Artificial Intelligence (AAAI)*, pages 8102–8109. Rémi Cadène, Hedi Ben-younes, Matthieu Cord, and Nicolas Thome. 2019a. MUREL: multimodal relational reasoning for visual question answering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1989–1998. Rémi Cadène, Corentin Dancette, Hedi Ben-younes, Matthieu Cord, and Devi Parikh. 2019b. Rubi: Reducing unimodal biases for visual question answering. In *Conference on Neural Information Processing Systems (NeurIPS)*, pages 839–850. Guobin Chen, Wongun Choi, Xiang Yu, Tony X. Han, and Manmohan Chandraker. 2017. Learning efficient object detection models with knowledge distillation. In *Conference on Neural Information Processing Systems (NeurIPS)*, pages 742–751. Long Chen, Xin Yan, Jun Xiao, Hanwang Zhang, Shiliang Pu, and Yueting Zhuang. 2020. Counterfactual samples synthesizing for robust visual question answering. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 10797– 10806. Long Chen, Yuhang Zheng, Yulei Niu, Hanwang Zhang, and Jun Xiao. 2021. Counterfactual samples synthesizing and training for robust visual question answering. *Arxiv*. Long Chen, Yuhang Zheng, and Jun Xiao. 2022. Rethinking data augmentation for robust visual question answering. In European Conference on Computer Vision (ECCV), volume 13696, pages 95–112. Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In *Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. 
Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. In *Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 4067–4080. Itai Gat, Idan Schwartz, Alexander G. Schwing, and Tamir Hazan. 2020. Removing bias in multi-modal classifiers: Regularization by maximizing functional entropies. In *Conference on Neural Information Processing Systems (NeurIPS)*, volume 33, pages 3197– 3208. Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2020. MUTANT: A training paradigm for out-of-distribution generalization in visual question answering. In *Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 878–892. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6325–6334. Jing Gu, Eliana Stefani, Qi Wu, Jesse Thomason, and Xin Wang. 2022. Vision-and-language navigation: A survey of tasks, methods, and future directions. In *The Association for Computational Linguistics* (ACL), pages 7606–7623. Yangyang Guo, Liqiang Nie, Zhiyong Cheng, Qi Tian, and Min Zhang. 2022. Loss re-scaling VQA: revisiting the language prior problem from a classimbalance view. *IEEE Transactions on Image Processing (TIP)*, 31:227–238. Xinzhe Han, Shuhui Wang, Chi Su, Qingming Huang, and Qi Tian. 2021. Greedy gradient ensemble for robust visual question answering. In *IEEE International Conference on Computer Vision (ICCV)*, pages 1564–1573. Yin-Yin He, Jianxin Wu, and Xiu-Shen Wei. 2021. Distilling virtual examples for long-tailed recognition. In *IEEE International Conference on Computer Vision (ICCV)*, pages 235–244. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. *ArXiv*. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International* Conference on Machine Learning (ICML), pages 448– 456. Chenchen Jing, Yuwei Wu, Xiaoxun Zhang, Yunde Jia, and Qi Wu. 2020. Overcoming language priors in VQA via decomposed linguistic representations. In AAAI Conference on Artificial Intelligence (AAAI), pages 11181–11188. Kushal Kafle and Christopher Kanan. 2017. An analysis of visual question answering algorithms. In *IEEE International Conference on Computer Vision (ICCV)*, pages 1983–1991. Jihyung Kil, Cheng Zhang, Dong Xuan, and Wei-Lun Chao. 2021. Discovering the unknown knowns: Turning implicit knowledge in the dataset into explicit training examples for visual question answering. In *Conference on Empirical Methods in Natural* Language Processing (EMNLP), pages 6346–6361. Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In Conference on Neural Information Processing Systems (NeurIPS), pages 1571–1581. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *International* Conference on Learning Representations (ICLR). Camila Kolling, Martin D. More, Nathan Gavenski, Eduardo H. P. Pooch, Otávio Parraga, and Rodrigo C. Barros. 2022. Efficient counterfactual debiasing for visual question answering. In Winter Conference on Applications of Computer Vision (WACV), pages 2572–2581. Gouthaman KV and Anurag Mittal. 2020. Reducing language biases in visual question answering with visually-grounded question encoder. 
In *European* Conference on Computer Vision (ECCV), pages 18– 34. Zujie Liang, Haifeng Hu, and Jiaying Zhu. 2021. LPF: A language-prior feedback objective function for debiased visual question answering. In *ACM SIGIR* Conference on Research and Development in Information Retrieval (SIGIR), pages 1955–1959. Zujie Liang, Weitao Jiang, Haifeng Hu, and Jiaying Zhu. 2020. Learning to contrast the counterfactual samples for robust visual question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3285–3292. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In IEEE International Conference on Computer Vision (ICCV), pages 2980–2988. Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, and Ji-Rong Wen. 2021. Counterfactual VQA: A cause-effect look at language bias. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 12700–12710. Yulei Niu and Hanwang Zhang. 2021. Introspective distillation for robust question answering. In Conference on Neural Information Processing Systems (NeurIPS), pages 16292–16304. Boxiao Pan, Haoye Cai, De-An Huang, Kuan-Hui Lee, Adrien Gaidon, Ehsan Adeli, and Juan Carlos Niebles. 2020. Spatio-temporal graph for video captioning with knowledge distillation. In *IEEE Conference on Computer Vision and Pattern Recognition* (CVPR), pages 10867–10876. Yonghua Pan, Zechao Li, Liyan Zhang, and Jinhui Tang. 2022. Causal inference with knowledge distilling and curriculum learning for unbiased VQA. ACM Trans. Multim. Comput. Commun. Appl. (ACM TOMMCCAP), 18(3):67:1–67:23. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Conference on Neural Information Processing Systems (NeurIPS), pages 8024–8035. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In *Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 1532–1543. Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. 2018. Overcoming language priors in visual question answering with adversarial regularization. In *Conference on Neural Information Processing Systems (NeurIPS)*, pages 1548–1558. Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. In *Conference on Neural Information Processing Systems* (NeurIPS), pages 91–99. Ramprasaath Ramasamy Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry P. Heck, Dhruv Batra, and Devi Parikh. 2019. Taking a HINT: leveraging explanations to make vision and language models more grounded. In *IEEE International Conference on Computer Vision (ICCV)*, pages 2591– 2600. Qingyi Si, Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, and Jie Zhou. 2022. Towards robust visual question answering: Making the most of biased samples via contrastive learning. In *Findings of the Association for Computational Linguistics: EMNLP*, pages 6650–6662. Ruixue Tang, Chao Ma, Wei Emma Zhang, Qi Wu, and Xiaokang Yang. 2020. Semantic equivalent adversarial data augmentation for visual question answering. 
In *European Conference on Computer Vision* (ECCV), volume 12364, pages 437–453. Damien Teney, Ehsan Abbasnejad, Kushal Kafle, Robik Shrestha, Christopher Kanan, and Anton van den Hengel. 2020. On the value of out-of-distribution testing: An example of goodhart's law. In Conference on Neural Information Processing Systems (NeurIPS), volume 33, pages 407–417. Damien Teney, Ehsan Abbasnejad, and Anton van den Hengel. 2021. Unshuffling data for improved generalization in visual question answering. In *IEEE International Conference on Computer Vision (ICCV)*, pages 1417–1427. Tao Wang, Li Yuan, Xiaopeng Zhang, and Jiashi Feng. 2019. Distilling object detectors with fine-grained feature imitation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4933– 4942. Zhiquan Wen, Shuaicheng Niu, Ge Li, Qingyao Wu, Mingkui Tan, and Qi Wu. 2023a. Test-time model adaptation for visual question answering with debiased self-supervisions. *IEEE Transactions on Multimedia (TMM)*. Zhiquan Wen, Qi Wu, Leyuan Fang, and Mingkui Tan. 2023b. Transformer-based relational inference network for complex visual relational reasoning. ACM Trans. Multimedia Comput. Commun. Appl. (ACM TOMMCCAP). Zhiquan Wen, Guanghui Xu, Mingkui Tan, Qingyao Wu, and Qi Wu. 2021. Debiased visual question answering from feature and sample perspectives. In Conference on Neural Information Processing Systems (NeurIPS), volume 34, pages 3784–3796. Jialin Wu and Raymond J. Mooney. 2019. Self-critical reasoning for robust visual question answering. In Conference on Neural Information Processing Systems (NeurIPS), pages 8601–8611. Liuyu Xiang, Guiguang Ding, and Jungong Han. 2020. Learning from multiple experts: Self-paced knowledge distillation for long-tailed classification. In *European Conference on Computer Vision (ECCV)*, volume 12350, pages 247–263. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alexander J. Smola. 2016. Stacked attention networks for image question answering. In *IEEE* Conference on Computer Vision and Pattern Recognition (CVPR), pages 21–29. Ziqi Zhang, Yaya Shi, Chunfeng Yuan, Bing Li, Peijin Wang, Weiming Hu, and Zheng-Jun Zha. 2020. Object relational graph with teacher-recommended learning for video captioning. In *IEEE Conference* on Computer Vision and Pattern Recognition (CVPR), pages 13275–13285. Xi Zhu, Zhendong Mao, Chunxiao Liu, Peng Zhang, Bin Wang, and Yongdong Zhang. 2020. Overcoming language priors with self-supervised learning for visual question answering. In *International Joint* Conferences on Artificial Intelligence (IJCAI), pages 1083–1089. ## 6 Appendix 6.1 Datasets We conduct experiments on the VQA-CP (Agrawal et al., 2018) v2 and VQA (Goyal et al., 2017) v2 datasets. Specifically, the training set of VQA-CP v2 contains approximately 121k images and 483k questions, while the test set contains around 98k images and 220k questions. ## 6.2 Compared Methods We compare our DDG with existing state-of-theart methods, including 1) methods that enhance visual attention: HINT (Selvaraju et al., 2019) and SCR (Wu and Mooney, 2019). 2) Methods that weaken the biases: AdvReg (Ramakrishnan et al., 2018), RUBI (Cadène et al., 2019b), Re-Scaling (Guo et al., 2022), DLR (Jing et al., 2020), VGQE (KV and Mittal, 2020), LMH (Clark et al., 2019), IntroD (Niu and Zhang, 2021), CFVQA (Niu et al., 2021), RMFE (Gat et al., 2020), CKCL (Pan et al., 2022), LPF (Liang et al., 2021), GGE-DQ (Han et al., 2021), and D-VQA (Wen et al., 2021). 
3) Methods that balance the dataset using additional annotations: CSS (Chen et al., 2020), CSS+CL (Liang et al., 2020), CSS+ (Chen et al., 2021), ECD (Kolling et al., 2022), and Mutant (Gokhale et al., 2020). (4) Methods that balance the dataset without introducing additional annotations: CVL (Abbasnejad et al., 2020), Unshuffling (Teney et al., 2021), MMBS (Si et al., 2022), SimpleAug (Kil et al., 2021), RandImg (Teney et al., 2020), SSL-VQA (Zhu et al., 2020), and KDDAug (Chen et al., 2022). Our DDG generates positive and negative samples without introducing additional annotations to help alleviate the biases, which belongs to the methods in the fourth part. ## 6.3 Implementation Details Following existing VQA methods (Anderson et al., 2018; Cadène et al., 2019b; Zhu et al., 2020), we extract the top-36 object features with a dimension of 2048 in each image by the Faster-RCNN (Ren et al., 2015) model that is pre-trained by (Anderson et al., 2018). Moreover, each question is first truncated or padded into the same length (*i.e.,* 14), and then encoded by the Glove (Pennington et al., 2014) embedding with a dimension of 300. The dimension of the question encoder (*i.e.,* single layer GRU (Cho et al., 2014)) is 1280. Inspired by SSL-VQA (Zhu et al., 2020), we introduce one Batch Normalisation (Ioffe and Szegedy, 2015) layer before the classifier of UpDn (Anderson et al., 2018). We train our method for 30 epochs with the Adam (Kingma and Ba, 2015) optimiser. Specifically, we adopt Lens and Ldis to train the baseline model for 12 epochs, and introduce Lneg and L*weight* at the 13-th epoch. The learning rate is set to 1e-3, and decreases by half every 5 epochs after 10 epochs. The batch size is set to 256. We set k and λ to 10 and 0.05, respectively. We implement our method based on PyTorch (Paszke et al., 2019), and the model is trained with one Titan Xp GPU. Moreover, our method does not introduce additional parameters except the backbone model. Note that our method is model-agnostic and can be applied to different backbones of VQA models. To better demonstrate the effectiveness of our DDG, we conduct experiments based on different backbones, including UpDn (Anderson et al., 2018), and SAN (Yang et al., 2016) in the same settings. Moreover, we perform experiments over three rounds using varying seeds, and present the results in terms of mean values. The source code and the pre-trained models are available at DDG. ## 6.4 Training Method We provide the training method of our DDG in Algorithm 1. Specifically, when the training epoch is lower than the threshold τ , we forward the base model Mb with the original and positive samples to calculate the knowledge distillation loss Ldis and binary cross-entropy loss Lens. Moreover, when the training epoch is higher than threshold τ , we construct the negative image and question samples without introducing additional annotations, and then forward Mb with the negative samples and obtain the loss Lneg and L*weight*. Finally, we update Mb based on the overall loss L. ## 6.5 More Ablation Studies Effect of the training strategy. As shown in Algorithm 1, we adopt knowledge distillation loss Ldis and ensemble binary cross-entropy loss Lens to train the base model for 12 epochs. To demonstrate the effectiveness of the training strategy, we conduct experiments about the training strategies on the VQA-CP v2 dataset, and the results are shown in Table 7. 
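Before turning to the results, the following control-flow sketch makes the two-stage schedule concrete (cf. Algorithm 1). The individual loss terms (Eqs. (2)-(8)) and the ensemble prediction are abstracted into placeholder callables with names of our choosing, so the snippet reflects only the scheduling logic described in Section 6.4, not the released implementation.

```python
import torch

def train_ddg(base_model, loader, optimizer,
              loss_vqa, loss_ens, loss_dis, loss_neg, loss_weight,
              num_epochs=30, tau=12, lam=0.05):
    """Control-flow sketch of the two-stage DDG schedule (cf. Algorithm 1).

    Epochs < tau: original and positive samples with L_vqa, L_ens and the KL
    distillation loss L_dis.  Epochs >= tau: in-batch shuffling builds negative
    samples, and L_neg plus lam * L_weight are added instead.
    """
    for epoch in range(num_epochs):
        for v, q, a, v_pos, q_pos in loader:       # image, question, answer + positives
            p_orig = base_model(v, q)               # P(A | v, q)
            p_vpos = base_model(v_pos, q)           # P(A | v+, q)
            p_qpos = base_model(v, q_pos)           # P(A | v, q+)
            p_ens = (p_orig + p_vpos + p_qpos) / 3  # placeholder for the ensemble prediction

            loss = loss_vqa(p_orig, a)              # Eq. (2) on the original prediction
            if epoch < tau:
                loss = loss + loss_ens(p_ens, a) + loss_dis(p_orig, p_ens)
            else:
                idx_v = torch.randperm(v.size(0))   # negative image sample
                idx_q = torch.randperm(v.size(0))   # negative question sample
                p_vneg = base_model(v[idx_v], q)
                p_qneg = base_model(v, q[idx_q])
                loss = loss + loss_neg(p_vneg, p_qneg, a) \
                            + lam * loss_weight(p_vpos, p_qpos, p_vneg, p_qneg)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```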
From the results, the strategy that trains the model with the KL loss in the whole training process performs worse than that training for 12 epochs. We infer that the KL loss may accelerate the fitting of the training dataset, which hinders the ## Algorithm 1 Training Method Of Our Ddg. Require: Training data {(vi, qi, ai)}Ni=1, generated positive image samples (v+i , qi, ai) Ni=1, generated positive question samples (vi, q+i , ai) Ni=1 a base model Mb, batch size b, threshold τ . 1: Randomly initialise the parameters of Mb. 2: **while** not converge do 3: Randomly sample a mini-batch data {(vi, qi, ai)}bi=1 from the training data, and obtain the corresponding positive samples {(v+i , qi, ai)}bi=1, and {(vi, q+i , ai)}bi=1. 4: Forward Mb with the training data and the positive samples, and then obtain the predictions P(A|vi, qi), P(A|v+i , qi), P(A|vi, q+i ), and the ensemble prediction Pens. 5: Calculate the binary cross-entropy loss Lvqa for P(A|vi, qi) by Eq.(2). 6: if the training epoch lower than τ **then** 7: // *Introduce Knowledge Distillation Mechanism* 8: Calculate the Knowledge Distillation Loss Ldis based on P(A|vi, qi) and Pens via Eq. (3). 9: Calculate the binary cross-entropy loss Lens for Pens by Eq. (2). 10: **else** 11: // *Introduce Negative sample loss* 12: Randomly sample images and questions from the mini-batch data {(vi, qi, ai)}bi=1 to form the negative samples as {(¯vi, qi, ai)}bi=1 and {(vi, q¯i, ai)}bi=1. 13: Forward Mb with negative samples, and obtain the predictions P(A|v−i , qi) and P(A|vi, q−i ). 14: Calculate the loss Lneg based on the predictions of the negative samples via Eq. (4). 15: Calculate the loss L*weight* based on the predictions of both positive and negative samples by Eqs. (6) and (7). 16: **end if** 17: Update Mb by minimising the overall loss L (obtained via Eq. (8)). 18: **end while** training process of the negative sample losses Lneg and L*weight*. The objective of the KL loss is to make the two distributions close. In our method, we seek to make the predictions of the original samples approach to the ensemble predictions. Thanks to the generated high quality positive samples, the KL loss can improve the robustness of the VQA models (Refer to in Line 1-2 of Table 7), to some extent. However, if we adopt the KL loss in the whole training process, the VQA models will fit the data distributions excessively, and thus may hinder the training process of the negative sample losses. The experimental results in Table 7 also confirm it. For example, our DDG with the training strategy that introduces KL loss in the overall training process still performs better than that training using only (Ldis and Lens), but performs worse than that training the model using KL loss for 12 epochs. ## Comparison With The State-Of-The-Art Methods Regarding Of The Number Of Training Samples. As shown in Table 8, the VQA-CP v2 dataset comprises 438k training samples, while KDDAug, SimpleAug, and our DDG generate an additional | Strategy | VQA-CP v2 test set (%) | | | | |----------------------------|--------------------------|--------|-------|-------| | All | Yes/No | Number | Other | | | UpDn† | 41.53 | 43.45 | 13.64 | 48.18 | | + Ldis + Lens (all epochs) | 47.77 | 63.49 | 13.91 | 48.83 | | + DDG (KL for all epochs) | 56.35 | 86.05 | 21.30 | 50.40 | | + DDG (KL for 12 epochs) | 61.14 | 88.77 | 49.33 | 49.90 | Table 7: Effect of the training strategy. We report the experimental results in terms of Accuracy (%). 
"Ldis + Lens (all epochs)" denotes we additionally introduce Ldis and Lens losses to train the UpDn model in the whole training process. "DDG (KL for all epochs)" means we train the UpDn model with the DDG method, where the KL loss exists in the whole training process. "DDG (KL for 12 epochs)" denotes we train the UpDn model with the DDG method, and the KL loss exists in the first 12 epochs. 4088k, 3081k, and 1752k augmented training samples, respectively. Despite using fewer augmented samples than KDDAug and SimpleAug, our DDG outperforms these methods by around 8% and 1%, respectively, which demonstrates the effectiveness of our DDG. | Model | VQA-CP v2 test set (%) | # Samples | |------------------------------|--------------------------|-------------| | UpDn (Anderson et al., 2018) | 41.53 | 438k | | SimpleAug (Kil et al., 2021) | 52.65 | +3081k | | KDDAug (Chen et al., 2022) | 60.24 | +4088k | | DDG (Ours) | 61.14 | +1752k | Table 8: Comparison with the state-of-the-art data augmentation based methods (*e.g.,* SimpleAug and KDDAug) on the VQA-CP v2 dataset. ## 6.6 More Visualisation Results Qualitative results. We provide more visualisation results in Figure. 4 to present the effectiveness of our DDG. From the results, our DDG localise the target objects more accurately than the UpDn (Anderson et al., 2018) model and SSL-VQA method, and thus makes a more correct prediction than the compared methods. These visualisation results demonstrate the effectiveness of our DDG. Visualisation of the generated samples. As shown in Section 3.2, we have generated both positive image and question samples. To evaluate the generated methods, we provide some visualisation results about the augmented questions and selected target objects in Table 9 and Figure. 5, respectively. From the results in Table 9, our augment questions have similar semantics to the original questions, which demonstrates our generated questions are reasonable as the positive samples. Moreover, although the baseline model UpDn (Anderson et al., 2018) trained on the VQA-CP (Agrawal et al., 2018) v2 dataset achieves poor performance on the test set, the UpDn model still can obtain good performance on the training set. Thus, we adopt the image attention weights of the pre-trained UpDn model to help find the objects that are relevant to the questions. As shown in Figure. 5, we show the objects with the top-3 attention weights of the pre-trained UpDn model in the images. From the results, the pre-trained UpDn model can localise the target objects referred to in the questions, which demonstrates that selecting the objects with top-k image attention weights of the pre-trained UpDn model is reasonable, and can exclude background information. ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) Q: What is the green stuff? **Q: What does the man in the** ![16_image_2.png](16_image_2.png) ![16_image_1.png](16_image_1.png) ![16_image_0.png](16_image_0.png) ![16_image_3.png](16_image_3.png) plate? Q: What color is his shirt? **Q: What color is the baby** ![16_image_4.png](16_image_4.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? We discuss the limitations of our work in Section Limitations ✓ A2. Did you discuss any potential risks of your work? We discuss the potential risks of our work in Section Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? In Section Abstract and Introduction ✗ A4. 
Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We conduct experiments on the VQA v2 and VQA-CP v2 datasets. In overall Sections ✓ B1. Did you cite the creators of artifacts you used? In overall sections ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? In Section Appendix B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In Section Appendix ## C ✓ **Did You Run Computational Experiments?** In Section Experiments ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In Section Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In Section Experiments and Section Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In Section Experiments and Appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In Section Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. 
Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
lucy-etal-2023-words
Words as Gatekeepers: Measuring Discipline-specific Terms and Meanings in Scholarly Publications
https://aclanthology.org/2023.findings-acl.433
Scholarly text is often laden with jargon, or specialized language that can facilitate efficient in-group communication within fields but hinder understanding for out-groups. In this work, we develop and validate an interpretable approach for measuring scholarly jargon from text. Expanding the scope of prior work which focuses on word types, we use word sense induction to also identify words that are widespread but overloaded with different meanings across fields. We then estimate the prevalence of these discipline-specific words and senses across hundreds of subfields, and show that word senses provide a complementary, yet unique view of jargon alongside word types. We demonstrate the utility of our metrics for science of science and computational sociolinguistics by highlighting two key social implications. First, though most fields reduce their use of jargon when writing for general-purpose venues, some fields (e.g., biological sciences) do so less than others. Second, the direction of correlation between jargon and citation rates varies among fields, but jargon is nearly always negatively correlated with interdisciplinary impact. Broadly, our findings suggest that though multidisciplinary venues intend to cater to more general audiences, some fields' writing norms may act as barriers rather than bridges, and thus impede the dispersion of scholarly ideas.
# Words As Gatekeepers: Measuring Discipline-Specific Terms And Meanings In Scholarly Publications Li Lucy1,2 Jesse Dodge1 David Bamman2 Katherine A. Keith1,3 1Allen Institute for Artificial Intelligence 2University of California, Berkeley 3Williams College {lucy3_li, dbamman}@berkeley.edu jessed@allenai.org kak5@williams.edu ## Abstract Scholarly text is often laden with jargon, or specialized language that can facilitate efficient in-group communication within fields but hinder understanding for out-groups. In this work, we develop and validate an interpretable approach for measuring *scholarly jargon* from text. Expanding the scope of prior work which focuses on word types, we use word sense induction to also identify words that are widespread but overloaded with different meanings across fields. We then estimate the prevalence of these discipline-specific words and senses across hundreds of subfields, and show that word senses provide a complementary, yet unique view of jargon alongside word types. We demonstrate the utility of our metrics for science of science and computational sociolinguistics by highlighting two key social implications. First, though most fields reduce their use of jargon when writing for generalpurpose venues, and some fields (e.g., biological sciences) do so less than others. Second, the direction of correlation between jargon and citation rates varies among fields, but jargon is nearly always negatively correlated with interdisciplinary impact. Broadly, our findings suggest that though multidisciplinary venues intend to cater to more general audiences, some fields' writing norms may act as barriers rather than bridges, and thus impede the dispersion of scholarly ideas. ## 1 Introduction Specialized terminology, or jargon, naturally evolves in communities as members communicate to convey meaning succinctly. It is especially prevalent in scholarly writing, where researchers use a rich repertoire of lexical choices. However, niche vocabularies can become a barrier between fields (Vilhena et al., 2014; Martínez and Mammola, 2021; Freeling et al., 2019), and between scientists and the general public (Liu et al., 2022; August et al., 2020a; Cervetti et al., 2015; Freel- ![0_image_0.png](0_image_0.png) ing et al., 2021). Identifying scholarly jargon is an initial step for designing resources and tools that can increase the readability and reach of science (August et al., 2022a; Plavén-Sigray et al., 2017; Rakedzon et al., 2017). Research on scholarly language typically focuses on the relative prevalence of words (McKeown et al., 2016; Prabhakaran et al., 2016; Sim et al., 2012; Rakedzon et al., 2017). However, the same word can be overloaded with multiple meanings, such as *bias* referring to electric currents or statistical misestimation (Figure 1). We use BERTbased word sense induction to disentangle these, and demonstrate the utility of including both word types and senses in our operationalization of *scholarly jargon*. We measure jargon in English abstracts across three hundred fields of study, drawn from over 12 million scholarly abstracts and one of the largest datasets of scholarly documents: the Semantic Scholar Open Research Corpus (S2ORC) (Lo et al., 2020). Our findings are valuable for several groups that 6929 partake in science: readers, authors, and science of science researchers. 
Due to scholarly language's gatekeeping effect, natural language processing (NLP) researchers have developed tools to support **readers**, such as methods for simplifying or defining terminology (Kim et al., 2016; Vadapalli et al., 2018; August et al., 2022a; Head et al., 2021; August et al., 2022b; Murthy et al., 2022). When deciding what constitutes jargon, studies may rely on vocabulary lists based on word frequency, often collapsing all of science into one homogeneous language variety (August et al., 2020b; Rakedzon et al., 2017; Plavén-Sigray et al., 2017). Our approach identifies language associated with individual subfields and proposes a bottom-up, data-driven process for creating these vocabularies (§3 and 4). Second, measuring levels of discipline-specific language in abstracts can inform **authors** who wish to communicate to a wider audience or enter a new field. We show that while some subfields tend to use highly specialized word types, others use highly specialized senses (§5). In addition, we provide evidence for audience design in scholarly discourse (§6.1), following a sociolinguistic framework that describes how speakers accommodate language to the scope of their audience (Bell, 1984). Finally, our language-centered approach contrasts the typical paradigm in **science of science** research, where citation behavior often defines relationships among articles, venues, and fields (e.g. Boyack et al., 2005; Rosvall and Bergstrom, 2008; Peng et al., 2021). Citation count is a common measurement of "success", and the mechanisms behind it form a core research area (Wang and Barabási, 2021; Foster et al., 2015; Fortunato et al., 2018). On the other hand, interdisciplinarity is increasingly valued, but does not always lead to short-term citation gains (Van Noorden, 2015; Larivière and Gingras, 2010; Okamura, 2019; Chen et al., 2022). We run regression analyses to examine the relationship between discipline-specific senses and types and these two distinct measures of success (§6.2). To summarize, we contribute the following (Figure 1): - **Methods.** We propose a new measure of scholarly jargon to identify discipline-specific word types and senses (§3). We validate our approach for measuring senses by showing it recalls more overloaded words in Wiktionary compared to word types alone (Figure 3). ![1_image_0.png](1_image_0.png) - **Social implications.** We illustrate the utility of our jargon measurements for computational social science by analyzing audience design and articles' success (§6). Though multidisciplinary venues may intend to be general-purpose, more dominant fields in these venues reduce jargon less so than others (Figure 5). Since jargon nearly always has a negative relationship with interdisciplinary impact (Table 4), our findings encourage the reconsideration of existing scholarly writing norms. We hope our measure of scholarly jargon can help researchers quantify language barriers in science and their implications. Our code and scored lists of jargon for each subfield can be found at https://github.com/lucy3/words_as_gatekeepers. ## 2 Data Our work involves several datasets: scholarly abstracts, Wikipedia, and Wiktionary. We use abstracts to calculate the association of words with disciplines and Wikipedia to supplement our calculation of background word probabilities. Later, in §4, we introduce and describe how we use Wiktionary to validate our approach. 
## 2.1 Contemporary S2Orc Our dataset of academic articles, CONTEMPORARY S2ORC, contains 12.0 million abstracts and 2.0 billion words1that span a mix of scholarly fields (Figure 2, Appendix A). 1We define a word as a non-numeric, non-punctuation token outputted by Huggingface transformer's whole-word BasicTokenizer. To create CONTEMPORARY S2ORC, we draw from the July 2020 release of S2ORC (Lo et al., 2020). S2ORC is a general purpose corpus that contains metadata for 136 million scholarly articles, including 380.5 million citation links (Lo et al., 2020). These articles originate from Semantic Scholar, which obtains data directly from publishers, the Microsoft Academic Graph (MAG), arXiv, PubMed, and the open internet. Metadata, such as titles, authors, publication years, journals/venues, and abstracts are extracted from PDFs and LaTeX sources or provided by the publisher. Though extensive, S2ORC contains some amount of noisy or missing metadata. We remove non-English articles and those with missing metadata, consolidate journals and venues into a single venue label, and limit the dataset to the years 2000-2019 (Appendix B). S2ORC links articles to paper IDs in the Microsoft Academic Graph (MAG) (Sinha et al., 2015; Wang et al., 2019), so we match S2ORC abstracts to MAG fields of study (FOS). S2ORC originally contains top-level MAG FOS (level 0), e.g. *biology*, but we also join abstracts with second level MAG FOS (level 1), e.g. *immunology*, for more granularity.2In this present paper, we refer to level 0 FOS as *fields*, and level 1 FOS as *subfields*. We take an approximately uniform sample of 50k articles per subfield, resulting in a total of 293 subfields that fall under 19 fields (Appendix A). ## 2.2 Wikipedia We include Wikipedia article content to counterbalance CONTEMPORARY S2ORC's STEM-heavy focus for our estimation of words' typical prevalence. Wikipedia is a popular information-gathering resource (Reagle and Koerner, 2020), and we use an Oct 1, 2022 dump of its articles. It offers complementary topical coverage that is collectively curated and driven by public interest, and includes biographies, culture, and arts (Mesgari et al., 2015). We remove Wiki formatting using Attardi (2015)'s text extractor, and discard all lines, or paragraphs, that are less than 10 white-spaced tokens long. We sample twice as many Wikipedia paragraphs as the number of CONTEMPORARY S2ORC abstracts, so that each is similar in size despite differences in document length. In total our Wikipedia dataset, WIKISAMPLE, contains 24.0 million paragraphs 2A secondary level FOS may fall under multiple top-level FOS, some articles are labeled with multiple FOS at the same level, and some articles marked with top-level FOS do not indicate a secondary level FOS. ## 3 Methods Language differences among subsets of data can be measured by a variety of approaches, from geometric to information theoretic (Ramesh Kashyap et al., 2021; Vilhena et al., 2014; Aharoni and Goldberg, 2020). We calculate the association of a word's type or sense to subfields using normalized pointwise mutual information (NPMI). We choose NPMI over similar metrics (e.g. tf-idf, divergence, z-score) because of the nature of language difference it emphasizes: higher NPMI scores reflect language that is not only commonly used in a community, but also highly specific to it (Lucy and Bamman, 2021; Gardner et al., 2021). 
NPMI offers an interpretable metric of association, where a score of 1 indicates perfect association, 0 indicates independence, and -1 indicates no association. We follow Lucy and Bamman (2021)'s framework of calculating NPMI separately for word types and senses, which they originally used to identify communityspecific language on social media. We update their approach with a more recent word sense induction (WSI) method, and use a different interpretation of type and sense NPMI scores. ## 3.1 Discipline-Specific Words We calculate NPMI for word types, or *type NPMI*, as the following measure:: $${\mathcal{T}}_{f}(t)={\frac{\log{(P(t\mid f)/P(t))}}{-\log{P(t,f)}}}.\qquad\qquad(1)$$ Here, P(t | f) is the probability of a word t occurring given a set of abstracts f in a field, P(*t, f*) is their joint probability, and P(t) is the probability of the word overall (Lucy and Bamman, 2021; Zhang et al., 2017). "Overall" refers to the combined background dataset of CONTEMPORARY S2ORC and WIKISAMPLE. We only calculate Tf (t) for words that appear at least 20 times in each field.3 As illustrative examples, Table 1 shows words with the highest Tf (t) in several fields. ## 3.2 Discipline-Specific Senses Widely disseminated words can be overloaded with domain-specific meanings or use. For example, 3As we will describe in §3.2, the sense NPMI pipeline operates on lemmas, not words. Standard lemmatizers may not be suitable for rarer words in science, so to make our type and sense metrics comparable, we only lemmatize the set of widely-used words that are shared by both pipelines. | NLP | Chemical Engineering | Immunology | Communication | International Trade | Epistemology | | | | | | | |----------------|------------------------|----------------|-----------------|-----------------------|----------------|----------|--------|-------------|--------|-----------------|--------| | word | Tf (t) | word | Tf (t) | word | Tf (t) | word | Tf (t) | word | Tf (t) | word | Tf (t) | | nlp | 0.412 | rgo | 0.334 | treg | 0.346 | saccade | 0.354 | wto | 0.453 | epistemic | 0.356 | | corpora | 0.404 | mesoporous | 0.328 | cd4 | 0.341 | saccades | 0.345 | trade | 0.438 | epistemology | 0.350 | | treebank | 0.401 | nanosheets | 0.327 | immune | 0.3388 | stimuli | 0.333 | fdi | 0.401 | epistemological | 0.342 | | disambiguation | 0.396 | nanocomposite | 0.325 | il | 0.336 | stimulus | 0.331 | ftas | 0.396 | husserl | 0.332 | | corpus | 0.393 | nanocomposites | 0.324 | th2 | 0.335 | cues | 0.327 | antidumping | 0.396 | kant | 0.329 | bias could refer to a type of voltage applied to an electronic system, social prejudice, or statistical misestimation. Thus, we include word senses as a complement to word types for characterizing domain-specific language. We use *senses* to refer to different meanings or uses of the same word induced by word sense induction (WSI). ## 3.2.1 Word Sense Induction To partition occurrences of words into senses, we adapt Eyal et al. (2022)'s WSI pipeline with minimal modifications. WSI is an unsupervised task where occurrences of words are split into senses. Eyal et al. (2022)'s approach is designed for largescale datasets, where a sample of a target word's occurrences is used to induce senses, and remaining occurrences are then assigned to them. To induce senses, a masked language model predicts the top s substitutes of each occurrence of a target word. Then, a network is created for each target word, where nodes are substitutes and edges are their cooccurrence. 
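A minimal sketch of this substitute-network construction is shown below, using an off-the-shelf fill-mask pipeline and networkx in place of the ScholarBERT-based setup; it approximates, rather than reimplements, Eyal et al. (2022)'s pipeline, and the two contexts for *bias* are toy inputs (in practice up to 1000 occurrences are sampled per target word).

```python
from itertools import combinations
import networkx as nx
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")   # stand-in for ScholarBERT

def substitute_network(contexts, target, s=5):
    """Build a substitute co-occurrence graph for one target word.

    Each occurrence of `target` is masked and the top-s predicted substitutes
    are recorded; substitutes become nodes, and two substitutes predicted for
    the same occurrence are joined by an edge whose weight counts co-occurrences.
    """
    graph = nx.Graph()
    for sent in contexts:
        masked = sent.replace(target, fill_mask.tokenizer.mask_token, 1)
        subs = {p["token_str"].strip().lower() for p in fill_mask(masked, top_k=s)}
        for u, v in combinations(subs, 2):
            weight = graph.get_edge_data(u, v, {"weight": 0})["weight"]
            graph.add_edge(u, v, weight=weight + 1)
    return graph

G = substitute_network(
    ["a reverse bias was applied to the diode",
     "the estimator has low bias and low variance"],
    target="bias")
# Sense clusters are then obtained by running community detection on G,
# e.g. nx.community.louvain_communities(G, weight="weight").
```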
Louvain community detection is then applied to determine senses, or sets of substitutes (Blondel et al., 2008). For example, in the network for *bass*, the substitutes for its sense as a type of fish are likely not predicted at the same time as substitutes for its musical sense, so each set would represent separate senses. We carry out this WSI pipeline on a caseinsensitive target vocabulary of 6,497 "widely used" words: those that appear in the top 98th percentile by frequency and in at least 50% of venues, not including stopwords and words split into wordpieces.4 We lemmatize and lowercase target words and substitutes, following Eyal et al. (2022)'s implementation, because otherwise the most common substitutes representing a sense may be different lemmas of the same word. This processing step reduces the target vocabulary into 4,407 lemmas. 4We avoid wordpieces since Eyal et al. (2022)'s pipeline predicts substitutes at the token-level. We sample 1000 instances of each vocabulary lemma, and use ScholarBERT to predict each instance's top s = 5 substitutes (Hong et al., 2022).5 We truncate each abstract to this model's maximum input length. We follow Eyal et al. (2022)'s heuristics for determining sets of substitutes that are big enough to recognize as senses: each set needs to have at least two substitutes, and the second most frequent substitute needs to appear at least 10 times across the target word's sample. If no sets are big enough, we add a fallback case, where we place all occurrences of a word to a single sense. Eyal et al. (2022) assigns additional occurrences of the target word to induced senses based on Jaccard similarity. We also add a fallback case here: if the overlap of a remaining occurrence's substitutes with all senses is zero, we assign that occurrence to an extra sense representing previously unseen senses. ## 3.2.2 Sense Npmi Once each occurrence of a widely-used word is labeled with a sense, their frequencies can be used to calculate sense NPMI. Sense NPMI uses the same formula as type NPMI, except it is calculated at the sense-level rather than the word-level (Lucy and Bamman, 2021). That is, counts of a word t are replace with counts of its ith sense, ti: $${\mathcal{S}}_{f}(t_{i})={\frac{\log{\big(}P(t_{i}\mid f)/P(t_{i}){\big)}}{-\log{P(t_{i},f)}}}.\qquad(2)$$ ## 4 Validation 4.1 Wiktionary We perform in-domain validation of the unsupervised sense pipeline using Wiktionary. Words 5We pick ScholarBERT over similar transformer models trained on scholarly language (e.g. SciBERT), because it is trained on a wider breadth of disciplines, splits fewer potential vocab words into wordpieces (15 versus 193), and uses RoBERTa-style training (Hong et al., 2022; Beltagy et al., 2019). marked as associated with a subfield in this online dictionary should also be highly scored by our metrics. Wiktionary is collaboratively maintained and includes common words listed with definitions that may be labeled as having "restricted usage" to a topic or context. For example, the word *ensemble* has the labels machine learning, *fashion*, and music (Appendix C). We map Wiktionary labels in English definitions of target words using exact string matching to fields and subfields. If an NPMI score threshold were used to determine whether a token should be considered discipline-specific or not, we expect sense NPMI to recall more words labeled by Wikitionary than type NPMI does. We do not calculate precision, because Wiktionary is not necessarily comprehensive for all subfields. 
We obtain Wiktionary entries for 94.94% of the common, widely used words that were inputs in the WSI pipeline. We filter out words where all definitions are labeled with only one field, and allow subfields to inherit the words labeled with their parent field. In total, we have 11,548 vocabulary word and subfield pairs to recall across 83 subfields. Since recall is calculated at the word-level and sense NPMI is at the sense-level, we use a word t's most frequent sense ti's Sf (ti) in a subfield to represent word-level sense NPMI Sf (t). In Eyal et al. (2022)'s WSI pipeline, the resolution parameter γ in Louvain community detection calibrates the number of senses induced per word. Increasing resolution leads to more fine-grained word senses and higher recall, but potentially spurious senses (Figure 3). Rather than using Eyal et al. (2022)'s default resolution value of 1, we use a dynamic formula for resolution (Newman, 2016): $$\epsilon=\frac{\omega_{i n}-\omega_{o u t}}{\log\omega_{i n}-\log\omega_{o u t}},$$ $\blacksquare$ , (3) where ωin is the probability of an edge between two nodes in the same community, and ωout is the probability of an edge between two nodes in different communities. Intuitively, nodes within communities should be more connected than nodes between them. We follow Newman (2016)'s algorithm, initializing γ = 1 and iterating for each target lemma at most 10 times. In each iteration, we run Louvain community detection and recalculate γ using the edge probabilities in the current clustering. We stop early if γ converges within 0.01 of its previous value. Sense NPMI with dynamic resolution recalls more discipline-specific Wiktionary words than ![4_image_1.png](4_image_1.png) ![4_image_0.png](4_image_0.png) type NPMI at the same score cutoff (Figure 3). In addition, the sense NPMI of a word in a subfield labeled by Wiktionary is higher than the score of the same word in a random field (paired t-test, p < 0.001, Appendix C). Thus, Wiktionary-based validation shows that our unsupervised approach is able to measure discipline-specific senses, and in all downstream analyses, we use the dynamically defined γ for WSI. ## 4.2 Examples And Interpretation Examples of semantically overloaded words between fields can also lend face validity to our results (Table 2). Returning to the example introduced at the beginning, *bias* is indeed very overloaded. It has distinct senses with high NPMI (> 0.2) across multiple fields, including statistics (*skew*),6 optoelectronics (*charge*), cognitive psychology (*preference*), and climatology (*error*). These examples suggest that future work could examine how our approach could provide potential candidates for updating dictionaries or glossaries when new senses are introduced. Table 3 shows examples of words whose scores increase from type NPMI to sense NPMI despite having counts split across senses. Lucy and Bamman (2021) interpret sense and type NPMI similarly in their downstream analyses, based on the magnitude of their values, but this does not account for how type and sense NPMI scores are related. In the boundary case where a word t only has a single sense t0, Sf (t0) = Tf (t). This leads to a strong correlation between the two metrics, especially when a sense scored as highly associated with a field is also the dominant sense of that word in general. Thus, to narrow in on what 6Word in parentheses is the top predicted substitute for that subfield's sense for *bias*. 
Thus, to narrow in on what we gain from WSI, we examine not only senses that are highly associated with a field, but also those whose sense NPMI scores are higher than their words' type NPMI scores (Table 3). Therefore, we count a token with a labeled sense as a discipline-specific sense if Sf (ti) > Tf (t) and Sf (ti) > c for a subfield f and some cutoff c. Otherwise, the token is a discipline-specific type if Tf (t) > c.

Table 2: Examples of semantically overloaded words: for two induced senses of each word t, an associated field of study (FOS), its sense NPMI score, and its top predicted substitutes.

| word t | FOS a | Sa(t1) | top substitutes (sense t1) | FOS b | Sb(t2) | top substitutes (sense t2) |
|---|---|---|---|---|---|---|
| kernel | Operating system | 0.321 | block, personal, ghost, every, pure | Agronomy | 0.272 | grain, palm, body, gross, cell |
| performance | Chromatography | 0.266 | perform, play, timing, temperature, contribute | Industrial organization | 0.234 | success, record, position, accomplishment, hand |
| network | Computer network | 0.327 | graph, net, regular, key, filter | Telecommunications | 0.259 | connection, channel, link, connectivity, association |
| root | Dentistry | 0.413 | crown, arch, tooth, long, tissue | Horticulture | 0.330 | plant, tree, branch, part, stem |
| power | Electrical engineering | 0.329 | energy, electricity, load, fuel, lit | Combinatorics | 0.193 | value, order, term, sum, degree |

Table 3: Examples of words whose scores increase from type NPMI Tf (t) to sense NPMI Sf (t), shown for four subfields, where ∆ = Sf (t) − Tf (t).

| Pure mathematics | ∆ | Sf (t) | Tf (t) | Monetary economics | ∆ | Sf (t) | Tf (t) | Computer security | ∆ | Sf (t) | Tf (t) | Stereochemistry | ∆ | Sf (t) | Tf (t) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| power | 0.202 | 0.186 | -0.016 | movement | 0.218 | 0.266 | 0.048 | primitive | 0.162 | 0.221 | 0.058 | attack | 0.228 | 0.184 | -0.044 |
| pole | 0.194 | 0.207 | 0.013 | liquid | 0.195 | 0.196 | 0.002 | host | 0.151 | 0.205 | 0.054 | title | 0.216 | 0.264 | 0.048 |
| union | 0.193 | 0.141 | -0.051 | interest | 0.182 | 0.382 | 0.200 | elasticity | 0.148 | 0.158 | 0.010 | km | 0.212 | 0.175 | -0.037 |
| surface | 0.193 | 0.260 | 0.068 | turbulence | 0.176 | 0.155 | -0.021 | hole | 0.147 | 0.134 | -0.013 | framework | 0.205 | 0.215 | 0.010 |
| origin | 0.193 | 0.188 | -0.005 | provider | 0.176 | 0.121 | -0.055 | key | 0.142 | 0.320 | 0.179 | solve | 0.202 | 0.165 | -0.037 |

## 5 Language Norms Across Fields

The linguistic insularity of science varies across fields. For example, Vilhena et al. (2014) found that phrase-level jargon separates biological sciences more so than behavioral and social sciences. We perform a similar analysis with the novel addition of word senses. To summarize the distinctiveness of word types in a field, we calculate the mean type NPMI score of unique words in a field. Before taking the mean, however, we adjust scores by zeroing negative values, since we are more interested in words associated with a field rather than those that are not. This zeroing practice is typically used in studies where PMI measures word relatedness (Levy et al., 2015; Dagan et al., 1993; Bullinaria and Levy, 2007). Like Vilhena et al. (2014), we also find that the biological sciences have very distinctive word types (Figure 4). However, there is a considerable amount of overlap in word type distinctiveness across fields.
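As a compact reference for the scoring decisions introduced above (the sense-versus-type rule from §4.2 and the zero-adjusted field-level mean), the sketch below is an illustrative rendering rather than the authors' implementation; the example score values are hypothetical except where taken from Table 3.

```python
def classify_token(type_score, sense_score=None, c=0.1):
    """Label one token occurrence following the rule in Section 4.2.

    type_score:  T_f(t) of the token's word in subfield f
    sense_score: S_f(t_i) of the token's labeled sense, or None if no sense was induced
    Returns "sense", "type", or None (not counted as discipline-specific).
    """
    if sense_score is not None and sense_score > type_score and sense_score > c:
        return "sense"
    if type_score > c:
        return "type"
    return None

def adjusted_mean_npmi(scores):
    """Mean NPMI over a field's unique words, zeroing negative scores first."""
    adjusted = [max(0.0, s) for s in scores]
    return sum(adjusted) / len(adjusted) if adjusted else 0.0

# Hypothetical per-word type NPMI scores for one subfield: negatives contribute 0.
field_type_scores = {"group": 0.31, "prove": 0.18, "the": -0.02, "however": -0.11}
print(adjusted_mean_npmi(field_type_scores.values()))

# A token of "power" in pure mathematics, using the values from Table 3:
# the sense score beats the type score and exceeds the cutoff, so it counts as sense jargon.
print(classify_token(type_score=-0.016, sense_score=0.186))
```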
Similar to how natural sciences name molecules and chemicals, the arts and humanities name canons of writers, philosophers, and artists. We also examine what fields gain the most in NPMI scores when common words are broken into their senses. We recalculate subfields' average adjusted NPMI, but use max(Tf (t), Sf (t)) instead of Tf (t) for words that have induced senses. Based on their relative increases in average adjusted NPMI, subfields in math/technology, physics, and economics often use common words in specialized contexts (Table 3, Figure 4). There is no significant Pearson correlation between the distinctiveness of subfields' word types and that of their senses. Thus, word senses provide a very different perspective on language norms and suggest an additional route through which gatekeeping may occur.

## 6 Social Implications

In this section, we examine two social implications of our metrics: audience design and scholarly success. We limit these experiments to articles in CONTEMPORARY S2ORC that are published among 11,047 venues in the top 95th percentile by abstract count (at least 800 each in S2ORC), to ensure solid estimation of venue-level information, such as their disciplinary focus and average citations per article.

## 6.1 Audience Design

Audience design is a well-studied sociolinguistic phenomenon where a speaker's language style varies across audiences (Bell, 1984, 2002; Ndubuisi-Obi et al., 2019; Androutsopoulos, 2014). For example, on Twitter, when writers target smaller or more geographically proximate audiences, their use of nonstandard language increases (Pavalanathan and Eisenstein, 2015). Here, we examine this type of language accommodation at the level of subfields, as our data does not contain unique author identifiers that would allow measurements of author-level variation. We hypothesize that for abstracts within the same subfield, ones published for broader audiences (general-purpose venues) use less scholarly jargon than those published in narrower, discipline-focused venues.

To address this hypothesis, we first collect sets of 6 general-purpose and 2464 discipline-specific venues. We use general-purpose venues that appear in both our dataset and Wikipedia's list of general and multidisciplinary journals:7 *Nature*, *Nature Communications*, *PLOS One*, *Science*, *Science Advances*, and *Scientific Reports*. Discipline-focused venues are those where 80% of articles fall under a single subfield or its name contains the subfield, e.g., *Agronomy Journal*.8 Among these two venue sets, we examine abstracts labeled with only one subfield. We then calculate the fraction of jargon over all words in each abstract, by counting tokens t that are either discipline-specific senses or types, where c = 0.1. In other words, we count t if max(Tf (t), Sf (t)) > 0.1 in the abstract's subfield f.

7 https://en.wikipedia.org/wiki/List_of_scholarly_journals
8 There is substantial overlap between these two groups, where 80% of venues dominated by one subfield also mention the subfield in its name.

We find that most fields adjust their rate of jargon based on audience, though fields such as medicine and physics are notable exceptions (Figure 5). One explanation for this exceptional behavior is that general-purpose venues have a history of being led and dominated by biological sciences, and in some, by physical sciences as well (de Carli and Pereira, 2017; Koopman, 2011; Varmus et al., 2000).
Thus, jargon-laden fields further from these areas adjust their writing the most when publishing in these venues. A limitation of this approach for quantifying the amount of jargon in an abstract is that it relies on choosing c. We also obtain similar results with c = 0.2 and justify our choice of c in Appendix D.1. An alternative perspective mimics how soon a reader may encounter highly specialized language in an abstract. In this approach, we calculate the maximum over an abstract's type or sense NPMI scores within the first m tokens of the abstract. These results provide another view of our previous finding: fields such as computer science and engineering adjust their content for general-purpose venues more so than those in the biological sciences (Figure 6). This indicates that though most "general-purpose" venues intend to be for all of science,9some fields are expected to adapt their language more so than others. Among in-group members, the use of specialized vocabulary can signal legitimacy and expertise (Agha, 2005; Labov, 1973). Thus, there may be competing incentives influencing authors' writing. In the next section, we further investigate the relationship between jargon and two incentives in science: citation count and a metric of interdisciplinary impact. ## 6.2 Scholarly Success We hypothesize that jargon plays different roles in the success of an article depending on how "success" is defined. In particular, since jargon gatekeeps outsiders from a discipline, we expect it to negatively affect interdisciplinary impact. To test this hypothesis, we run two sets of regressions to measure the relationship between abstracts' use of jargon and citation behavior within five years after publication. The first set of regressions predicts short-term citation counts, while the second predicts interdisciplinary impact. We run separate regression models for each field to compare heterogeneity across fields. Each unit of analysis is an abstract published in 2000-2014 labeled with only one or two subfields. Two key independent variables are the fractions of discipline-specific words and senses in an abstract, with c = 0.1. For abstracts that have two subfields in the same analyzed parent field, we sum their type and sense jargon counts. Additional independent variables include time (three evenly-sized time bins within 2000-2014), length of abstract in tokens, number of authors, number of references in the article, number of subfields (one or two), and the venue's average citations per article. Citation count is an over-dispersed count variable, so we run a negative binomial regression to predict this outcome (Hilbe, 2011). In some cases, | Citation count | Interdisciplinary impact (DIV) | | | | | | |-----------------------------------------------------------------|----------------------------------|---------------------------|-----------------------------|---------|---------|--------| | Field | types | senses | # obv. | types | senses | # obv. | | Medicine | -0.15*** | 0.60*** | 1,137,923 -0.10*** -0.05*** | 589,641 | | | | Engineering | 0.07 | 0.64*** | 786,559 -0.09*** -0.15*** | 199,790 | | | | Comp. sci. 
| -0.87*** | 0.71*** | 556,330 -0.12*** -0.11*** | 196,234 | | | | Biology | -0.12*** | 0.52*** | 824,768 -0.80*** -0.03*** | 481,103 | | | | Economics | 0.15 | 1.23*** | 454,215 -0.11*** | 0.00 | 123,476 | | | Physics | 0.47*** -1.04*** | 648,729 -0.16*** -0.10*** | 203,009 | | | | | Chemistry | -1.36*** -2.32*** | 613,535 -0.10*** -0.08*** | 187,621 | | | | | Mathematics | 1.22*** | 1.40*** | 363,369 -0.15*** -0.11*** | 128,482 | | | | Psychology | 0.34*** | 3.68*** | 261,102 -0.11*** -0.06*** | 133,319 | | | | Geology | -0.42*** | 0.83*** | 343,250 -0.13*** -0.13*** | 138,308 | | | | Sociology | 1.18*** | 2.24*** | 149,484 -0.08*** | 0.01 | 56,088 | | | Business | 0.30** | 2.71*** | 160,536 -0.11*** -0.04*** | 39,602 | | | | Environ. sci. | -1.22*** -2.20*** | 137,862 -0.12*** -0.05*** | 49,199 | | | | | Geography | 0.17 | 0.37 | 127,561 -0.10*** -0.04*** | 51,408 | | | | Material sci. | -1.73*** | 1.42*** | 149,602 -0.14*** -0.09*** | 45,445 | | | | Philosophy | -0.92*** | 2.16*** | 68,512 -0.03*** | 0.06*** | 10,559 | | | Art | -1.75*** -2.30 | 68,220 -0.04*** | 0.03 | 5,826 | | | | History | -0.27 | 10.94*** | 47,910 -0.50*** | 0.05 | 6,513 | | | Political sci. | 2.27*** | 2.86*** | 44,994 -0.04** | 0.03 | 8,486 | | | ***p < 0.001, **p < 0.01, *p < 0.05 with Bonferroni correction. | | | | | | | jargon use has a significant positive relationship with citations, but the direction of this relationship differs across fields (Table 4, Appendix D.2). Alternatively, interdisciplinary impact considers the subfield composition of articles citing a target abstract. We use Leydesdorff et al. (2019)'s established formula, which they call DIV: $$\mathbf{DIV}({\mathcal{C}})={\frac{n}{N}}(1-\operatorname{Gini})\sum_{i,j\in{\mathcal{C}},i\neq j}{\frac{d_{i j}}{n(n-1)}},\ \ \mathbf{(4)}$$ where C is the set of subfields citing the abstract, n = |C|, N is the total number of subfields, and dij = 1 − cos(vi, vj ), where v are subfields vectorized using overall cross-subfield citation counts (Appendix D.2). The first component measures the fraction of citing subfields, the second uses the Gini coefficient to calculate balance of citation counts among C, and the third incorporates subfield similarity (Leydesdorff et al., 2019; Chen et al., 2022; Stirling, 1998). We run ordinary least squares regression on abstracts that are cited by at least two subfields, with DIV as the dependent variable. Discipline-specific words and senses have a negative relationship with DIV across fields that have highly distinctive language norms (Table 4). Thus, though jargon has a varying relationship with citation counts, our regression results suggest that it may generally impede the forging of interdisciplinary connections. ## 7 Related Work Computational sociolinguistics often focuses on social media (Nguyen et al., 2016), with less attention on situation-dependent language varieties, or *registers*, in scholarly communities (Agha, 2005). Here, language differences can indicate different factions of authors and disciplinary approaches (Ngai et al., 2018; West and Portenoy, 2016; Sim et al., 2012). In addition to our present work, a few studies have examined word meaning or use, such as semantic influence or novelty (Soni et al., 2021, 2022) and semantic uncertainty (McMahan and Evans, 2018). Research on lexical ambiguity in science also appears in education, with an emphasis on how to improve the teaching of overloaded terminology (Ryan, 1985; Cervetti et al., 2015). 
Other NLP studies of science have predicted responses to articles (Yogatama et al., 2011), measured impact and innovation (Gerow et al., 2018; Hofstra et al., 2020; McKeown et al., 2016), and classified topics' rhetorical functions (Prabhakaran et al., 2016). ## 8 Conclusion We use data-driven, interpretable methods to identify jargon, defined as discipline-specific word types and senses, across science at scale. By identifying senses, we are able to recall more words labeled as associated with a field in Wiktionary than with word types alone. We then map language norms across subfields, showing that fields with distinctive word types differ from those with distinctive word senses. Finally, we analyze implications of jargon use for communication with out-groups. We find that supposedly general-purpose venues have varying expectations around abstracts' use of jargon depending on the field, and jargon is negatively related to interdisciplinary impact. This suggests a potential opportunity for the reconsideration of abstract writing norms, especially for venues that intend to bridge disciplines. ## 9 Limitations Below, we outline several limitations of our work. Data coverage. Our claims are only valid for the datasets accessed in our study. We use the Microsoft Academic Graph (Sinha et al., 2015) and S2ORC, which is larger than other publiclyavailable scientific text corpora (Lo et al., 2020). However, these sources can differ from other collections of scientific text, because which journal/venues, sources, and resource types constitute "science" differs across academic literature search systems and databases (Gusenbauer and Haddaway, 2020; Ortega and Aguillo, 2014). In particular, since a substantial portion of S2ORC comes from scrapes of arXiv and PubMed, its coverage of computer science and medicine is better than that of other fields (Lo et al., 2020). Also, our coverage is limited to English articles. Past work has shown that citation-based metrics of impact favor articles written in English, and articles from non-Englishspeaking countries have different citation patterns compared to others (Liang et al., 2013; Liu et al., 2018; González-Alcaide et al., 2012). Finally, we recognize that MAG field of study labels are contestable and imperfect. For example, less than twothirds of ACL articles are labeled as *natural language processing*, and the most popular subfield in ICML is *mathematics* rather than *machine learning*. Token-level analyses. Another limitation of our study is that many scholarly terms are not single words or tokens, but rather phrases. Phrases are somewhat accounted for by measuring words' senses, since senses induced by language models reflect words' in-context use, including their use in discipline-specific phrases. For example, Table 3 shows that *title* has a sense specific to stereochemistry, and in abstracts, this word often occurs in the phrases title reaction or *title compound*. Phrases containing distinctive words are also somewhat accounted for by measuring individual words in the phrase. However, phrase-level measurements of jargon would likely still be useful for improving interpretability and downstream applications of our metrics, and so discipline-specific phrases are a promising avenue for future work. Compute. Science of science is interdisciplinary and involves a range of organizations and institutions. 
Not all researchers will have easy access to the computuational resources needed to replicate our study or apply our approach to data of the same scale. The most resource intensive step of our pipeline is when ScholarBERT predicts each instance of a vocabulary word's top 5 substitutes across CONTEMPORARY S2ORC and WIKISAM-PLE. This took approximately 90 GPU hours split across Nvidia RTX A6000 and Quadro RTX 8000 GPUs. ScholarBERT itself is a 770M-parameter BERT model (Hong et al., 2022), and generally our compute infrastructure included machines with 64 to 128 cores and 512 to 1024 GB of RAM. Social implications. In §6.2, we define "success" in two ways, both of which are based on citations. However, though citations are an important currency in science, they are imperfect signals of credit or impact. One article may cite another for reasons that span a range of significance, from brief mentions of related background to core motivation (Jurgens et al., 2018). In addition, associations between jargon use and scientific success may differ as success is redefined using indicators beyond citations. For example, success could be defined beyond scientific communities, such as findings that lead to societal change, products, and use (Bornmann, 2013). Finally, our study on the relationship between jargon and success is not causal, but associational and descriptive. ## 10 Ethical Considerations Data. With regards to data privacy, the dataset we use, S2ORC, is not anonymized, since entries for each article includes a list of author names. Even with the removal of author names, data can easily be linked to authors since abstracts are published online with attribution. We don't use author information in our research, and our outputs are aggregated over subsets of data. Still, we acknowledge that science of science research involving author information has the risk of judging research productivity and quality using metrics that may deemphasize some forms of contribution and labor, systemically disadvantaging some demographic groups. In addition, we did not receive the explicit consent of authors to use their content for our study, though the harms of this are minimized since the type of science we study is inherently a public-facing endeavor. S2ORC is released under a CC BY-NC 4.0 license, and its intended use is for NLP task development and science of science analysis. Any derivatives we produce share the same intended use and license. "Jargon". In this paper, we use *jargon* to refer to sets of words that are specific to a discipline. Jargon can be a neutral term when referring to scientific or technical language, but has negative connotations of being incomprehensible or undesirable when used to refer to community vernacular or entire language varieties. Thus, care should be taken when deciding when and how to use *jargon* to refer to language. ## 11 Acknowledgements We thank Misha Teplitskiy, Sandeep Soni, Isaac Bleaman, and Tal August for helpful conversations during the completion of this paper, and the Semantic Scholar team for their support in using and managing data. In addition, we thank our anonymous reviewers for their feedback. KK is grateful for the support of a Young Investigator Grant from the Allen Institute for Artificial Intelligence. ## References Asif Agha. 2005. *Registers of Language*, chapter 2. John Wiley & Sons, Ltd. Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7747– 7763, Online. Association for Computational Linguistics. Jannis Androutsopoulos. 2014. Languaging when contexts collapse: Audience design in social networking. Discourse, Context & Media, 4-5:62–73. Digital language practices in superdiversity. Giusepppe Attardi. 2015. Wikiextractor. https:// github.com/attardi/wikiextractor. Tal August, Dallas Card, Gary Hsieh, Noah A. Smith, and Katharina Reinecke. 2020a. Explain like I am a scientist: The linguistic barriers of entry to r/science. In *Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems*, CHI '20, page 1–12, New York, NY, USA. Association for Computing Machinery. Tal August, Lauren Kim, Katharina Reinecke, and Noah A. Smith. 2020b. Writing strategies for science communication: Data and computational analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5327–5344, Online. Association for Computational Linguistics. Tal August, Katharina Reinecke, and Noah A. Smith. 2022a. Generating scientific definitions with controllable complexity. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8298–8317, Dublin, Ireland. Association for Computational Linguistics. Tal August, Lucy Lu Wang, Jonathan Bragg, Marti A. Hearst, Andrew Head, and Kyle Lo. 2022b. Paper plain: Making medical research papers approachable to healthcare consumers with natural language processing. Allan Bell. 1984. Language style as audience design. Language in Society, 13(2):145–204. Allan Bell. 2002. *Back in style: reworking audience* design, page 139–169. Cambridge University Press. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615– 3620, Hong Kong, China. Association for Computational Linguistics. Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. 2008. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008. Lutz Bornmann. 2013. What is societal impact of research and how can it be assessed? A literature survey. Journal of the American Society for Information Science and Technology, 64(2):217–233. Kevin W Boyack, Richard Klavans, and Katy Börner. 2005. Mapping the backbone of science. *Scientometrics*, 64(3):351–374. T. S. Breusch and A. R. Pagan. 1979. A simple test for heteroscedasticity and random coefficient variation. Econometrica, 47(5):1287–1294. John A Bullinaria and Joseph P Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39(3):510–526. A Colin Cameron and Pravin K Trivedi. 2013. *Regression Analysis of Count Data*. Cambridge University Press. Gina N. Cervetti, Elfrieda H. Hiebert, P. David Pearson, and Nicola A. McClung. 2015. Factors that influence the difficulty of science words. Journal of Literacy Research, 47(2):153–185. Shiji Chen, Yanhui Song, Fei Shu, and Vincent Larivière. 2022. Interdisciplinarity and impact: The effects of the citation time window. *Scientometrics*, 127(5):2621–2642. Ido Dagan, Shaul Marcus, and Shaul Markovitch. 1993. 
Contextual word similarity and estimation from sparse data. In *31st Annual Meeting of the Association for Computational Linguistics*, pages 164–171, Columbus, Ohio, USA. Association for Computational Linguistics. Gabriel José de Carli and Tiago Campos Pereira. 2017. Multidisciplinarity: Widen discipline span of nature papers. *Nature*, 545(7654):289–289. Matan Eyal, Shoval Sadde, Hillel Taub-Tabib, and Yoav Goldberg. 2022. Large scale substitution-based word sense induction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4738–4752, Dublin, Ireland. Association for Computational Linguistics. Santo Fortunato, Carl T. Bergstrom, Katy Börner, James A. Evans, Dirk Helbing, Staša Milojevic, Alexander M. Petersen, Filippo Radicchi, ´ Roberta Sinatra, Brian Uzzi, Alessandro Vespignani, Ludo Waltman, Dashun Wang, and AlbertLászló Barabási. 2018. Science of science. *Science*, 359(6379):eaao0185. Jacob G. Foster, Andrey Rzhetsky, and James A. Evans. 2015. Tradition and innovation in scientists' research strategies. *American Sociological Review*, 80(5):875– 908. Benjamin Freeling, Zoë A. Doubleday, and Sean D. Connell. 2019. How can we boost the impact of publications? Try better writing. Proceedings of the National Academy of Sciences, 116(2):341–343. Benjamin S. Freeling, Zoë A. Doubleday, Matthew J. Dry, Carolyn Semmler, and Sean D. Connell. 2021. Better writing in scientific publications builds reader confidence and understanding. *Frontiers in Psychology*, 12. Matt Gardner, William Merrill, Jesse Dodge, Matthew Peters, Alexis Ross, Sameer Singh, and Noah A. Smith. 2021. Competency problems: On finding and removing artifacts in language data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1801–1813, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Aaron Gerow, Yuening Hu, Jordan Boyd-Graber, David M. Blei, and James A. Evans. 2018. Measuring discursive influence across scholarship. *Proceedings of the National Academy of Sciences*, 115(13):3308–3313. Gregorio González-Alcaide, Juan Carlos ValderramaZurián, and Rafael Aleixandre-Benavent. 2012. The impact factor in non-English-speaking countries. *Scientometrics*, 92(2):297–311. Michael Gusenbauer and Neal R. Haddaway. 2020. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. *Research Synthesis Methods*, 11(2):181–217. Andrew Head, Kyle Lo, Dongyeop Kang, Raymond Fok, Sam Skjonsberg, Daniel S. Weld, and Marti A. Hearst. 2021. Augmenting scientific papers with justin-time, position-sensitive definitions of terms and symbols. In *Proceedings of the 2021 CHI Conference* on Human Factors in Computing Systems, CHI '21, New York, NY, USA. Association for Computing Machinery. Joseph M Hilbe. 2011. *Negative binomial regression*. Cambridge University Press. Bas Hofstra, Vivek V. Kulkarni, Sebastian Munoz-Najar Galvez, Bryan He, Dan Jurafsky, and Daniel A. McFarland. 2020. The diversity-innovation paradox in science. *Proceedings of the National Academy of* Sciences, 117(17):9284–9291. Zhi Hong, Aswathy Ajith, Gregory Pauloski, Eamon Duede, Carl Malamud, Roger Magoulas, Kyle Chard, and Ian Foster. 2022. ScholarBERT: Bigger is not always better. David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. 2018. 
Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics, 6:391–406. Yea-Seul Kim, Jessica Hullman, Matthew Burgess, and Eytan Adar. 2016. SimpleScience: Lexical simplification of scientific terminology. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1066–1071, Austin, Texas. Association for Computational Linguistics. Ann Koopman. 2011. Nature launches new open access journal: Scientific reports. Thomas Jefferson University Library News. William Labov. 1973. *Sociolinguistic patterns*. 4. University of Pennsylvania Press. Vincent Larivière and Yves Gingras. 2010. On the relationship between interdisciplinarity and scientific impact. *Journal of the American Society for Information Science and Technology*, 61(1):126–131. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. *Transactions of the Association for Computational Linguistics*, 3:211–225. Loet Leydesdorff, Caroline S. Wagner, and Lutz Bornmann. 2019. Interdisciplinarity as diversity in citation patterns among journals: Rao-stirling diversity, relative variety, and the gini coefficient. Journal of Informetrics, 13(1):255–269. Liming Liang, Ronald Rousseau, and Zhen Zhong. 2013. Non-English journals and papers in physics and chemistry: Bias in citations? *Scientometrics*, 95(1):333–350. Fang Liu, Guangyuan Hu, Li Tang, and Weishu Liu. 2018. The penalty of containing more non-English articles. *Scientometrics*, 114(1):359–366. Yang Liu, Alan Medlar, and Dorota Głowacka. 2022. Lexical ambiguity detection in professional discourse. *Information Processing & Management*, 59(5):103000. Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel Weld. 2020. S2ORC: The semantic scholar open research corpus. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4969–4983, Online. Association for Computational Linguistics. Li Lucy and David Bamman. 2021. Characterizing English variation across social media communities with BERT. *Transactions of the Association for Computational Linguistics*, 9:538–556. Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In *Proceedings of the ACL 2012 System Demonstrations*, pages 25–30, Jeju Island, Korea. Association for Computational Linguistics. Alejandro Martínez and Stefano Mammola. 2021. Specialized terminology reduces the number of citations of scientific papers. Proceedings of the Royal Society B, 288(1948):20202581. Kathy McKeown, Hal Daume III, Snigdha Chaturvedi, John Paparrizos, Kapil Thadani, Pablo Barrio, Or Biran, Suvarna Bothe, Michael Collins, Kenneth R. Fleischmann, Luis Gravano, Rahul Jha, Ben King, Kevin McInerney, Taesun Moon, Arvind Neelakantan, Diarmuid O'Seaghdha, Dragomir Radev, Clay Templeton, and Simone Teufel. 2016. Predicting the impact of scientific concepts using full-text features. Journal of the Association for Information Science and Technology, 67(11):2684–2696. Peter McMahan and James Evans. 2018. Ambiguity and engagement. *American Journal of Sociology*, 124(3):860–912. Mostafa Mesgari, Chitu Okoli, Mohamad Mehdi, Finn Årup Nielsen, and Arto Lanamäki. 2015. "the sum of all human knowledge": A systematic review of scholarly research on the content of w ikipedia. Journal of the Association for Information Science and Technology, 66(2):219–245. 
Sonia K Murthy, Kyle Lo, Daniel King, Chandra Bhagavatula, Bailey Kuehl, Sophie Johnson, Jonathan Borchardt, Daniel S Weld, Tom Hope, and Doug Downey. 2022. Accord: A multi-document approach to generating diverse descriptions of scientific concepts. In Proceedings of the EMNLP 2022 System Demonstrations. Innocent Ndubuisi-Obi, Sayan Ghosh, and David Jurgens. 2019. Wetin dey with these comments? modeling sociolinguistic factors affecting code-switching behavior in nigerian online discussions. In *Proceedings of the 57th Annual Meeting of the Association for* Computational Linguistics, pages 6204–6214, Florence, Italy. Association for Computational Linguistics. Mark EJ Newman. 2016. Equivalence between modularity optimization and maximum likelihood methods for community detection. *Physical Review E*, 94(5):052315. Sing Bik Cindy Ngai, Rita Gill Singh, and Alex Chun Koon. 2018. A discourse analysis of the macrostructure, metadiscoursal and microdiscoursal features in the abstracts of research articles across multiple science disciplines. *PLOS ONE*, 13(10):1–21. Dong Nguyen, A. Seza Dogruöz, Carolyn P. Rosé, and ˘ Franciska de Jong. 2016. Survey: Computational sociolinguistics: A Survey. *Computational Linguistics*, 42(3):537–593. Keisuke Okamura. 2019. Interdisciplinarity revisited: evidence for research impact and dynamism. *Palgrave Communications*, 5(1):1–9. José Luis Ortega and Isidro F. Aguillo. 2014. Microsoft academic search and google scholar citations: Comparative analysis of author profiles. *Journal of the* Association for Information Science and Technology, 65(6):1149–1156. Umashanthi Pavalanathan and Jacob Eisenstein. 2015. Audience-Modulated Variation in Online Social Media. *American Speech*, 90(2):187–213. Hao Peng, Qing Ke, Ceren Budak, Daniel M Romero, and Yong-Yeol Ahn. 2021. Neural embeddings of scholarly periodicals reveal complex disciplinary organizations. *Science Advances*, 7(17):eabb9004. Pontus Plavén-Sigray, Granville James Matheson, Björn Christian Schiffler, and William Hedley Thompson. 2017. Research: The readability of scientific texts is decreasing over time. *eLife*, 6:e27725. Vinodkumar Prabhakaran, William L. Hamilton, Dan McFarland, and Dan Jurafsky. 2016. Predicting the rise and fall of scientific topics from trends in their rhetorical framing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1170– 1180, Berlin, Germany. Association for Computational Linguistics. Tzipora Rakedzon, Elad Segev, Noam Chapnik, Roy Yosef, and Ayelet Baram-Tsabari. 2017. Automatic jargon identifier for scientists engaging with the public and science communication educators. PLOS ONE, 12(8):1–13. Abhinav Ramesh Kashyap, Devamanyu Hazarika, MinYen Kan, and Roger Zimmermann. 2021. Domain divergences: A survey and empirical analysis. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1830–1849, Online. Association for Computational Linguistics. Joseph Reagle and Jackie Koerner. 2020. *Wikipedia@* 20: Stories of an incomplete revolution. The MIT Press. Martin Rosvall and Carl T Bergstrom. 2008. Maps of random walks on complex networks reveal community structure. *Proceedings of the national academy* of sciences, 105(4):1118–1123. Janet N. Ryan. 1985. The language gap: Common words with technical meanings. *Journal of Chemical* Education, 62(12):1098. B. C. Satishkumar, P. John Thomas, A. Govindaraj, and C. N. 
R. Rao. 2000. Y-junction carbon nanotubes. Applied Physics Letters, 77(16):2530. Yanchuan Sim, Noah A. Smith, and David A. Smith. 2012. Discovering factions in the computational linguistics community. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 22–32, Jeju Island, Korea. Association for Computational Linguistics. Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June (Paul) Hsu, and Kuansan Wang. 2015. An overview of microsoft academic service (mas) and applications. In *Proceedings of the 24th* International Conference on World Wide Web, WWW '15 Companion, page 243–246, New York, NY, USA. Association for Computing Machinery. Sandeep Soni, David Bamman, and Jacob Eisenstein. 2022. Predicting long-term citations from short-term linguistic influence. Sandeep Soni, Kristina Lerman, and Jacob Eisenstein. 2021. Follow the leader: Documents on the leading edge of semantic change get more citations. Journal of the Association for Information Science and Technology, 72(4):478–492. Andrew Stirling. 1998. On the economics and analysis of diversity. *Science Policy Research Unit (SPRU),* Electronic Working Papers Series, Paper, 28:1–156. Raghuram Vadapalli, Bakhtiyar Syed, Nishant Prabhu, Balaji Vasan Srinivasan, and Vasudeva Varma. 2018. When science journalism meets artificial intelligence : An interactive demonstration. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 163–168, Brussels, Belgium. Association for Computational Linguistics. Richard Van Noorden. 2015. Interdisciplinary research by the numbers. *Nature*, 525(7569):306–307. Harold Varmus, Patrick Brown, and Michael Eisen. 2000. Open letter. *PLOS One*. Daril A Vilhena, Jacob G Foster, Martin Rosvall, Jevin D West, James Evans, and Carl T Bergstrom. 2014. Finding cultural holes: How structure and culture diverge in networks of scholarly communication. Sociological Science, 1:221. Dashun Wang and Albert-László Barabási. 2021. The h-Index, page 17–27. Cambridge University Press. Kuansan Wang, Zhihong Shen, Chiyuan Huang, ChiehHan Wu, Darrin Eide, Yuxiao Dong, Junjie Qian, Anshul Kanakia, Alvin Chen, and Richard Rogahn. 2019. A review of Microsoft Academic Services for science of science studies. *Frontiers in Big Data*, 2. Jevin West and Jason Portenoy. 2016. Delineating fields using mathematical jargon. In *Proceedings of the* ![13_image_0.png](13_image_0.png) Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL), pages 63–71. Dani Yogatama, Michael Heilman, Brendan O'Connor, Chris Dyer, Bryan R. Routledge, and Noah A. Smith. 2011. Predicting a scientific community's response to an article. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 594–604, Edinburgh, Scotland, UK. Association for Computational Linguistics. Justine Zhang, William Hamilton, Cristian DanescuNiculescu-Mizil, Dan Jurafsky, and Jure Leskovec. 2017. Community identity and user engagement in a multi-community landscape. *Proceedings of the* International AAAI Conference on Web and Social Media, 11(1):377–386. ## A Fields Of Study Figure 7 shows the number of valid abstracts in each top-level MAG field of study *before* subsampling a similar number of abstracts from each subfield. This figure can be compared to Figure 2 to show how the distribution of fields changed after sampling. 
The following lists the subfields, or level 1 MAG FOS, used in our study. The same subfield may fall under multiple fields. - Art (6 children): art history, classics, humanities, visual arts, literature, aesthetics. - **Biology** (31 children): computational biology, biochemistry, bioinformatics, cancer research, evolutionary biology, anatomy, molecular biology, pharmacology, immunology, virology, ecology, agronomy, botany, toxicology, food science, microbiology, biological system, agroforestry, biophysics, animal science, paleontology, cell biology, physiology, endocrinology, horticulture, genetics, biotechnology, neuroscience, fishery, zoology, biology (other). - **Business** (10 children): international trade, accounting, risk analysis (engineering), process management, actuarial science, marketing, industrial organization, finance, advertising, business (other). - **Chemistry** (20 children): polymer chemistry, molecular physics, biochemistry, organic chemistry, physical chemistry, chemical physics, nuclear chemistry, medicinal chemistry, photochemistry, combinatorial chemistry, computational chemistry, analytical chemistry, food science, chromatography, mineralogy, inorganic chemistry, crystallography, stereochemistry, environmental chemistry, chemistry (other). - **Computer Science** (32 children): natural language processing, software engineering, theoretical computer science, embedded system, computer security, programming language, data science, computer vision, computer network, human–computer interaction, world wide web, information retrieval, parallel computing, operating system, computer hardware, multimedia, computer graphics (images), library science, real-time computing, artificial intelligence, database, distributed computing, simulation, telecommunications, internet privacy, pattern recognition, machine learning, knowledge management, data mining, speech recognition, algorithm, computer science (other). - **Economics** (28 children): international trade, labour economics, political economy, natural resource economics, industrial organization, monetary economics, economic system, economy, operations management, demographic economics, management, finance, management science, environmental resource management, accounting, agricultural economics, economic growth, actuarial science, financial economics, market economy, socioeconomics, environmental economics, econometrics, law and economics, development economics, public economics, microeconomics, economics (other). - **Engineering** (35 children): engineering ethics, software engineering, control engineering, embedded system, nuclear engineering, reliability engineering, operations research, transport engineering, engineering drawing, biomedical engineering, engineering management, electronic engineering, automotive engineering, forensic engineering, operations management, mechanical engineering, petroleum engineering, process engineering, systems engineering, management science, civil engineering, control theory, simulation, telecommunications, geotechnical engineering, pulp and paper industry, process management, environmental engineering, marine engineering, chemical engineering, manufacturing engineering, waste management, structural engineering, electrical engineering, engineering (other). - **Environmental Science** (7 children): environmental resource management, environmental planning, environmental engineering, agroforestry, soil science, environmental protection, environmental science (other). 
- **Geography** (7 children): environmental planning, meteorology, archaeology, physical geography, remote sensing, environmental protection, geography (other). - **Geology** (14 children): atmospheric sciences, geochemistry, geomorphology, soil science, hydrology, oceanography, climatology, mineralogy, geotechnical engineering, seismology, petroleum engineering, remote sensing, paleontology, geology (other). - **History** (5 children): art history, classics, ancient history, archaeology, history (other). - **Materials Science** (6 children): polymer chemistry, optoelectronics, composite material, nanotechnology, metallurgy, materials science (other). - **Mathematics** (17 children): geometry, topology, combinatorics, operations research, mathematical optimization, pure mathematics, control theory, discrete mathematics, statistics, algebra, mathematics education, mathematical physics, applied mathematics, econometrics, mathematical analysis, algorithm, mathematics (other). - **Medicine** (45 children): audiology, gerontology, pediatrics, obstetrics, medical physics, urology, radiology, gynecology, dentistry, cancer research, cardiology, veterinary medicine, biomedical engineering, medical education, general surgery, andrology, oncology, dermatology, traditional medicine, orthodontics, anatomy, pharmacology, medical emergency, anesthesia, gastroenterology, immunology, virology, risk analysis (engineering), emergency medicine, surgery, psychiatry, physiology, nursing, endocrinology, clinical psychology, intensive care medicine, physical therapy, nuclear medicine, family medicine, ophthalmology, environmental health, internal medicine, physical medicine and rehabilitation, pathology, medicine (other). - **Philosophy** (6 children): environmental ethics, humanities, epistemology, aesthetics, linguistics, philosophy (other). - **Physics** (24 children): mechanics, atmospheric sciences, molecular physics, astrophysics, acoustics, medical physics, classical mechanics, chemical physics, nuclear physics, optoelectronics, quantum mechanics, theoretical physics, optics, computational physics, particle physics, atomic physics, statistical physics, meteorology, nuclear magnetic resonance, thermodynamics, mathematical physics, astronomy, condensed matter physics, physics (other). - **Political Science** (4 children): public relations, public administration, law, political science (other). - **Psychology** (15 children): mathematics education, cognitive psychology, criminology, clinical psychology, applied psychology, social psychology, communication, pedagogy, psychoanalysis, neuroscience, developmental psychology, psychiatry, psychotherapist, cognitive science, psychology (other). - **Sociology** (11 children): social science, criminology, demography, law and economics, communication, pedagogy, political economy, gender studies, socioeconomics, media studies, sociology (other). ## B Dataset Filtering We perform the following preprocessing steps of S2ORC to create CONTEMPORARY S2ORC: - **Venue.** We consolidate the *venue* and *journal* keys of each article's metadata. We use whichever label is non-empty, and only a small fraction (0.08%) of articles with valid abstracts have *venue* and *journal* that differ, in which case we use use the article's *journal*. We handle venue names case insensitively, and also remove tokens in their names that contain numbers to consolidate years and editions. - **Time**. Our study focuses on contemporary science, which are abstracts published during 2000-2019. 
S2ORC contains some abstracts from 2020 and onwards, but dates past 2020 are likely metadata processing errors. We remove 47.6 million articles outside of this time range. - **Valid metadata**. We remove 42.5 million articles with missing abstracts, titles, or journal and venue labels. - **Language.** We remove 77,133 articles from 925 non-English journals or venues, which are those that have less than 80% of their articles in English, using Lui and Baldwin (2012)'s language classifier. - **Field of study.** Medicine fields dominate S2ORC abstracts. We balance the dataset by taking a sample of 50k articles per subfield. For subfields that are too small to sample or articles that have field-level but no subfieldlevel labels, we categorize these in an OTHER subfield under their parent field. Since articles can be labeled with multiple FOS, our sample is not perfectly stratified, but prevents large subfields from dominating calculations of the general prevalence of words in English. In total we identify specialized language across 293 subfields that fall under 19 fields (listed in Appendix A). ## C Validation Details Here, we include two additional figures to supplement §4. Figure 8 shows a screenshot of a Wiktionary entry for the word *ensemble*, which is overloaded with ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) several labeled definitions.10 Some labels, such as collective, show grammatical information, while others indicate restricted usage to different fields, dialects, or contexts.11 We match these labels to MAG fields and subfields when evaluating recall of words marked as discipline-specific by Wiktionary. In the main text, we show that sense NPMI is able to recall more Wiktionary words at the same threshold than type NPMI. In addition, sense NPMI scores are higher in Wiktionary-labeled fields than random ones (Figure 9). ## D Additional Experimental Details D.1 Cutoff Decision We generated Figure 5 with additional values of the NPMI cutoff c, such as c = 0.2, and achieve similar conclusions (Figure 10). That is, these results are similar when it comes to which fields tend to adjust their language between general-purpose and discipline-focused venues. In the main text, we 10https://en.wiktionary.org/wiki/ensemble 11https://en.wiktionary.org/wiki/Template:label ![16_image_0.png](16_image_0.png) ![16_image_1.png](16_image_1.png) usually use c = 0.1, as positive NPMI values indicate association, but NPMI values too close to 0 would instead lean towards independence. Though NPMI ranges from -1 to 1, the outputted scores for various subfields tended to range from -0.5 to 0.5, and some include bimodal behavior where the latter peak of the distribution usually occurs after c = 0.2 (Figure 11). We assume that this latter peak is indicative of jargon. Thus, we experimented with cutoffs that would separate the initial peak around 0 and a secondary peak in the positive NPMI value range, if any. ## D.2 Scholarly Success D.2.1 Subfield Similarity To calculate subfield similarity, we first create a (N + 1) × (N + 1) citation matrix, where N is the total number of subfields, and the additional row and column represents articles in unknown subfields. Rows in this matrix represent subfields that are cited, and columns are citing subfields. This matrix is generated using all articles published in S2ORC within the years 2000 and 2019 that have inbound citations. For subfield similarity calculations, we use the rows to represent each subfield. 
For example, the nearest neighbors via cosine similarity of the row representing *chemical engineering* include polymer chemistry, *polymer science*, and inorganic chemistry.

## D.2.2 Regressions

We ran a few statistical tests to determine what regressions to use.

**Citation counts.** We run both Poisson regressions and negative binomial regressions on citation count data, as these generalized linear models are typically used to model count data. Negative binomial regression is used for data that shows overdispersion, when the variance of the dependent variable exceeds the mean. We calculate the overdispersion ratio φ of Poisson regressions for each field:

$$\phi=\frac{\text{Pearson's }\chi^{2}}{\text{residual degrees of freedom}}.$$

Since it exceeds 1 for each field's regression, there is overdispersion in our data, and thus we use negative binomial regressions for citation counts. Negative binomial regressions require choosing a constant α which is used to express the variance in terms of the mean. We determine α by inputting the fitted rate vector from the Poisson regression into an auxiliary OLS regression without a constant (Cameron and Trivedi, 2013). The α we obtain from this for each regression is significant for all fields except for Art and Philosophy (p < 0.01, right-tailed t-test).

**Interdisciplinary impact.** We run ordinary least squares (OLS) regressions for this dependent variable. OLS involves several assumptions: randomly sampled data, linearity, exogeneity, non-collinearity, and homoskedasticity. We check for linearity and exogeneity by comparing residuals and fitted values, non-collinearity by checking that the variance inflation factors of covariates do not exceed 5, and homoskedasticity by running a Breusch-Pagan test (Breusch and Pagan, 1979). We find that we satisfy all assumptions except homoskedasticity. Due to this, we also run a weighted least squares regression to check the robustness of our OLS results, and achieve similar coefficients.

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? Section 9
✓ A2. Did you discuss any potential risks of your work? Section 10
✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1
✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?**

1
✓ B1. Did you cite the creators of artifacts you used? 2
✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 10
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 10
✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 10
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 2
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created?
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 2 (dataset size), 4 (Wiktionary \# of examples), 6 (\# of observations and venues in experiments) C ✓ **Did you run computational experiments?** 4, 5, 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 9 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4, 5, 6 ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We use a word sense induction model that isn't an existing package, but was open-source, and we detailed any changes and parameter settings we used for that model. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
matzken-etal-2023-trade
Trade-Offs Between Fairness and Privacy in Language Modeling
https://aclanthology.org/2023.findings-acl.434
Protecting privacy in contemporary NLP models is gaining in importance. So does the need to mitigate social biases of such models. But can we have both at the same time? Existing research suggests that privacy preservation comes at the price of worsening biases in classification tasks. In this paper, we explore the extent to which this tradeoff really holds when we incorporate both privacy preservation and de-biasing techniques into training text generation models. How does improving the model along one dimension affect the other dimension as well as the utility of the model? We conduct an extensive set of experiments that include bias detection, privacy attacks, language modeling, and performance on downstream tasks.
## Trade-Offs Between Fairness And Privacy In Language Modeling Cleo Matzken1and **Steffen Eger**2and **Ivan Habernal**1 1Trustworthy Human Language Technologies Department of Computer Science, Technical University of Darmstadt https://www.trusthlt.org 2Natural Language Learning Group Faculty of Technology, Universität Bielefeld https://nl2g.github.io ## Abstract Protecting privacy in contemporary NLP models is gaining in importance. So does the need to mitigate social biases of such models. But can we have both at the same time? Existing research suggests that privacy preservation comes at the price of worsening biases in classification tasks. In this paper, we explore the extent to which this tradeoff really holds when we incorporate both privacy preservation and de-biasing techniques into training text generation models. How does improving the model along one dimension affect the other dimension as well as the utility of the model? We conduct an extensive set of experiments that include bias detection, privacy attacks, language modeling, and performance on downstream tasks.1 ## 1 Introduction Fairness and privacy are two important concepts in contemporary NLP. Unfairness caused by demographic biases can lead to unequal performance for different user groups (Tatman, 2017), misidentification of speakers and their needs (Perez, 2019), or propagation of hurtful stereotypes (Agarwal et al., 2019; Nozza et al., 2022). In addition, when NLP models leak data, it can lead to the disclosure of sensitive personal data which can hurt individuals (Carlini et al., 2019). In an attempt to provide both privacy and fairness in NLP classifiers, existing research suggests an inherent trade-off between the two dimensions (Farrand et al., 2020; Hansen et al., 2022; Bagdasaryan et al., 2019; Cummings et al., 2019). Introducing privacy may amplify bias in some social groups more than others, more specifically those groups that were already underrepresented and therefore a minority in the data. For example, Bagdasaryan et al. (2019) find that classifiers across 1Our code is publicly available: https://github. com/cleolotta/fair-and-private-lm four diverse classification tasks perform worse for underrepresented groups due to the effects of gradient clipping implemented in differential privacy (Dwork and Roth, 2014). However, current research on trade-offs between privacy and fairness in large language models remains inconclusive. In this work, we aim to fill this research gap by investigating *language modeling* under privacy and de-biasing paradigms. Our research deals with scenarios in which there is arguably no quantitative minority group (our focus is on gender bias), as opposed to labeled data in fine-tuning used in previous works. We ask how fairness and privacy affect each other in this context, exploring differential privacy and two different debiasing objectives during fine-tuning stages. We examine how each objective in isolation and jointly affects (1) privacy, measured in terms of data leakage, and (2) biases, evaluated across three popular recent bias evaluation benchmarks. Specifically, our paper aims to answer the following research questions: RQ1: Does training with a differential privacy objective lead to fairer LMs? RQ2: Does training with debiasing objective lead to less leakage? RQ3: How does training with debiasing as well as DP objective affect fairness and privacy? RQ4: How does training with debiasing and/or DP objective affect the language ability in the resulting model? 
RQ5: How does training with debiasing and/or DP objective affect downstream NLU performance? To our best knowledge, ours is the first study exploring such effects on language modeling. ## 2 Related Work Bias detection A test for detecting biases in word embeddings is the Word Embedding Association Test (WEAT; Caliskan et al. (2017)) which computes the association between two target word sets with words from two attribute sets in vector space. An extension of this to sentence-level representations was created by May et al. (2019). Bias Evaluation Corpus with Professions (BECPro; Bartl et al., 2020) and Discovery of Correlations (DisCo; Webster et al., 2020) are datasets that use predefined templates to determine gender bias with regard to different professions and other characteristics. Zhao et al. (2018) further introduced the WinoBias benchmark in which a corpus — based on the Winograd Challenge (Levesque et al., 2012) - follows a certain scheme, each containing a person, a pronoun and an occupation. A model would pass the WinoBias test if the two binary genders were hit with the same accuracy. StereoSet (Nadeem et al., 2020) represents a crowd-sourced dataset through which it can be determined with what proportion a model meets a stereotypical association in terms of gender, occupation, race, and religion instead of the anti-stereotypical one. Biasin-Bios (De-Arteaga et al., 2019) uses a dataset created from biographies found on the web containing a person's profession and asks a model to read the biographies and recognise the profession without making gender-based assumptions. Bias-mitigation methods Several methods have been proposed for mitigating a bias. Webster et al. (2020) proposed dropout as debiasing technique and aimed at reducing gender correlations through increasing dropout regularization. Counterfactual Data Augmentation (CDA; Zhao et al. 2018) is a commonly used approach (Barikeri et al., 2021; Lauscher et al., 2021; Webster et al., 2020) in which a dataset is practically rebalanced by exchanging bias attribute words (e.g. pronouns) in an automated process. Ravfogel et al. (2020) proposed another method to mitigate biases in word embeddings, namely iterative nullspace projection (INLP). INLP aims to find a linear guardian function that removes the linear dependency between word embeddings and their associated protected attributes, which should not be considered in the decision of a fair classifier. Self-Debias (Schick et al., 2021) poses a post-hoc text generation debiasing technique that does not change the model's internal representations. In this approach, the model is asked to make a biased statement, instead of an unbiased statement. The resulting probability distribution is then used to change the model's initial output distribution. Differential privacy To avoid the leakage of sensitive data through language models, methods have been introduced to protect the privacy of the data. This includes Differential Privacy (DP; Dwork and Roth, 2014), which has been used in many domains (Erlingsson et al., 2014; Abowd, 2018). Abadi et al. (2016) have introduced DP Stochastic Gradient Descent (DP-SGD) to implement DP directly in the training of language models. The disadvantage of it, though, is high computational and memory overhead which Yu et al. (2021b) tried to tackle with their approach of parameterized gradient perturbation (RGP). 
They created a low-dimensional projection of the gradient of each layer's weight matrix and then introduced privacy by clipping and adding noise to these low-dimensional gradients. Shi et al. (2021) further elaborated the influence of privacy on the utility of a model and emphasized the importance of understanding the trade-off between privacy and utility. To improve utility, they introduced the approach of selective-DP (S-DP) for RNN-based language models and thereby allowed different attributes in the data to have different privacy levels. Privacy attacks There are indications that models unintentionally memorize information which introduces a risk of information leakage (Carlini et al., 2021). Nasr et al. (2019) define privacysensitive leakage of a model as the information an adversary can learn from the model about the training data that the adversary cannot infer from other models trained on other data from the same distribution. A method for quantifying the leakage of a model is through *Membership Inference* Attacks. These can be divided into the kind of access the attacker has to the deep learning algorithm and therefore to infer information - into blackbox (Shokri et al., 2017) and whitebox inferences attacks (Nasr et al., 2019). In the blackbox setting, the attacker has access only to the output of the model whereas in the whitebox setting, the attacker obtains the model f(x; W) along with all parameters needed for the prediction. Mireshghallah et al. (2022a) used the whitebox setting in their approach of reference-based likelihood ratio attacks (Murakonda et al., 2021; Ye et al., 2021; Carlini et al., 2022). For that, they determined the likelihood of a sample under the target model and the likelihood of a sample under a reference model. Using a test statistic based on the ratio between the likelihoods, they decided whether a sample belongs to the training dataset of the target model or not. ## 3 Methods, Metrics, And Datasets In the following, we introduce (1) datasets and methods to measure **bias**, (2) techniques to measure **privacy**, and (3) datasets to model the **language modeling ability** of our language models used in our work. Bias evaluation We employ three recent popular benchmarks to evaluate bias in language models. BEC-Pro (Bartl et al., 2020) is a dataset containing 5,400 English sentences to capture gender bias with respect to professions. The sentences in the corpus follow a pattern in which a gender-denoting noun phrase or h person word i and a h profession i must be included. The components of the corpus and how they were used to build it can be found in Appendix E. Since we use GPT-2 in our work, which can only make predictions sequentially, we make use of the 5,400 sentences of the BEC-Pro dataset in simplified form. Precisely, we do not compare the predictions for sentences with different masking, but only the prediction for a sentence with male target token and the corresponding sentence with female target token, e.g., This man is a carpenter - This **woman** is a carpenter We then calculate the bias from the ratio of the male-dominated sentences among all sentences in the dataset. Male-dominated means that a male target token is predicted (female-dominated is defined analogously). Consequently, a model that treats genders equally in terms of occupations has a score of 50% and shows a bias against women (men) if the score is above (below) 50%. 
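To make this simplified scoring concrete, the sketch below compares full-sentence GPT-2 likelihoods of the male-target and female-target variant of each BEC-Pro sentence and reports the share of male-dominated pairs. It is only an illustrative reading of the procedure (comparing sentence likelihoods rather than single target-token probabilities is an assumption here), not the exact evaluation code.

```python
# Minimal sketch: score BEC-Pro pairs by comparing GPT-2 likelihoods of the
# male-target sentence and the corresponding female-target sentence.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()

@torch.no_grad()
def sentence_log_likelihood(sentence: str) -> float:
    enc = tokenizer(sentence, return_tensors="pt")
    # With labels=input_ids the model returns the mean next-token cross-entropy.
    loss = model(**enc, labels=enc["input_ids"]).loss
    return -loss.item() * (enc["input_ids"].shape[1] - 1)  # total log-likelihood

def bec_pro_score(sentence_pairs) -> float:
    """Percentage of pairs where the male variant is more probable.
    50% corresponds to equal treatment of the two binary genders."""
    male_dominated = sum(
        sentence_log_likelihood(male) > sentence_log_likelihood(female)
        for male, female in sentence_pairs
    )
    return 100.0 * male_dominated / len(sentence_pairs)

pairs = [("This man is a carpenter.", "This woman is a carpenter.")]  # illustrative pair
print(f"male-dominated: {bec_pro_score(pairs):.1f}%")
```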
Sentence Encoder Association Test (SEAT) (May et al., 2019) SEAT is an intrinsic bias benchmark and an extension of the Word Embedding Association Test (WEAT; Caliskan et al., 2017). WEAT is used to detect biases in static word embedding spaces. It computes the differential association between two target word sets A (e.g., masculine words) and B (e.g., feminine words) with terms from two attribute sets X (e.g., mathematical terms) and Y (e.g., art terms). In our case, we are interested in the target and attribute sets that relate genders to certain stereotypical counter-concepts, such as career and family (WEAT 6) or math and art (WEAT 7). WEAT determines whether the representations of words from an attribute word are closer to those of words from a specific target set. Thus, if the representations of the female attribute words are closer to those of the art target attributes, or vice versa, this could indicate a bias. We relegate the formal test statistics for WEAT to Appendix E. May et al. (2019) extended the approach of Caliskan et al. (2017) to a sentence level by inserting the attribute and target words from WEAT into template synthetic sentences such as "This is a[n] h word i". A complete list of the SEAT tests that we used for evaluation can be found in Appendix E. StereoSet (Nadeem et al., 2020) is a large-scale English dataset used to detect stereotypes in pretrained language models. Nadeem et al. (2020) argue that a language model should be able to judge the sentence "Our housekeeper is a Mexican" (stereotype) as more probable than "Our housekeeper is a banana" (language modeling ability) and yet at the same time with the same probability as "Our housekeeper is an American" (antistereotype). Based on this principle, they created the Context Association Test (CAT), which measures both the language modeling ability and the stereotypical bias of a model. Examples can be found in Appendix E. To evaluate CAT, Nadeem et al. (2020) proposed two scores, the language modeling score (lms) and the stereotype score (ss). A model would have an lms of 100% if it always chose the meaningful context over the meaningless one. The ss would ideally be 50%, namely if the model preferred neither stereotypical nor anti-stereotypical associations. Indeed, the ss of gender would be the proportion of examples in which the model prefers stereotypical associations over anti-stereotypical associations. ## 3.1 Privacy Attack To heuristically examine the *leakage* in our models, we use reference-based likelihood ratio attacks (Mireshghallah et al., 2022a,b; Carlini et al., 2022). These use a hypothesis test to guess whether a particular data point was used to train a target model. To perform the attack, a model Mθ is trained on the dataset D sampled from the general population distribution. We then simulate an attack on the trained model in the whitebox setting, i.e., with complete access to the model, including the prediction f(x; W), along with all its parameters. Following Mireshghallah et al. (2022b), we use a pre-trained but not finetuned GPT-2 as reference model Rθ. Figure 1 illustrates the procedure. During the attack, an adversary wants to determine for each sample x from dataset D whether it comes from the training dataset of the model under attack. To do this, each sample x is fed into our fine-tuned model and into the reference model in turn, giving us the likelihoods PrM(x) and PrR(x). When evaluating the leakage of the models trained with CDA, we slightly adjust the attack. 
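Before the CDA-specific adjustment (described right after this sketch), the basic reference-based attack can be summarized in code. This is a hedged sketch, not the released implementation: the model path, helper names, and the use of average token log-probabilities as the per-sample likelihood are illustrative assumptions.

```python
# Sketch of a reference-based likelihood ratio attack: compare each sample's
# likelihood under the fine-tuned target model M and the pre-trained reference R.
import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
target_model = GPT2LMHeadModel.from_pretrained("path/to/finetuned-gpt2").eval()  # M (illustrative path)
reference_model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()          # R (pre-trained GPT-2)

@torch.no_grad()
def avg_log_likelihood(model, text: str) -> float:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    return -model(**enc, labels=enc["input_ids"]).loss.item()  # average token log-prob

def log_likelihood_ratio(text: str) -> float:
    # log LR(x) = log Pr_R(x) - log Pr_M(x); small values indicate membership
    return avg_log_likelihood(reference_model, text) - avg_log_likelihood(target_model, text)

def pick_threshold(validation_texts, alpha=0.10):
    # Highest threshold whose false positive rate on held-out samples stays <= alpha.
    scores = np.array([log_likelihood_ratio(x) for x in validation_texts])
    return np.quantile(scores, alpha)

def mia_recall(training_texts, threshold) -> float:
    # Fraction of true training members classified as members (ratio below threshold).
    scores = np.array([log_likelihood_ratio(x) for x in training_texts])
    return float(np.mean(scores < threshold))
```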
More specifically, the attacker still uses the general data distribution for the attack as this represents the real and potentially sensitive data. However, the target model uses the data it was trained on, namely the augmented data, for computing the loss. Figure 5 in Appendix H illustrates this in more detail. With PrM(x) and PrR(x), the likelihood ratio LR(x) = PrR(x) PrM(x) is then formed. If this ratio is smaller than a threshold t, we classify x as a member in the training dataset and vice versa. We compute the threshold t, like Mireshghallah et al. (2022b), by computing LR(x) for all x in the validation set and then choosing the highest threshold at which the false positive rate (over training and validation members) does not exceed α = 10%. In the results on our experiments, we report the Membership Inference Attack Recall (MIA Recall). The higher the MIA recall, the higher the leakage in the model investigated. ## 3.2 Model Utility Evaluation We use the General Language Understanding Evaluation (GLUE; Wang et al., 2018) benchmark as a **downstream task**. It consists of nine different English Natural Language Understanding (NLU) tasks to ensure that a model is not exclusively useful for solving a single task. For evaluating the language modeling capabilities, we use perplexity in addition to Nadeem et al.'s (2020) Language Model Score. ## 4 Experiments 4.1 Setup We conducted a total of six experimental setups as illustrated in Figure 2 and ran them on a Nvidia A100 Tensor Core GPU with 40 gigabytes of graphics memory. Data We choose the BookCorpus (Zhu et al., 2015) for our fine-tuning dataset which was built from 11,038 free books from the web written by unpublished authors. We adapt the approach of Lauscher et al. (2021) in creating the training dataset by uniformly subsampling the BookCorpus; more precisely, we reduce the entire dataset to approximately 6% of its original size and skip sentences with less than four tokens. This gives us about 4.6 million sentences that we further split into train-dev 80:20. In doing so, we obtain roughly 3.6 million training sentences. Models and baselines The basis for our trainings is GPT-2-medium (Radford et al., 2019) from the Transformers library of huggingface2, to which we refer to as GPT-2. We were not able to determine the leakage for the pre-trained GPT-2 with a whitebox membership inference attack, as this would have required us to run it on all originally used training data. To still have a comparable model to the pre-trained GPT-2 of huggingface that is neither trained with our debiasing nor DP methods, but can still be analyzed for its leakage, we create a baseline by training the huggingface GPT-2 on our subset of the BookCorpus for 3 epochs with a batch size of 2 and a gradient accumulation with a step size of 8. Training We further train GPT-2 with the different objectives that can be found in Figure 2. All models are trained for 3 epochs with a learning rate of 1e-05. Since training GPT-2 with DP requires too much GPU memory for the computational resources we have, we reduce the number of trainable parameters with LoRA (Hu et al., 2021) to 0.393 million.3 For reasons of comparability, we consequently use the same reduced number of trainable parameters in all experiments. Debiasing training We use two different bias mitigation methods in our experiments, namely CDA (see Appendix B) and Dropout (Webster et al., 2020). In both cases, we perform another phase of fine-tuning. 
For CDA, we use the counterfactually augmented dataset, and for Dropout, we use the original dataset but increase dropout regularization, more specifically to the value 0.15 instead of the default of 0.1, as proposed by Meade et al. (2021). For CDA, we use two-sided CDA, meaning that both the augmented and the original example are left in the dataset (Meade et al., 2021). More specifically, we first tokenize the text and then truncate it into chunks of size 512. This is followed by augmenting each chunk as necessary. All CDA and Dropout models are trained with a batch size of 2 and a gradient accumulation step size of 8.

Privacy training For implementing DP, we use the open-source PyTorch library Opacus (Yousefpour et al., 2021) and the dp-transformers repository (Wutschitz et al., 2022). All training with privacy as objective, either standalone or combined with debiasing, uses a batch size of 2 and gradient accumulation steps of size 128.

## 4.2 Results

We target five research questions which we describe and answer in the following.

## RQ1: Does training with a differential privacy objective lead to fairer LMs?

Table 1 lists bias results on SEAT (averaged over all SEAT subsets; individual results are in Appendix G), StereoSet, and BEC-Pro. To answer the RQ, we look at row (iii), finding that DP has no or negligible effect on bias in our case. Besides privacy, we also look at the results of debiasing on fairness. Surprisingly, Dropout (row ii) substantially increases bias and CDA (row i) has a mixed effect across bias benchmarks. We discuss this in the limitations section. The baseline model - our own GPT-2 model which we pre-trained on the BookCorpus - has a substantially higher bias than the original GPT-2. Dropout + DP has no effect on bias on average.

| | SEAT | Stereo | BEC-Pro |
|--------------------|------|--------|---------|
| (0) Baseline | 0.2 | 66.5 | 59.1 |
| (1) GPT-2 | 0.1 | 66.2 | 43.7 |
| (i) + CDA | 0.3 | 66.2 | 55.1 |
| (ii) + Dropout | 0.2 | 66.9 | 66.6 |
| (iii) + DP | 0.1 | 66.2 | 43.6 |
| (a) + CDA + DP | 0.1 | 66.1 | 43.7 |
| (b) + Dropout + DP | 0.1 | 66.2 | 43.7 |

Table 1: Bias results on SEAT (avg. effect size), StereoSet (stereotype score), and BEC-Pro for all models.

## RQ2: Does training with debiasing objective lead to less leakage?

The MIA Recall values are listed in Table 2. For computational reasons, we only compare the baseline, CDA, Dropout, and DP models. DP has the lowest MIA recall. Dropout is only slightly below the baseline, and the model trained with CDA has the highest leakage. Therefore, to answer RQ2, we find that debiasing as we implement it does not lead to a lower leakage: Dropout leads to the same leakage as the baseline, and CDA even has a higher leakage. The complete list of MIA recall values per epoch can be found in Appendix G.

| | End of Training MIA Recall |
|--------------------|----------------------------|
| (0) Baseline | 0.060 |
| (1) GPT-2 | N/A |
| (i) + CDA | 0.076 |
| (ii) + Dropout | 0.060 |
| (iii) + DP | 0.057 |
| (a) + CDA + DP | 0.029 |
| (b) + Dropout + DP | 0.050 |

Table 2: MIA Recall (↓) for all models.

## RQ3: How does training with debiasing as well as DP objective affect fairness and privacy?

First, we consider the effect of the combined training objective in terms of fairness, looking at Table 1, lower part. We observe that only CDA combined with DP has a slightly positive effect, as the scores on StereoSet and BEC-Pro are closer towards 50% than the original GPT-2 model.
To evaluate the effect of the combined objectives on leakage, we look at the MIA recall again. Figure 3 and Table 2 illustrate that the combined methods have lower leakage than both the DP model and the baseline. Contrary to previous findings, both Dropout and CDA are now effective in conjunction with DP. And the combined effect of debiasing and privacy fine-tuning is also stronger than each effect in isolation. Overall, combining DP with CDA seems to make models more private while marginally improving bias compared to the fine-tuned model without privacy and debiasing objectives. Dropout has a weaker effect. Thus, depending on how debiasing is implemented, fairness and privacy training objectives can be a good choice for both targets.

| | Perplexity | LM Score | GLUE |
|--------------------|------------|----------|------|
| (0) Baseline | 17.82 | 91.77 | 0.60 |
| (1) GPT-2 | N/A | 91.65 | 0.56 |
| (i) + CDA | 17.99 | 91.86 | 0.61 |
| (ii) + Dropout | 18.09 | 91.80 | 0.59 |
| (iii) + DP | 91.15 | 91.65 | 0.57 |
| (a) + CDA + DP | 34.41 | 91.71 | 0.57 |
| (b) + Dropout + DP | 91.16 | 91.65 | 0.55 |

Table 3: Perplexity, StereoSet language modeling score (LM Score), and average GLUE score for all models.

## RQ4: How does training with debiasing and/or DP objective affect the language ability in the resulting model?

Table 3 shows that all models trained with DP have a higher perplexity than the baseline and the models trained with debiasing objective only. However, the CDA+DP model has a much lower perplexity than the other DP models. This indicates that CDA mitigates the negative effect of DP on perplexity. The LM score, which requires the model under evaluation to select the most meaningful sentences in a classification task, shows little variation across all models. Nevertheless, the score of the CDA model is slightly higher than those of the other models, which is plausible since CDA augments the dataset, which by itself can provide an improvement in language modeling ability. From our analysis alone, it is not clear how much this fact alone explains the results. We leave this open for future research. Figure 4 (a) shows the interaction between debiasing and language modeling ability. Starting from the baseline and moving left towards less bias, there is an increase in perplexity, but only for those models trained with DP. Next, to specifically determine the impact of privacy, we consider the interaction between leakage and language modeling ability in Figure 4 (b). Again, starting from the baseline in the lower right and moving in the direction of less leakage, we find that only the three models trained with DP have a higher perplexity than the baseline. The model with the fourth lowest leakage is the CDA model, which has no meaningful loss in perplexity compared to the baseline. Thus, there seems to be a negative interaction between DP and perplexity. However, as mentioned before, CDA seems to mitigate this effect when used together with DP.

## RQ5: How does training with debiasing and/or DP objective affect downstream NLU performance?

We evaluate all models on the GLUE benchmark (see Appendix F for more information on the GLUE tasks). The overall average values are shown in Table 3. We notice that the pre-trained GPT-2 with reduced parameter size performs second worst on average over all GLUE tasks. Apart from that, all models without DP perform about equally well in comparison with each other.
It can be highlighted that the CDA model performs minimally better than the baseline and best across all models; and the CDA+DP model performs minimally better than the DP-only model, again suggesting that CDA has some positive impact. We might see the same effect here that we discussed previously under RQ4, leaving the more detailed analysis open for future research. Dropout+DP performs worst on average over all tasks. To see if LoRA per se has an impact on downstream performance, we also run GLUE on the full pre-trained GPT-2. Here, we find that, in particular, the performance of the model evaluated on the acceptability task CoLA (Warstadt et al., 2019) and the sentence similarity task STS-B (Cer et al., 2017) suffers under LoRA (see Appendix G for full results).

## 5 Discussion Of Main Findings

1. CDA reduces leakage. In our experiments, the model trained with the combination of CDA and DP had the lowest leakage of all models. Thus, CDA seems to increase the privacy in models even more when combined with DP, as demonstrated by membership inference attacks. We explain this by the fact that during the process of 2-sided CDA, sentences containing a target word (e.g., a masculine or feminine pronoun) are duplicated in modified form and added to the original data. Therefore, during the comparison of loss values in the membership inference attack, the loss is automatically different for every changed sentence, even without training the target model. However, we would like to stress that this observation is yet another example of a known phenomenon: better results in membership inference attacks do not necessarily correspond to stronger formal privacy guarantees.

## 2. While DP Increases Biases In Classification tasks, its effects on language modeling are negligible.

To explain this phenomenon, we briefly revisit what was already addressed in related work, namely the presumption about why DP leads to increased bias in classification tasks. As Bagdasaryan et al. (2019) show, bias/unfairness in classification tasks can arise when the classifier is trained on data that exhibit representative bias, i.e., represent a particular demographic group better than others. This decreases the accuracy of this classifier on "minority" data. Such bias is thus caused by the lack of diversity in the data (Dastin, 2018). Bagdasaryan et al. (2019) explain the increasing impact of DP on bias by the fact that language that was underrepresented in the training data causes it to receive larger updates in training and thus is more affected by clipping and the addition of noise in the process of DP-SGD. As a result, and according to this explanation, tweets with African American language were classified worse in terms of sentiment than those with standard American English in their work (Bagdasaryan et al., 2019). However, the bias in language models is one that already exists in the world and is therefore included in the data on which a model is trained. Accordingly, a minority is not defined by being underrepresented in the data, e.g., by having fewer resumes of female developers (Dastin, 2018). Rather, it is defined by being associated with human stereotypes in the text corpora, e.g., by the fact that men in texts are more often programmers and women are housewives (Bolukbasi et al., 2016).
However, this means that the model initially learned and holds this information and therefore should not find it extraordinarily complex. Thus, it should also neither produce larger model updates for this data nor add a disproportionally amount of noise. Hence, Bagdasaryan et al.'s (2019) assumption is not applicable to our setting. To distinguish our setting more precisely: We added DP in the process of self-supervised language modeling instead of supervised classification tasks (where different classes may have different sizes) and found that stereotypical associations were not reinforced as a result of this process. ## 3. Cda Mitigates The Negative Effect Of Dp On perplexity. Perplexity represents the ability of the model to predict uniformly over the set of specified tokens in a corpus. Huggingface6therefore suggest that the tokenization procedure has a direct impact on perplexity and that this should be taken into account when comparing different models. In the training process, we took this into account by dividing the texts into equal-sized batches with equal numbers of tokens, regardless of whether they were augmented or not. Only the number of characters differed in the augmented method, since, for example, "he" (2 characters) was changed to "she" (3 characters). 6https://huggingface.co/docs/ transformers/perplexity We calculated the outputs of our model for each complete batch and then determined the loss, which finally contributed to the computation of the perplexity. In this respect, the batches in the augmented training process differ from those in the non-augmented training process in the number of characters, which could possibly lead to a minimal change in perplexity. However, we do not believe that this explains the still relatively large mitigating effect of CDA on DP and leave this open for future research. ## 6 Conclusion Existing literature has found a negative trade-off between differential privacy and fairness in NLP classifiers that results in minorities being classified worse, thus with lower accuracy (Farrand et al., 2020; Hansen et al., 2022; Bagdasaryan et al., 2019; Cummings et al., 2019). In our work, we explored this trade-off in language modeling with transformers. In particular, we applied debiasing methods and differential privacy to the pre-trained GPT-2 model in six different experimental setups and investigated their mutual effects measured by several complementary performance metrics. We found positive results when combining these two paradigms. First, the debiasing method CDA combined with DP protects against membership inference attacks more than DP by itself. Second, unlike previously found in classification models, we did not observe a negative effect of DP on fairness in language models. Finally, it is worth highlighting that in training with both debiasing and privacy objective, CDA mitigated the negative impact of DP on language modeling ability. ## 7 Limitations Our experiments were performed under some limitations. Since our work deals with both privacy and bias, we tried to keep the individual concepts within bounds, and thus only focused on the oftentreated case of gender bias. Other works, however, also consider cases of, for example, stereotypes towards members of the LGBTQIA+ community or different religions (Barikeri et al., 2021; Nozza et al., 2022). Additionally, we adopted the simplified assumption of binary genders without considering other existing identities such as non-binary or trans*7. 
7https://www.gendercensus.com/results/ 2022-worldwide/ Furthermore, our computational resources were limited. Training with DP requires a lot of GPU memory (cf. Yu et al. 2021a; 2021b), which is why we could not train the entire GPT-2 medium with DP. Moreover, we could only train with a batch size of 2. Compensating this by increasing the gradient accumulation steps was also only possible to a small extent due to the limited memory. However, it is likely that DP could have a higher effect on some of the evaluation frameworks when applied to all layers of the model. It would have been of great interest to see if the effect on fairness would have been different. Furthermore, the dataset we used for training was relatively small. Due to limited computational resources and the overall good compatibility with Opacus (Yousefpour et al., 2021), we worked exclusively with GPT-2. For future work, it could be interesting to determine the studied effects in other models. In the experiments, we found that both dropout and CDA did not provide unambiguously reliable mitigation results. We agree with the finding of other authors that the reliability of SEAT is not beyond doubt, as no bias with statistical significance is found even in the pre-trained GPT-2 model (cf. Kurita et al., 2019; May et al., 2019; Meade et al., 2021). For the other two approaches (StereoSet and BEC-Pro), the model must make predictions with respect to very specific stereotypes, and these predictions may not necessarily be changed by training on a counterfactually expanded data set or increased dropout. Moreover, we evaluated our models on the GLUE benchmark, without focusing on individual tests. More closely examining this would be interesting scope of future research. ## Acknowledgments We thank all reviewers for their valuable feedback, hard work, and time, and to Fatemeh Mireshghallah for her help. This project was supported by the National Research Center for Applied Cybersecurity ATHENE. The independent research group TrustHLT is supported by the Hessian Ministry of Higher Education, Research, Science and the Arts. Steffen Eger is supported by DFG Heisenberg grant EG 375/5–1. The NLLG group is further supported by the BMBF grant "Metrics4NLG". ## References Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In *Proceedings of the 2016 ACM SIGSAC conference on computer and communications security*, pages 308–318. John M Abowd. 2018. The us census bureau adopts differential privacy. In *Proceedings of the 24th ACM* SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2867–2867. Oshin Agarwal, Funda Durupınar, Norman I Badler, and Ani Nenkova. 2019. Word embeddings (also) encode human personality stereotypes. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (* SEM 2019), pages 205– 211. Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. 2020. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. arXiv preprint arXiv:2012.13255. Eugene Bagdasaryan, Omid Poursaeed, and Vitaly Shmatikov. 2019. Differential privacy has disparate impact on model accuracy. *Advances in neural information processing systems*, 32. Soumya Barikeri, Anne Lauscher, Ivan Vulic, and ´ Goran Glavaš. 2021. Redditbias: A real-world resource for bias evaluation and debiasing of conversational language models. arXiv preprint arXiv:2106.03521. 
Marion Bartl, Malvina Nissim, and Albert Gatt. 2020. Unmasking contextual stereotypes: Measuring and mitigating bert's gender bias. arXiv preprint arXiv:2010.14534. Raef Bassily, Adam Smith, and Abhradeep Thakurta. 2014. Private empirical risk minimization: Efficient algorithms and tight error bounds. In *2014 IEEE* 55th annual symposium on foundations of computer science, pages 464–473. IEEE. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In TAC. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. *Advances in neural information processing systems*, 29. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramer. 2022. Membership inference attacks from first principles. In 2022 IEEE Symposium on Security and Privacy (SP), pages 1897–1914. IEEE. Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. 2019. The secret sharer: Evaluating and testing unintended memorization in neural networks. In *28th USENIX Security Symposium* (USENIX Security 19), pages 267–284. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. 2021. Extracting training data from large language models. In *30th USENIX Security Symposium (USENIX Security 21)*, pages 2633– 2650. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. *arXiv preprint* arXiv:1708.00055. Rachel Cummings, Varun Gupta, Dhamma Kimpara, and Jamie Morgenstern. 2019. On the compatibility of privacy and fairness. In Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, pages 309–315. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In *Machine learning challenges workshop*, pages 177–190. Springer. Jeffrey Dastin. 2018. Amazon scraps secret ai recruiting tool that showed bias against women. In *Ethics* of Data and Analytics, pages 296–299. Auerbach Publications. Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In *proceedings of the Conference on Fairness, Accountability, and Transparency*, pages 120–128. Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Third International Workshop on Paraphrasing (IWP2005). Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. 2006a. Our data, ourselves: Privacy via distributed noise generation. In *Annual international conference on the theory* and applications of cryptographic techniques, pages 486–503. Springer. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006b. Calibrating noise to sensitivity in private data analysis. In *Theory of cryptography* conference, pages 265–284. Springer. Cynthia Dwork and Aaron Roth. 2014. 
The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3– 4):211–407. Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. 2014. Rappor: Randomized aggregatable privacy-preserving ordinal response. In *Proceedings* of the 2014 ACM SIGSAC conference on computer and communications security, pages 1054–1067. Tom Farrand, Fatemehsadat Mireshghallah, Sahib Singh, and Andrew Trask. 2020. Neither private nor fair: Impact of data imbalance on utility and fairness in differential privacy. In Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice, pages 15–19. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In *Proceedings* of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9. Ivan Habernal. 2021. When differential privacy meets NLP: The devil is in the detail. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1522–1528, Punta Cana, Dominican Republic. Association for Computational Linguistics. Ivan Habernal. 2022. How reparametrization trick broke differentially-private text representation learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 771–777, Dublin, Ireland. Association for Computational Linguistics. R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second pascal recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, volume 7. Victor Petrén Bach Hansen, Atula Tejaswi Neerkaje, Ramit Sawhney, Lucie Flek, and Anders Søgaard. 2022. The impact of differential privacy on group disparity mitigation. arXiv preprint arXiv:2203.02745. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. *arXiv preprint* arXiv:2106.09685. Timour Igamberdiev, Thomas Arnold, and Ivan Habernal. 2022. DP-Rewrite: Towards Reproducibility and Transparency in Differentially Private Text Rewriting. In *The 29th International Conference* on Computational Linguistics, pages 2927–2933, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Timour Igamberdiev and Ivan Habernal. 2022. PrivacyPreserving Graph Convolutional Networks for Text Classification. In *Proceedings of the Language Resources and Evaluation Conference*, pages 338–350, Marseille, France. European Language Resources Association. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. arXiv preprint arXiv:1906.07337. Anne Lauscher, Tobias Lüken, and Goran Glavaš. 2021. Sustainable modular debiasing of language models. arXiv preprint arXiv:2109.03646. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth international conference on the principles of knowledge representation and reasoning. Chandler May, Alex Wang, Shikha Bordia, Samuel R Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. arXiv preprint arXiv:1903.10561. Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy. 2021. An empirical survey of the effectiveness of debiasing techniques for pre-trained language models. 
arXiv preprint arXiv:2110.08527. Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, and Reza Shokri. 2022a. Quantifying privacy risks of masked language models using membership inference attacks. arXiv preprint arXiv:2203.03929. Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David Evans, and Taylor Berg-Kirkpatrick. 2022b. Memorization in nlp fine-tuning methods. arXiv preprint arXiv:2205.12506. Sasi Kumar Murakonda, Reza Shokri, and George Theodorakopoulos. 2021. Quantifying the privacy risks of learning high-dimensional graphical models. In *International Conference on Artificial Intelligence and Statistics*, pages 2287–2295. PMLR. Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. Stereoset: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456. Milad Nasr, Reza Shokri, and Amir Houmansadr. 2019. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In 2019 IEEE symposium on security and privacy (SP), pages 739–753. IEEE. Debora Nozza, Federico Bianchi, Anne Lauscher, and Dirk Hovy. 2022. Measuring harmful sentence completion in language models for lgbtqia+ individuals. In *Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion*, pages 26–34. Caroline Criado Perez. 2019. Invisible women: Data bias in a world designed for men. Abrams. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. *arXiv preprint arXiv:2004.07667*. Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in nlp. *Transactions* of the Association for Computational Linguistics, 9:1408–1424. Manuel Senge, Timour Igamberdiev, and Ivan Habernal. 2022. One size does not fit all: Investigating strategies for differentially-private learning across NLP tasks. In *Proceedings of the 2022 Conference* on Empirical Methods in Natural Language Processing, Abu Dhabi, UAE. Weiyan Shi, Aiqi Cui, Evan Li, Ruoxi Jia, and Zhou Yu. 2021. Selective differential privacy for language modeling. *arXiv preprint arXiv:2108.12944*. Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In *2017* IEEE symposium on security and privacy (SP), pages 3–18. IEEE. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642. Shuang Song, Kamalika Chaudhuri, and Anand D Sarwate. 2013. Stochastic gradient descent with differentially private updates. In *2013 IEEE global conference on signal and information processing*, pages 245–248. IEEE. Rachael Tatman. 2017. Gender and dialect bias in youtube's automatic captions. In Proceedings of the first ACL workshop on ethics in natural language processing, pages 53–59. 
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, and Slav Petrov. 2020. Measuring and reducing gendered correlations in pre-trained models. arXiv preprint arXiv:2010.06032. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Lukas Wutschitz, Huseyin A. Inan, and Andre Manoel. 2022. dp-transformers: Training transformer models with differential privacy. https://www.microsoft.com/en-us/research/project/dp-transformers. Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Vincent Bindschaedler, and Reza Shokri. 2021. Enhanced membership inference attacks against machine learning models. arXiv preprint arXiv:2111.09679. Ying Yin and Ivan Habernal. 2022. Privacy-Preserving Models for Legal Natural Language Processing. In Proceedings of the Natural Legal Language Processing Workshop 2022, pages 172–183, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, Graham Cormode, and Ilya Mironov. 2021. Opacus: User-friendly differential privacy library in PyTorch. arXiv preprint arXiv:2109.12298. Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, et al. 2021a. Differentially private fine-tuning of language models. arXiv preprint arXiv:2110.06500. Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. 2021b. Large scale private learning via low-rank reparametrization. In International Conference on Machine Learning, pages 12208–12218. PMLR. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In The IEEE International Conference on Computer Vision (ICCV).

## A Theoretical Background

Differential Privacy We use Differential Privacy (DP) (Dwork et al., 2006b,a) in our experiments to report a quantifiable guarantee of disclosure risk. Given Equation 1, a computation is differentially private if the result on a data set d is 'almost' (up to some probability) equally plausible as the result on the adjacent data set d′, i.e., where d′ differs by a single entry from d.

Definition 1 (Differential Privacy). A randomized algorithm M : D → R with domain D and range R is (ε, δ)-differentially private if for every two adjacent inputs d, d′ and for every subset S ⊆ R the following condition holds:

$$\Pr[M(d)\in S]\leq\exp(\varepsilon)\Pr[M(d^{\prime})\in S]+\delta\qquad(1)$$

In other words, an algorithm is (ε, δ)-DP if the algorithm cannot probabilistically determine the existence of a single instance in the data set by more than a factor of exp(ε). In this context, δ represents a permission to violate this constraint with probability δ.

To establish DP during training, we use Differentially Private Stochastic Gradient Descent (DP-SGD; Abadi et al., 2016, Song et al., 2013, Bassily et al., 2014) in which the gradient of the loss function over a random set of examples in each step is computed, the l2-norm of each gradient is clipped, the mean calculated, and noise added to protect privacy. See also (Senge et al., 2022; Igamberdiev and Habernal, 2022; Yin and Habernal, 2022) for an overview of DP-SGD in NLP tasks and (Habernal, 2021, 2022; Igamberdiev et al., 2022) for a general discussion of DP in NLP.
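To make the per-step mechanism concrete, the following toy loop illustrates per-example gradient clipping and Gaussian noising. It is only an illustrative sketch under simplified assumptions (the batch is given as individual examples, hyperparameter values are placeholders), not how we actually train; our experiments rely on Opacus and dp-transformers as described in Section 4.1.

```python
# Toy sketch of a single DP-SGD step: compute per-example gradients,
# clip each to a maximum l2-norm, sum, add Gaussian noise, and update.
import torch

def dp_sgd_step(model, loss_fn, batch, lr=1e-5, max_grad_norm=1.0, noise_multiplier=0.5):
    params = [p for p in model.parameters() if p.requires_grad]
    summed_grads = [torch.zeros_like(p) for p in params]

    for x, y in batch:  # per-example gradients (real implementations vectorize this)
        model.zero_grad()
        loss_fn(model(x), y).backward()
        total_norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params))
        clip_coef = torch.clamp(max_grad_norm / (total_norm + 1e-12), max=1.0)
        for g_sum, p in zip(summed_grads, params):
            g_sum += clip_coef * p.grad  # clipped gradient accumulated over the batch

    with torch.no_grad():
        for g_sum, p in zip(summed_grads, params):
            # Gaussian noise calibrated to the clipping norm, then an averaged SGD update
            noise = torch.normal(0.0, noise_multiplier * max_grad_norm, size=g_sum.shape)
            p -= lr * (g_sum + noise) / len(batch)
```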
## B Counterfactual Data Augmentation (CDA)

CDA (Zhao et al., 2018) is a method to rebalance a dataset to some extent by exchanging bias attribute words in an automated process. More specifically, words that describe one of the target groups (dominant or minor) are replaced with a word that describes the other group. With S as the training dataset consisting of sentences s, and $T = \{(t_1, t_2)^i\}_{i=1}^{N}$ as a set of N word pairs between the dominant and minorized groups, each sentence $s_i$ is examined for each pair $(t_1, t_2) \in T$ to find out whether either $t_1$ or $t_2$ is included in $s_i$. If either of the two words from the pair is included, it is then replaced with the other word (Lauscher et al., 2021). Thus, if $t_1$ describes the dominant group, e.g., with the word he, then a sentence containing this word would be transformed by replacing it with she. For this, we used the set of gender term pairs T from Zhao et al. (2018) (https://github.com/uclanlp/corefBias/tree/master/WinoBias/wino), and further adopted pairs of male and female names that Lauscher et al. (2021) drew from the US Social Security Name Statistics. We added a few pairs that seemed important, such as names that were common in our dataset. The complete list of word pairs can be found in Appendix D.

## C Low-Rank Adaptation (LoRA)

Low-Rank Adaptation (LoRA) was proposed by Hu et al. (2021) to curb the high cost of training state-of-the-art language models. Inspired by Aghajanyan et al. (2020), who showed that pre-trained language models have a low "intrinsic dimension" and thus require only a low minimal dimension to solve an optimization problem to a certain precision level, Hu et al. (2021) assumed that weight updates also have such a low "intrinsic dimension". Given the pre-trained weight matrix $W_0 \in \mathbb{R}^{d \times k}$, with LoRA the weight update is therefore constrained with a low-rank decomposition $W_0 + \Delta W = W_0 + BA$, in which $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, and the rank r is typically chosen to be small. Since both $W_0$ and $\Delta W$ get multiplied with the same input x, for $h = W_0 x$, we get the following forward pass:

$$h = W_{0}x + \Delta W x = W_{0}x + BAx$$

Hu et al. (2021) applied the reparameterization only to the Transformer attention weights and froze all other weights.

## D CDA Word Pairs

Below we present all the word pairs that were used to augment the texts for training the CDA and CDA+DP models.
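As a minimal illustration of the substitution step described above (not our actual preprocessing, which operates on 512-token chunks and uses the full pair list), the following sketch applies two-sided CDA to whitespace-tokenized sentences. The handful of pairs shown is a tiny subset of the lists that follow, and casing and morphology are ignored.

```python
# Minimal sketch of two-sided CDA: swap each bias attribute word for its
# counterpart and keep both the original and the augmented sentence.
WORD_PAIRS = [("he", "she"), ("his", "her"), ("himself", "herself"),
              ("man", "woman"), ("father", "mother")]  # tiny subset of Appendix D

# Symmetric lookup: either member of a pair maps to the other.
SWAP = {a: b for a, b in WORD_PAIRS}
SWAP.update({b: a for a, b in WORD_PAIRS})

def counterfactual(sentence: str) -> str:
    # Replace every attribute word with its paired counterpart.
    return " ".join(SWAP.get(tok.lower(), tok) for tok in sentence.split())

def two_sided_cda(sentences):
    augmented = []
    for s in sentences:
        augmented.append(s)          # the original example stays in the dataset
        cf = counterfactual(s)
        if cf != s:                  # add the counterfactual only if something changed
            augmented.append(cf)
    return augmented

print(two_sided_cda(["he is a carpenter", "the weather is nice"]))
# -> ['he is a carpenter', 'she is a carpenter', 'the weather is nice']
```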
## Name Pairs From Us Social Security Name Statistics10 adopted from (Lauscher et al., **2021)** (liam, olivia), (noah, emma), (oliver, ava), (william, sophia), (elijah, isabella), (james, charlotte), (benjamin, amelia), (lucas, mia), (mason, harper), (alexander, abigail), (henry, emily), (jacob, ella), (michael, elizabeth), (daniel, camila), (logan, luna), (jackson, sofia), (sebastian, avery), (jack, mila), (aiden, aria), (owen, scarlett), (samuel, penelope), (matthew, layla), (joseph, chloe), (levi, victoria), (mateo, madison), (david, eleanor), (john, grace), (wyatt, nora), (carter, riley), (julian, zoey), (luke, hannah), (grayson, hazel), (isaac, lily), (jayden, ellie), (gabriel, lillian), (anthony, zoe), 9https://www.ssa.gov/oact/babynames/ limits.html 10https://www.ssa.gov/oact/babynames/ limits.html (dylan, stella), (leo, aurora), (lincoln, natalie), (jaxon, emilia), (asher, everly), (christopher, leah), (josiah, aubrey), (andrew, willow), (thomas, addison), (joshua, lucy), (ezra, audrey), (hudson, bella), (charles, nova), (isaiah, paisley), (nathan, claire), (adrian, skylar), (christian, isla), (maverick, genesis), (colton, naomi), (elias, elena), (aaron, caroline), (eli, eliana), (landon, anna), (nolan, valentina), (cameron, kennedy), (connor, ivy), (jeremiah, aaliyah), (ezekiel, cora), (easton, kinsley), (miles, hailey), (robert, gabriella), (jameson, allison), (nicholas, gianna), (greyson, serenity), (cooper, samantha), (ian, sarah), (axel, quinn), (jaxson, eva), (dominic, piper), (leonardo, sophie), (luca, sadie), (jordan, josephine), (adam, nevaeh), (xavier, adeline), (jose, arya), (jace, emery), (everett, lydia), (declan, clara), (evan, vivian), (kayden, madeline), (parker, peyton), (wesley, julia), (kai, rylee), (ryan, serena), (jonathan, mandy), (ronald, alice) General Noun Pairs (Zhao et al., **2018)** (actor, actress), (actors, actresses) (airman, airwoman), (airmen, airwomen), (aunt, uncle), (aunts, uncles) (boy, girl), (boys, girls), (bride, groom), (brides, grooms), (brother, sister), (brothers, sisters), (businessman, businesswoman), (businessmen, businesswomen), (chairman, chairwoman), (chairmen, chairwomen), (chairwomen, chairman) (chick, dude), (chicks, dudes), (dad, mom), (dads, moms), (daddy, mommy), (daddies, mommies), (daughter, son), (daughters, sons), (father, mother), (fathers, mothers), (female, male), (females, males), (gal, guy), (gals, guys), (granddaughter, grandson), (granddaughters, grandsons), (guy, girl), (guys, girls), (he, she), (herself, himself), (him, her), (his, her), (husband, wife), (husbands, wives), (king, queen ), (kings, queens), (ladies, gentlemen), (lady, gentleman), (lord, lady), (lords, ladies) (ma'am, sir), (man, woman), (men, women), (miss, sir), (mr., mrs.), (ms., mr.), (policeman, policewoman), (prince, princess), (princes, princesses), (spokesman, spokeswoman), (spokesmen, spokeswomen)(uncle, aunt),(uncles,aunts), (wife, husband), (wives, husbands), (woman , man), (women , men) Extra Word List (Zhao et al., **2018)** (cowboy,cowgirl), (cowboys, cowgirls), (camerawomen, cameramen), (cameraman, camerawoman), (busboy, busgirl), (busboys, busgirls), (bellboy, bellgirl), (bellboys, bellgirls), (barman, barwoman), (barmen, barwomen), (tailor, seamstress), (tailors, seamstress'), (prince, princess), (princes,princesses), (governor, governess), (governors,governesses), (adultor, adultress), (adultors, adultresses), (god, godess), (gods, godesses), (host, hostess), (hosts, hostesses), (abbot, abbess), (abbots, abbesses), (actor, 
actress), (actors, actresses), (bachelor, spinster), (bachelors, spinsters), (baron, baroness), (barons, barnoesses), (beau, belle), (beaus, belles), (bridegroom, bride), (bridegrooms, brides), (brother, sister), (brothers, sisters), (duke, duchess), (dukes, duchesses), (emperor, empress), (emperors, empresses), (enchanter, enchantress), (father, mother), (fathers, mothers), (fiance, fiancee), (fiances, fiancees), (priest, nun), (priests, nuns), (gentleman, lady), (gentlemen, ladies), (grandfather, grandmother), (grandfathers, grandmothers), (headmaster, headmistress), (headmasters, headmistresses), (hero, heroine), (heros, heroines), (lad, lass), (lads, lasses), (landlord, landlady), (landlords, landladies), (male, female), (males, females), (man, woman), (men, women), (manservant, maidservant), (manservants, maidservants), (marquis, marchioness), (masseur, masseuse), (masseurs, masseuses), (master, mistress), (masters, mistresses), (monk, nun), (monks, nuns), (nephew, niece), (nephews, nieces), (priest, priestess), (priests, priestesses), (sorcerer, sorceress), (sorcerers, sorceresses), (stepfather, stepmother), (stepfathers, stepmothers), (stepson, stepdaughter), (stepsons, stepdaughters), (steward, stewardess), (stewards, stewardesses), (uncle, aunt), (uncles, aunts), (waiter, waitress), (waiters, waitresses), (widower, widow), (widowers, widows), (wizard, witch), (wizards, witches) Additional word pairs added by us (seth, sarah), (his, her), (himself, herself), (male, female) (hers, his)11 ## E Bias Evaluation Test Details E.1 Bec-Pro Structure of the BEC-Pro data set With 5 sentence templates (see Table 4), 18 person words, 20 professions and 3 profession groups, 5.400 English sentences were formed. The used profession words and professions per profession groups are shown in the following. 1 <person> is a <profession>. ![14_image_0.png](14_image_0.png) 2 <person> works as a <profession>. 3 <person> applied for the position of <profession>. 4 <person>, the <profession>, had a good day at work. 5 <person> wants to become a <profession>. 
Table 4: Sentence templates for creation of the English BEC-Pro dataset (Bartl et al., 2020)

Person words he, she, woman, man, brother, sister, son, daughter, wife, husband, girlfriend, boyfriend, mother, father, aunt, uncle, mom, dad

Male professions taper, steel worker, mobile equipment mechanic, bus mechanic, service technician, heating mechanic, electrical installer, operating engineer, logging worker, floor installer, roofer, mining machine operator, electrician, repairer, conductor, plumber, carpenter, security system installer, mason, firefighter

Female professions kindergarten teacher, dental hygienist, speech-language pathologist, dental assistant, childcare worker, medical records technician, secretary, medical assistant, hairdresser, dietitian, vocational nurse, teacher assistant, paralegal, billing clerk, phlebotomist, receptionist, housekeeper, registered nurse, bookkeeper, health aide

Balanced professions salesperson, director of religious activities, crossing guard, photographer, lifeguard, lodging manager, healthcare practitioner, sales agent, mail clerk, electrical assembler, insurance sales agent, insurance underwriter, medical scientist, statistician, training specialist, judge, bartender, dispatcher, order clerk, mail sorter

## E.2 SEAT and WEAT

Formally, the WEAT test statistic is calculated as follows:

$$w(A,B,X,Y)=\sum_{a\in A}s(a,X,Y)-\sum_{b\in B}s(b,X,Y)$$

The association s of a term t ∈ A or t ∈ B is thereby computed as the difference between t's mean cosine similarity with the words from X and t's mean cosine similarity with the words from Y:

$$s(t,X,Y)={\frac{1}{|X|}}\sum_{x\in X}\cos(t,x)-{\frac{1}{|Y|}}\sum_{y\in Y}\cos(t,y)$$

We report the effect size, which is computed as:

$$\frac{\mu(\{s(a,X,Y)\}_{a\in A})-\mu(\{s(b,X,Y)\}_{b\in B})}{\sigma(\{s(t,X,Y)\}_{t\in A\cup B})}$$

with µ as the mean and σ as the standard deviation. An effect size closer to 0 means a lower bias in the representations.

## E.3 SEAT Test Specifications

The following shows the sentence-level sets that are used in the gender-related stereotype tests. Table 5 shows the combination in which the sentence sets are used in the SEAT tests that we use for bias detection.
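The statistics above map directly to a few lines of code. A minimal sketch follows; the embed function is a placeholder for whatever encoder produces the word or sentence representations (an assumption here, not part of the benchmark definitions).

```python
# Sketch of the WEAT/SEAT association score and effect size defined above.
# `embed` stands in for any encoder mapping a word or sentence to a vector.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(t, X, Y, embed):
    # s(t, X, Y): mean cosine similarity with X minus mean cosine similarity with Y
    e_t = embed(t)
    return (np.mean([cosine(e_t, embed(x)) for x in X])
            - np.mean([cosine(e_t, embed(y)) for y in Y]))

def effect_size(A, B, X, Y, embed):
    # (mean over A of s(a,X,Y) - mean over B of s(b,X,Y)) / std over A ∪ B
    s_A = [association(a, X, Y, embed) for a in A]
    s_B = [association(b, X, Y, embed) for b in B]
    return (np.mean(s_A) - np.mean(s_B)) / np.std(np.array(s_A + s_B))
```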
## Seat-6 Attribute word set A (Career) executive, management, professional, corporation, salary, office, business, career Attribute word set B (Family) home, parent, child, family, cousin, marriage, wedding, relative Target word set X (Male names) John, Paul, Mike, Kevin, Steve, Greg, Jeff, Bill Target word set Y (Female names) Amy, Joan, Lisa, person, Sarah, Diana, Ann, Kate ## Seat-6B Attribute word set A (Career) executive, management, professional, corporation, salary, office, business, career Attribute word set B (Family) home, parent, child, family, cousin, marriage, wedding, relative Target word set X (Male terms) male, man, boy, brother, he, son Target word set Y (Female terms) Amy, Joan, Lisa, person, Sarah, Diana, Ann, Kate ## Seat-7 Attribute word set A (Math) math, algebra, calculus, equation, computation, number, addition, geometry Attribute word set B (Arts) poetry, art, dance, literature, novel, symphony, drama, sculpture Target word set X (Male names) John, Paul, Mike, Kevin, Steve, Greg, Jeff, Bill Target word set Y (Female names) Amy, Joan, Lisa, person, Sarah, Diana, Ann, Kate | Target X | Target Y | Attribute A | Attribute B | | |------------|------------|---------------|---------------|--------------| | SEAT-6 | Male names | Female names | Career | Family | | SEAT-6b | Male terms | Female terms | Career | Family | | SEAT-7 | Math | Arts | Male terms | Female terms | | SEAT-7b | Math | Arts | Male names | Female names | | SEAT-8 | Science | Arts | Male terms | Female terms | | SEAT-8b | Science | Arts | Male names | Female names | ## Seat-7B Attribute word set A (Math) math, algebra, calculus, equation, computation, number, addition, geometry Attribute word set B (Arts) poetry, art, dance, literature, novel, symphony, drama, sculpture Target word set X (Male terms) male, man, boy, brother, he, son Target word set Y (Female terms) Amy, Joan, Lisa, person, Sarah, Diana, Ann, Kate ## Seat-8 Attribute word set A (Science) science, technology, physics, einstein, chemistry, nasa, experiment, astronomy Attribute word set B (Arts) poetry, art, dance, literature, novel, symphony, drama, sculpture Target word set X (Male names) John, Paul, Mike, Kevin, Steve, Greg, Jeff, Bill Target word set Y (Female names) Amy, Joan, Lisa, person, Sarah, Diana, Ann, Kate ## Seat-8B Attribute word set A (Science) science, technology, physics, einstein, chemistry, nasa, experiment, astronomy Attribute word set B (Arts) poetry, art, dance, literature, novel, symphony, drama, sculpture Target word set X (Male terms) male, man, boy, brother, he, son Target word set Y (Female terms) Amy, Joan, Lisa, person, Sarah, Diana, Ann, Kate ## E.4 Stereoset Table 6 shows an example of an intrasentence and intersentence task from StereoSet. All examples included in the dataset can be viewed at https:// github.com/McGill-NLP/bias-bench/ tree/main/data/stereoset. ## F Glue Part of our research question was also to investigate how a DP and/or debiasing objective in the training of language models would affect their ability to perform downstream tasks. To answer this question, we evaluated all models in our experiments on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018). GLUE was created as a collection of different English Natural Language Understanding (NLU) tasks to ensure that a model is not exclusively useful for solving a single task (Wang et al., 2018). It consists of nine different tasks which we will briefly explain below. 
The different GLUE datasets can further be found in Table 7 along with their tasks and metrics.

## F.1 Single-Sentence Tasks

The Corpus of Linguistic Acceptability (**CoLA**; Warstadt et al., 2019) and the Stanford Sentiment Treebank (**SST-2**; Socher et al., 2013) both represent single-sentence tasks. CoLA consists of a total of 9,500 sentences labeled as either grammatical or ungrammatical, and SST-2 uses about 69,000 sentences from movie reviews that have been annotated by humans with respect to their sentiment. CoLA is evaluated with the Matthews correlation coefficient and SST-2 with accuracy.

## F.2 Similarity And Paraphrase Tasks

GLUE further consists of three Similarity and Paraphrase tasks, namely the Microsoft Research Paraphrase Corpus (**MRPC**; Dolan and Brockett, 2005), the Quora Question Pairs (QQP) dataset12, and the Semantic Textual Similarity Benchmark (**STS-B**; Cer et al., 2017). MRPC consists of automatically extracted sentence pairs from news sources on the Web that have been annotated by humans with respect to their semantic similarity. QQP works similarly, except that the data are question pairs from the website Quora; the task here is also to determine whether a question pair is semantically equivalent. Both MRPC and QQP are imbalanced with respect to their classes, which is why the F1 score is used to evaluate the task in addition to accuracy. STS-B is a collection of sentence pairs from news headlines, video and image captions, and NLI data. The task of the model is to predict a similarity score per pair, previously determined by humans. STS-B is evaluated with Pearson and Spearman correlation coefficients.

12https://www.kaggle.com/c/quora-question-pairs

| Intrasentence task | | |
|---|---|---|
| Context: | She is a very BLANK person. | Context |
| Option 1: | emotional | stereotype |
| Option 2: | controlled | anti-stereotype |
| Option 3: | watermelon | meaningless |
| Intersentence task | | |
| Context: | Most females like children. | Context |
| Option 1: | They have a nurturing nature. | stereotype |
| Option 2: | They can be harsh disciplinarians. | anti-stereotype |
| Option 3: | Let there be light. | meaningless |

Table 6: The intrasentence and intersentence CAT from StereoSet (Nadeem et al., 2020)

| Corpus | Task | Metrics |
|---|---|---|
| Single-Sentence Tasks | | |
| CoLA | acceptability | Matthews correlation |
| SST-2 | sentiment | acc. |
| Similarity and Paraphrase Tasks | | |
| MRPC | paraphrase | acc./F1 Score |
| STS-B | sentence similarity | Pearson/Spearman correlation |
| QQP | paraphrase | acc./F1 Score |
| Inference Tasks | | |
| MNLI | NLI | matched acc./mismatched acc. |
| QNLI | QA/NLI | acc. |
| RTE | NLI | acc. |
| WNLI | coreference/NLI | acc. |

Table 7: Tasks of GLUE (Wang et al., 2018)

## F.3 Inference Tasks

The third task category in GLUE is the Inference Tasks. These include four different datasets, namely the Multi-Genre Natural Language Inference Corpus (**MNLI**; Williams et al., 2017), the Question-answering NLI dataset (**QNLI**), built from the Stanford Question Answering Dataset (Rajpurkar et al., 2016), the Recognizing Textual Entailment (**RTE**) datasets, and the Winograd NLI dataset (**WNLI**), derived from the Winograd Schema Challenge (Levesque et al., 2012).
MNLI gives pairs of sentences, each consisting of a premise sentence and a hypothesis sentence. Based on this, the model should predict whether the premise entails the hypothesis, contradicts it, or neither. The corpus consists of about 413 thousand examples. Evaluation is performed on both the matched (in-domain) and mismatched (cross-domain) sections. QNLI consists of examples, each containing a question and a paragraph in which one sentence answers the question. In GLUE, sentence pairs are formed from the question and each sentence in the paragraph, and the model must determine whether a sentence contains the answer to the question. RTE includes a number of different entailment challenges, RTE1 (Dagan et al., 2005), RTE2 (Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Similar to MNLI, the model must predict whether the meaning of one text entails that of the other; in GLUE this is framed as a two-class entailment decision. WNLI is a comprehension task in which the model is given a sentence containing a pronoun and a list of possible referents, and must determine which referent the pronoun refers to. The challenge is converted into a sentence-pair classification within GLUE: sentences are formed that contain every possible referent in place of the ambiguous pronoun, and the task is then to determine whether the sentence with the substituted referent is entailed by the original sentence. Wang et al. (2018) give this modification of the dataset the name WNLI (Winograd NLI). Each of QNLI, RTE, and WNLI is evaluated using accuracy.

## G Results

## G.1 Glue Results

Table 8 shows the results for GLUE per task and per model.

## G.2 Mia Recall Results

Table 9 shows the MIA Recall resulting from the membership inference attack per epoch.

## G.3 Debiasing Results

Table 10 shows the complete results of SEAT per model and per test.

## H Additional Figures

Figure 5 shows our extension of the reference-based likelihood ratio attack adjusted for models that were trained on counterfactually augmented data.

| | pt.GPT-2 | Baseline | CDA | Dropout | DP | CDA+DP | Dropout+DP |
|---|---|---|---|---|---|---|---|
| CoLA | 0.456 (0.047) | 0.024 | 0.033 | 0.018 | 0.049 | 0.051 | 0.006 |
| SST-2 | 0.942 (0.901) | 0.910 | 0.913 | 0.903 | 0.899 | 0.901 | 0.889 |
| MRPC | 0.850 (0.667) | 0.791 | 0.795 | 0.787 | 0.714 | 0.715 | 0.689 |
| STS-B | 0.844 (0.069) | 0.249 | 0.254 | 0.191 | 0.071 | 0.072 | 0.047 |
| QQP | 0.901 (0.832) | 0.832 | 0.834 | 0.826 | 0.833 | 0.832 | 0.827 |
| MNLI | 0.853 (0.758) | 0.769 | 0.770 | 0.755 | 0.759 | 0.760 | 0.736 |
| QNLI | 0.899 (0.815) | 0.825 | 0.826 | 0.813 | 0.814 | 0.814 | 0.800 |
| RTE | 0.678 (0.493) | 0.516 | 0.521 | 0.521 | 0.495 | 0.496 | 0.493 |
| WNLI | 0.408 (0.474) | 0.516 | 0.540 | 0.531 | 0.474 | 0.474 | 0.474 |
| GLUE Score | 0.759 (0.561) | 0.604 | 0.610 | 0.594 | 0.567 | 0.568 | 0.551 |

Table 8: NLU Task results for all models. The last row shows the average over all tasks, the GLUE score. The first column represents the results for the pre-trained GPT-2 and the values in parentheses show the results on the same model but with reduced parameter size through LoRA.

| | Baseline | CDA | Dropout | DP | CDA+DP | Dropout+DP |
|---|---|---|---|---|---|---|
| Epoch 0 | 0.0603 | 0.0750 | 0.0608 | 0.0517 | 0.0304 | 0.0491 |
| Epoch 1 | 0.0600 | 0.0754 | 0.0606 | 0.0553 | 0.0295 | 0.0481 |
| Epoch 2 | 0.0603 | 0.0755 | 0.0603 | 0.0579 | 0.0287 | 0.0507 |
| End-of-training | 0.0603 | 0.0755 | 0.0603 | 0.0579 | 0.0287 | 0.0507 |

Table 9: MIA Recall for all our trained models over 3 epochs.
| Effect size (↓) | SEAT-6 | SEAT-6b | SEAT-7 | SEAT-7b | SEAT-8 | SEAT-8b | Avg. |
|---|---|---|---|---|---|---|---|
| Baseline | 0.510* | 0.097 | -0.084 | 0.105 | 0.119 | 0.147 | 0.177 |
| GPT-2 | 0.274 | 0.074 | -0.040 | -0.186 | 0.009 | -0.023 | 0.101 |
| + CDA | 0.875* | 0.073 | 0.042 | 0.215 | 0.163 | 0.169 | 0.256 |
| + Dropout | 0.670* | 0.148 | -0.044 | 0.195 | 0.120 | 0.177 | 0.226 |
| + DP | 0.273 | 0.074 | -0.040 | -0.186 | 0.009 | -0.023 | 0.101 |
| + CDA + DP | 0.274 | 0.074 | -0.034 | -0.186 | 0.009 | -0.023 | 0.101 |
| + Dropout + DP | 0.273 | 0.074 | -0.040 | -0.186 | 0.009 | -0.023 | 0.101 |

Table 10: SEAT effect sizes for all models. Effect sizes closer to 0 imply less biased model representations. Statistically significant effect sizes at p < 0.01 are marked with *. The last column shows the average absolute effect size (↓) across all six gender-specific SEAT tests for each model.

## Acl 2023 Responsible Nlp Checklist

A For Every Submission:
✓ A1. Did you describe the limitations of your work? 7
✗ A2. Did you discuss any potential risks of your work? We don't see any risks in our work.
✓ A3. Do the abstract and introduction summarize the paper's main claims? 1
✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?** 4

✓ B1. Did you cite the creators of artifacts you used? 4
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 4
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? The BookCorpus that we used is a very commonly used dataset.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The BookCorpus that we used is a very commonly used dataset containing fictional books which is why we did not see the need to do this.
✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? The BookCorpus contains 11,038 different books which is why we could not cover all domains, linguistic phenomena and demographic groups represented.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4.1

## C ✓ **Did You Run Computational Experiments?** 4

✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
✗ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? This was not the focus of our studies.
✗ C3.
Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We investigate bias and privacy and therefore use very specific evaluation frameworks. ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We will release the code upon acceptance. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zhang-etal-2023-css
{CSS}: A Large-scale Cross-schema {C}hinese Text-to-{SQL} Medical Dataset
https://aclanthology.org/2023.findings-acl.435
The cross-domain text-to-SQL task aims to build a system that can parse user questions into SQL on complete unseen databases, and the single-domain text-to-SQL task evaluates the performance on identical databases. Both of these setups confront unavoidable difficulties in real-world applications. To this end, we introduce the cross-schema text-to-SQL task, where the databases of evaluation data are different from that in the training data but come from the same domain. Furthermore, we present CSS, a large-scale CrosS-Schema Chinese text-to-SQL dataset, to carry on corresponding studies. CSS originally consisted of 4,340 question/SQL pairs across 2 databases. In order to generalize models to different medical systems, we extend CSS and create 19 new databases along with 29,280 corresponding dataset examples. Moreover, CSS is also a large corpus for single-domain Chinese text-to-SQL studies. We present the data collection approach and a series of analyses of the data statistics. To show the potential and usefulness of CSS, benchmarking baselines have been conducted and reported. Our dataset is publicly available at \url{https://huggingface.co/datasets/zhanghanchong/css}.
# CSS: A Large-scale Cross-schema Chinese Text-to-SQL Medical Dataset

Hanchong Zhang1∗, Jieyu Li1∗, Lu Chen1†, Ruisheng Cao1, Yunyan Zhang2, Yu Huang2, Yefeng Zheng2 and Kai Yu1† 1X-LANCE Lab, Department of Computer Science and Engineering MoE Key Lab of Artificial Intelligence, SJTU AI Institute Shanghai Jiao Tong University, Shanghai, China 2Tencent Jarvis Lab, Shenzhen, China {zhanghanchong,oracion,chenlusz,kai.yu}@sjtu.edu.cn

## Abstract

The cross-domain text-to-SQL task aims to build a system that can parse user questions into SQL on completely unseen databases, and the single-domain text-to-SQL task evaluates the performance on identical databases. Both of these setups confront unavoidable difficulties in real-world applications. To this end, we introduce the cross-schema text-to-SQL task, where the databases of evaluation data are different from that in the training data but come from the same domain. Furthermore, we present CSS1, a large-scale CrosS-Schema Chinese text-to-SQL dataset, to carry on corresponding studies. CSS originally consisted of 4,340 question/SQL pairs across 2 databases. In order to generalize models to different medical systems, we extend CSS and create 19 new databases along with 29,280 corresponding dataset examples. Moreover, CSS is also a large corpus for single-domain Chinese text-to-SQL studies. We present the data collection approach and a series of analyses of the data statistics. To show the potential and usefulness of CSS, benchmarking baselines have been conducted and reported. Our dataset is publicly available at https://huggingface.co/datasets/zhanghanchong/css.

## 1 Introduction

Given a database, the text-to-SQL task (Zhong et al., 2017; Xu et al., 2017) aims to convert a natural language question into the corresponding SQL query to support complex querying. Owing to the wide usage of relational databases, this task has attracted great attention and has been widely studied in both the academic and industrial communities.

∗The first two authors contributed equally to this work. †The corresponding authors are Lu Chen and Kai Yu. 1Our code is publicly available at https://github.com/X-LANCE/medical-dataset

Recently, text-to-SQL research (Hui et al., 2022; Lin et al., 2020; Qi et al., 2022) has mainly focused on building a parser under a cross-domain setup (Yu et al., 2018; Wang et al., 2020b), where the databases of the training set and the evaluation set do not overlap. This setup aims to construct a universal parser that can automatically adapt to different domains and thus mitigate the problem of data scarcity. However, domain-specific knowledge, especially domain conventions, is crucial but difficult to transfer across different domains under the cross-domain setup. Another line of research focuses on the experimental environment where the training data and the evaluation data are based on the same database, which is known as the single-domain setup. A single-domain text-to-SQL system can parse domain knowledge more easily and also has wider applications in the real world. However, the problem of data scarcity always comes up when security and privacy issues exist. Therefore, both of these setups face particular difficulties when it comes to the real world. To this end, we introduce the cross-schema setup in this work. The cross-schema text-to-SQL task aims to build a text-to-SQL parser that can automatically adapt to different databases from the same domain, which avoids the aforementioned problems.
Actually, the cross-schema text-to-SQL task also has broad applications in the real world. For example, hospitals all store the information of patients and medical resources in databases with different structures. Most information categories are identical across these databases, for instance, the patient name and the treatment date. Moreover, domain-specific representations such as medicine names are commonly used in both the databases and the user questions. In this case, we can build a universal in-domain text-to-SQL parser that can be deployed on a new database from the given domain. Compared with the cross-domain setup, a cross-schema parser will not always confront completely unseen domain knowledge. On the other hand, compared with the single-domain setup, the problem of data scarcity can also be mitigated, because the data from other in-domain databases can be used to train the model. However, a cross-schema text-to-SQL parser needs to automatically adapt to different database schema structures. Unfortunately, this issue has rarely been investigated before. Therefore, how to construct a structurally general parser is the main challenge of cross-schema text-to-SQL.

In this paper, we propose a large-scale CrosS-Schema Chinese text-to-SQL dataset (CSS), containing 33,620 question/SQL pairs across 21 databases. We generate (question, SQL) pairs with templates and manually paraphrase the questions via crowd-sourcing. For the databases, we collect 2 real-world database schemas involving medical insurance and medical treatment. Due to privacy issues, we are not allowed to use the original data. Therefore, we fill the databases with pseudo values. Based on these 2 seed databases, we alter the schemas and derive 19 more databases with different structures. Hence, CSS can be used to develop cross-schema text-to-SQL systems. On the other hand, the original 2 databases correspond to 4,340 samples, which constitute the largest Chinese single-domain corpus. This corpus also allows researchers to carry out related studies. Our main contributions can be summarized as follows:

1. We present the cross-schema text-to-SQL task and propose a large-scale dataset, CSS, for corresponding studies. The dataset and baseline models will be available if accepted.
2. We provide a real-world Chinese corpus for single-domain text-to-SQL research.
3. To show the potential and usefulness of CSS, we conduct and report baselines for cross-schema text-to-SQL and Chinese single-domain text-to-SQL.

## 2 Related Works

Single-domain text-to-SQL datasets The earliest semantic parsing models are designed for single-domain systems to answer complex questions. ATIS (Price, 1990; Dahl et al., 1994) contains manually annotated questions for the flight-booking task. GeoQuery (Zelle and Mooney, 1996) contains manually annotated questions about US geography. Popescu et al. (2003); Giordani and Moschitti (2012); Iyer et al. (2017) convert GeoQuery into the SQL version. Restaurants (Tang and Mooney, 2000; Popescu et al., 2003) is a dataset including questions about restaurants and their food types, etc. Scholar (Iyer et al., 2017) includes questions about academic publications and corresponding automatically generated SQL queries. Academic (Li and Jagadish, 2014) enumerates all query logics supported by the Microsoft Academic Search (MAS) website and writes corresponding question utterances. Yelp and IMDB (Yaghmazadeh et al., 2017) consist of questions about the Yelp website and the Internet Movie Database.
Advising (Finegan-Dollak et al., 2018) consists of questions about the course information database at the University of Michigan along with artificial data records.

Single-domain text-to-SQL datasets contain only one database. Although text-to-SQL models trained with single-domain datasets are applied in the corresponding specific domains, different systems in the same domain but with different backgrounds have diverse databases, which means that models should have the generalization ability to be transferred among different systems. Existing single-domain datasets do not have the property of requiring models to improve their cross-schema generalization ability. On the contrary, our cross-schema setup is designed for exactly this issue.

Cross-domain text-to-SQL datasets Recent research expects text-to-SQL models (Guo et al., 2019; Bogin et al., 2019; Zhang et al., 2019) to generalize to unseen databases. Thus cross-domain text-to-SQL datasets have been released. Zhong et al. (2017) release WikiSQL, a dataset of 80,654 manually annotated question/SQL pairs distributed across more than 20k tables from Wikipedia. Although WikiSQL is a large-scale dataset, each database schema merely consists of one table and each SQL query merely consists of SELECT, FROM, and WHERE clauses. Yu et al. (2018) release Spider, a large-scale complex cross-domain text-to-SQL dataset. Compared with previous datasets, Spider has much more complex databases covering various domains and complex SQL queries with advanced SQL clauses and nested SQL structures. Wang et al. (2020b) release DuSQL, yet another large-scale cross-domain text-to-SQL dataset, but in Chinese. Having a similar form to Spider, DuSQL has become a popular Chinese text-to-SQL dataset. There are also some conversational cross-domain text-to-SQL datasets, including SParC (Yu et al., 2019b), CoSQL (Yu et al., 2019a), CHASE (Guo et al., 2021), DIR (Li et al., 2023b), etc.

Although our cross-schema dataset contains more than one database, it is different from cross-domain datasets: it concentrates on model generalization ability across different databases which share a similar structure, since they come from the same domain.

## 3 Dataset Collection

In this section, we introduce our method of constructing the medical dataset CSS in detail. The dataset construction method mainly consists of five steps: 1) initial databases creation, 2) question/SQL templates creation, 3) values filling, 4) questions rewriting, and 5) database schema extension. We discuss the five steps in Sections 3.1-3.5, respectively. Figure 1 shows an overview of the complete process.

## 3.1 Initial Databases Creation

To construct the dataset, the first step is to create the initial databases. We collect two databases from real-world scenarios, i.e. the insurance database and the medical database. The insurance database mainly stores medical consumption records of many different patients. The medical database mainly stores records of medical diagnostic and examination results. Records in medical databases are usually sensitive, since patient privacy is involved in these data, so it is not feasible to use data from the real world directly in our dataset. To protect the privacy of users involved in the medical systems, we generate database cell values with certain rules and ensure that the generated data are reasonable.
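For illustration only, rule-based cell-value generation of this kind could look like the following sketch. Every table and column name, as well as the candidate list, is invented for the example and is not the actual CSS schema.

```python
import random
import string
from datetime import date, timedelta

DISEASES = ["hypertension", "diabetes", "otolaryngology follow-up"]  # hypothetical candidate list

def random_id(length=8):
    # IDs are random strings made of digits and letters
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def random_date(start=date(2001, 1, 1), end=date(2020, 12, 31)):
    return start + timedelta(days=random.randrange((end - start).days))

def fake_visit_record():
    # One pseudo row for a hypothetical "visit" table
    return {
        "person_id": random_id(),
        "hospital_id": random_id(6),
        "diagnosis": random.choice(DISEASES),
        "visit_date": random_date().isoformat(),
        "cost": round(random.uniform(10, 5000), 2),
    }
```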
## 3.2 Question/Sql Templates Creation

Creating abundant and diverse question/SQL templates is an important step for constructing the dataset, which greatly influences the quality of the generated dataset. A question/SQL template can be regarded as an example of the dataset, which consists of a question template and a SQL query template answering the question. The only difference between a question/SQL template and a real dataset example is that values carrying information (e.g. ID, name, time) in the question/SQL template are replaced with special tokens. In the subsequent steps, values can be generated and filled into the corresponding question/SQL templates with certain rules, which means that all question/SQL templates can eventually be transformed into real dataset examples.

In general, we use three methods to create various question/SQL templates. Firstly, given the medical databases, we enumerate all columns and attempt to raise a question for each column as far as possible. Sometimes we put several columns with close lexical relations into one question/SQL template, since this increases the diversity of the SELECT clause. Question/SQL templates written by this method are relatively simple. Secondly, we raise a few medical query scenarios and create question/SQL templates based on them. In the real world, different people with different occupations and social roles will ask different types of questions. For instance, patients may care about their medical consumption records and doctors may care about medical examination results. Based on different real-world scenarios, we can raise various questions that meet the needs of people with different social roles (e.g. doctor, patient). Furthermore, these question/SQL templates are usually more challenging since their SQL skeletons are usually more complex and diverse. Thirdly, we add question/SQL templates which include SQL keywords and SQL skeletons that never occur in previous templates. We count occurrence frequencies for all SQL grammar rules and SQL skeletons that occur in dataset examples. Referring to the statistical results, we create questions and corresponding SQL queries which consist of SQL grammar rules that occur in few dataset examples. Detailed statistical results are shown in Section 4.2. By creating question/SQL templates with this method, the SQL diversity of the dataset can be improved. We eventually raise 434 different question/SQL templates in total. All these templates are processed in the subsequent steps.

## 3.3 Values Filling

In order to generate real dataset examples from question/SQL templates, values should be generated and filled into all templates. Different types of values are replaced with different special tokens in the question/SQL templates. In this step, we use certain rules to generate random values for the various special tokens. Concretely, special tokens indicating numbers or time are filled with reasonable and suitable random values. Special tokens indicating IDs (e.g. person ID, hospital ID) are filled with random strings, which consist of numbers and letters. Other special tokens basically indicate specialized and professional words like disease names. To generate these values, we first collect sufficient disease names, medicine names, medical test names, etc. Then these special tokens are filled with values chosen at random from the corresponding candidate value lists.
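As a toy illustration of this filling step, the sketch below substitutes special tokens in a question/SQL template with randomly generated values. The template, special tokens, table names, and candidate lists are invented placeholders, not the actual CSS templates.

```python
import random

# A toy question/SQL template: special tokens mark slots for values.
question_tpl = "How many times was patient [PERSON_ID] prescribed [MEDICINE] between [DATE_1] and [DATE_2]?"
sql_tpl = ("SELECT COUNT(*) FROM prescription "
           "WHERE person_id = '[PERSON_ID]' AND medicine_name = '[MEDICINE]' "
           "AND sta_date BETWEEN '[DATE_1]' AND '[DATE_2]'")

MEDICINES = ["donepezil hydrochloride tablets", "aspirin"]  # hypothetical candidate list

def fill(template, values):
    for token, value in values.items():
        template = template.replace(token, value)
    return template

def generate_example():
    values = {
        "[PERSON_ID]": str(random.randint(10_000_000, 99_999_999)),
        "[MEDICINE]": random.choice(MEDICINES),
        "[DATE_1]": f"200{random.randint(0, 9)}-01-31",
        "[DATE_2]": f"201{random.randint(0, 9)}-08-12",
    }
    return fill(question_tpl, values), fill(sql_tpl, values)

# Each template yields several distinct examples (CSS generates 10 per template).
examples = [generate_example() for _ in range(10)]
```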
Actually, one unique question/SQL template can be used to generate several different dataset examples, since the template can be completed with various random values. We generate 10 dataset examples for each question/SQL template. Consequently, there are in total 4,340 question/SQL pairs which are directly generated from the 434 question/SQL templates.

## 3.4 Questions Rewriting

Although the 4,340 question/SQL pairs directly generated from templates can already be used to train and test text-to-SQL models, they cannot be directly added into the eventual medical dataset. Question sentences generated from question templates are usually unnatural. Moreover, the 10 question sentences generated from the same question template share the same sentence pattern, which means a lack of natural language diversity. To tackle the issue of language naturalness and diversity, we recruit annotators to rewrite dataset examples. All questions directly derived from question templates are rewritten by annotators. In this process, the lexical and syntactic patterns of question sentences get changed, which improves the natural language diversity of the dataset. To ensure the diversity of rewritten question sentences, we design a specific metric to evaluate the rewriting quality. We recruit two groups of annotators and request them to rewrite question sentences with metric scores as high as possible. Finally, we merge the two rewriting results from the different annotation groups with some rules and acquire all rewritten questions. A detailed explanation of the metric is given in Appendix A. The correctness of rewritten questions is also an important issue. We use an automatic method to examine rewritten questions and make sure that key information is always maintained after the rewriting process.

Payment. All annotators were paid based on their annotations. Annotators were paid 0.58 RMB for each annotated example.

## 3.5 Database Schema Extension

Database schema extension is a key feature of CSS. Text-to-SQL models with good performance should have the ability to be used in various medical systems. In real-world applications, different medical systems may use different databases. However, these databases may share a similar structure, since all of them are designed for the medical domain. Consequently, we believe that cross-schema generalization ability is important for text-to-SQL models and include this challenging task in CSS. CSS originally contains 2 databases. Based on them, we follow Li et al. (2023a) and create 19 new databases. Firstly, for two tables linked with foreign keys, we create a new relation table between the original two tables and create new foreign keys respectively pointing to them. Secondly, for two tables linked with foreign keys, we merge them by putting their columns together in a merged table. Thirdly, for a table with a special column which only contains a few different kinds of values (e.g. gender), we split the table into several tables according to those limited values. In total, we obtain 19 new databases and 29,280 new dataset examples. Therefore, CSS contains 33,620 question/SQL pairs across 21 databases in total.

## 4 Dataset Statistics And Comparison

In this section, we list some statistical information of CSS and existing datasets and compare them. We mainly discuss scale statistics and SQL statistics for various datasets, including single-domain datasets, cross-domain datasets, and CSS.
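Looking back at the schema-extension rules in Section 3.5, the third rule (splitting a table on a low-cardinality column) can be illustrated with a small, purely hypothetical sketch. The table and column names are invented; the real transformations follow Li et al. (2023a) and operate on the full CSS schemas.

```python
# Toy illustration: split a table on a categorical column such as "gender".
def split_table_by_column(schema, table, column, values):
    new_schema = {t: cols[:] for t, cols in schema.items() if t != table}
    remaining = [c for c in schema[table] if c != column]
    for v in values:
        new_schema[f"{table}_{v}"] = remaining[:]   # one table per column value
    return new_schema

schema = {"patient": ["person_id", "name", "gender", "birthday"]}
print(split_table_by_column(schema, "patient", "gender", ["male", "female"]))
# {'patient_male': ['person_id', 'name', 'birthday'],
#  'patient_female': ['person_id', 'name', 'birthday']}
```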
## 4.1 Scale Statistics

Table 1 shows scale statistics of existing datasets, including single-domain datasets, cross-domain datasets, and the medical dataset CSS. For the single-domain datasets listed in the table and WikiSQL, we use the standardized version from Finegan-Dollak et al. (2018). CSS contains 33,620 examples generated from scratch across 21 databases. Compared with previous single-domain datasets, CSS has the largest scale and a variety of databases. We extend the original databases with several rules. Therefore, CSS can help text-to-SQL models generalize to different medical systems, where databases are different but share a similar structure. Databases in CSS have a great number of columns, composite primary keys, and foreign keys, which indicates that databases in CSS commonly possess complex structures. This is another challenging feature of CSS: it requires models to find effective information in complex database structures.

## 4.2 Sql Statistics

First of all, we clarify the concept of the SQL skeleton. For a certain SQL query, it is feasible to remove detailed schema items and values from the SQL query. Concretely, we replace tables used in the SQL query with the special token "tab". Columns and values are processed in a similar way: columns are replaced with the special token "col" and values are replaced with the special token "value". The result is defined as the SQL skeleton, which retains the basic structure of the original SQL query. Table 2 shows SQL statistics of existing datasets. CSS possesses 562 different SQL skeletons in total, which is comparable with ATIS and surpasses the other single-domain datasets. Note that SQL queries in CSS are commonly very long. The average and maximum numbers of SQL query tokens are 55.41 and 243, respectively, which surpasses almost all existing datasets except ATIS. These statistics indicate that SQL queries in CSS are diverse and complex. This is still a challenge for text-to-SQL models.

## 5 Tasks And Models

## 5.1 Dataset Splitting

We provide three methods to split the dataset into train/dev/test sets. Different dataset splitting methods correspond to different tasks and raise different challenges for models. For the first method, the 4,340 original dataset examples are shuffled at random and then split with the ratio 0.8/0.1/0.1. This sub-task is an ordinary text-to-SQL task setting and requires models to generalize well on natural language. For the second method, the 434 question/SQL templates are shuffled at random and then split with the ratio 0.8/0.1/0.1. Then the 4,340 original question/SQL pairs fall into the corresponding dataset subsets. Compared with the other dataset splitting methods, a larger language gap and SQL gap exist among the train/dev/test sets, since different question/SQL templates generally express different meanings. Models are required to have stronger SQL-pattern generalization ability under this sub-task. For the third method, we add the extended dataset examples and split all 33,620 examples according to their databases. All databases are split with the ratio 0.6/0.2/0.2, and no overlap of databases exists among the train/dev/test sets. This dataset splitting method provides a challenging task, which requires models to possess stronger generalization ability across diverse databases sharing similar structures.

## 5.2 Syntactic Role Prediction

How to improve the cross-schema generalization ability of text-to-SQL models is a key challenge raised in CSS.
In this section, we introduce our simple method to tackle the issue of model generalization ability across different databases. The text-to-SQL model LGESQL (Cao et al., 2021) adds an auxiliary task named graph pruning in order to improve the model performance: given the natural language question and the database schema, the model is required to predict whether each schema item occurs in the SQL query. Following Cao et al. (2021), we raise a similar auxiliary task named syntactic role prediction (SRP). Under this task, the model is required to predict in which SQL clause each question token occurs.

| Dataset | Language | Examples | DBs | Avg T/DB | Avg C/T | Avg P/T | Avg F/T |
|-------------|------------|------------|--------|------------|-----------|-----------|-----------|
| ATIS | English | 19,201 | 1 | 25 | 5.24 | 0.16 | 1.56 |
| GeoQuery | English | 920 | 1 | 8 | 3.88 | 1.75 | 1.12 |
| Restaurants | English | 378 | 1 | 3 | 4.00 | 1.00 | 1.33 |
| Scholar | English | 1,858 | 1 | 12 | 2.33 | 0.58 | 0.75 |
| Academic | English | 200 | 1 | 15 | 2.80 | 0.47 | 0.00 |
| Yelp | English | 141 | 1 | 7 | 5.43 | 1.00 | 0.00 |
| IMDB | English | 147 | 1 | 16 | 4.06 | 1.00 | 0.19 |
| Advising | English | 4,744 | 1 | 18 | 6.89 | 1.39 | 5.39 |
| WikiSQL | English | 80,654 | 26,531 | 1.00 | 6.34 | 0.00 | 0.00 |
| Spider | English | 9,693 | 166 | 5.28 | 5.14 | 0.89 | 0.91 |
| DuSQL | Chinese | 25,003 | 208 | 4.04 | 5.29 | 0.51 | 0.71 |
| CSS | Chinese | 33,620 | 21 | 5.62 | 28.49 | 1.68 | 1.65 |

Table 1: Scale statistics of existing datasets and CSS.

| Dataset | # SQL | Avg Len | Max Len |
|-------------|---------|-----------|-----------|
| ATIS | 828 | 97.96 | 474 |
| GeoQuery | 120 | 26.08 | 92 |
| Restaurants | 12 | 29.22 | 61 |
| Scholar | 158 | 37.07 | 65 |
| Academic | 76 | 36.30 | 116 |
| Yelp | 62 | 28.92 | 56 |
| IMDB | 30 | 27.48 | 55 |
| Advising | 169 | 47.49 | 169 |
| WikiSQL | 39 | 12.48 | 23 |
| Spider | 1,116 | 17.99 | 87 |
| DuSQL | 2,323 | 20.23 | 37 |
| CSS | 562 | 55.41 | 243 |

Table 2: SQL statistics of existing datasets and CSS.

The SQL query structure may change as the database schema changes. Figure 3 shows an instance, where two databases share a similar structure but the key information "doctor" in the question is used in the FROM clause and the WHERE clause, respectively. We hypothesize that a model with strong cross-schema generalization ability should distinguish the syntactic role of every question token under different databases. Concretely, following the text-to-SQL model LGESQL, the model input is a graph G = (V, E) constructed from the given question and the database schema. Graph nodes V include question tokens and schema items (i.e. tables and columns), and graph edges E indicate relations among them. The model encodes each node $i$ into an embedding vector $\mathbf{x}_i$. Then the context vector $\tilde{\mathbf{x}}_i$ for each node $i$ can be computed with multi-head attention.

$$\alpha_{ij}^{h}=\mathrm{softmax}_{j\in\mathcal{N}_{i}}\frac{(\mathbf{x}_{i}\mathbf{W}_{q}^{h})(\mathbf{x}_{j}\mathbf{W}_{k}^{h})^{\mathrm{T}}}{\sqrt{d/H}},$$

$$\tilde{\mathbf{x}}_{i}=\big(\mathrm{concat}_{h=1}^{H}\sum_{j\in\mathcal{N}_{i}}\alpha_{ij}^{h}\mathbf{x}_{j}\mathbf{W}_{v}^{h}\big)\mathbf{W}_{o},$$

where $d$ is the dimension of the embedding vectors, $H$ is the number of heads, $\mathcal{N}_i$ is the neighborhood of node $i$, and $\mathbf{W}_q^h,\mathbf{W}_k^h,\mathbf{W}_v^h\in\mathbb{R}^{d\times d/H}$, $\mathbf{W}_o\in\mathbb{R}^{d\times d}$ are network parameters. For each question node $q_i$, the model can predict in which SQL clause it occurs from $\mathbf{x}_{q_i}$ and $\tilde{\mathbf{x}}_{q_i}$. Specifically, we divide the SQL query into 16 different parts, which are discussed in detail in Appendix B.
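For illustration, here is a rough PyTorch sketch of the SRP module: the multi-head context vector above plus a per-token classifier over the 16 SQL parts, which is formalized right below. The class name, default arguments, and shapes are our own simplification for this sketch, not the authors' released code.

```python
import torch
import torch.nn as nn

class SRPHead(nn.Module):
    """Syntactic role prediction: for every question node, predict whether it
    occurs in each of the 16 SQL parts (one binary label per part)."""

    def __init__(self, d: int, num_heads: int = 8, num_parts: int = 16):
        super().__init__()
        # nn.MultiheadAttention holds the per-head W_q, W_k, W_v and the output W_o
        # internally; d must be divisible by num_heads.
        self.attn = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * d, num_parts)

    def forward(self, x_q: torch.Tensor, x_all: torch.Tensor) -> torch.Tensor:
        # x_q:   (B, Nq, d) embeddings of question nodes
        # x_all: (B, N, d)  embeddings of all graph nodes (question tokens + schema items);
        #        the real model restricts attention to each node's graph neighborhood.
        ctx, _ = self.attn(x_q, x_all, x_all)                    # context vectors x~
        logits = self.classifier(torch.cat([x_q, ctx], dim=-1))  # [x; x~] W + b
        return torch.sigmoid(logits)                             # P(y | x, x~) per SQL part

# Training uses a binary cross-entropy over question tokens and the 16 parts,
# e.g. nn.BCELoss()(probs, gold), added to the main text-to-SQL objective.
```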
Thus the auxiliary task is a binary classification task for each question token and each SQL part:

$$P(\mathbf{y}_{q_i}|\mathbf{x}_{q_i},\tilde{\mathbf{x}}_{q_i})=\sigma([\mathbf{x}_{q_i};\tilde{\mathbf{x}}_{q_i}]\mathbf{W}+\mathbf{b}),$$

where $\mathbf{W}\in\mathbb{R}^{2d\times16}$, $\mathbf{b}\in\mathbb{R}^{1\times16}$ are network parameters and $\mathbf{y}_{q_i}$ is the probability vector. The ground truth $y^{g}_{q_i,j}$ is 1 when the question token $q_i$ occurs in the $j$-th SQL part. The training objective is

$$\mathcal{L}=-\sum_{q_i}\sum_{j}\Big[y^{g}_{q_i,j}\log P(y_{q_i,j}|\mathbf{x}_{q_i},\tilde{\mathbf{x}}_{q_i})+(1-y^{g}_{q_i,j})\log\big(1-P(y_{q_i,j}|\mathbf{x}_{q_i},\tilde{\mathbf{x}}_{q_i})\big)\Big].$$

The syntactic role prediction task is combined with the main task in a multitasking way. In addition, SRP can also be added to the RATSQL model directly, since RATSQL and LGESQL both encode graph nodes into embedding vectors and SRP only takes these vectors as its input.

## 6 Experiments

## 6.1 Experiment Setup

Baseline approaches We adopt three competitive text-to-SQL models as the baseline approaches, i.e. RATSQL (Wang et al., 2020a), LGESQL (Cao et al., 2021), and PICARD (Scholak et al., 2021). RATSQL and LGESQL process the given information with graph encoding and decode the abstract syntax tree (AST) of the resulting SQL query. PICARD is a sequence-to-sequence approach and is different from the other two approaches.

RATSQL constructs a graph with question tokens and schema items (i.e. tables and columns) and encodes the graph with the relation-aware self-attention mechanism. With this unified framework, RATSQL can easily establish and handle relations among graph nodes and then jointly encode elements of various categories. Compared with RATSQL, LGESQL improves the model performance by utilizing the line graph. LGESQL pays more attention to the topological structure of graph edges and distinguishes local and non-local relations for graph nodes. Besides the original graph used in RATSQL, LGESQL also constructs the corresponding line graph, since the line graph helps propagate encoding messages among nodes and edges.

Different from RATSQL and LGESQL, PICARD is a sequence-to-sequence model. Nowadays, large pretrained language models possess a strong ability to handle and process natural language with an unconstrained output space. However, SQL is a formal language with strict grammar rules, and invalid SQL queries are very likely to be generated if pretrained models are directly finetuned on text-to-SQL datasets. PICARD provides an approach that rejects invalid tokens at each decoding step and generates sequences in a constrained output space. For each baseline model, we use pretrained language models (PLMs) within the encoding module. In our experiments, the PLM longformer-chinese-base-4096 is applied in RATSQL and LGESQL and the PLM mbart-large-50 is applied in PICARD.

Evaluation metrics There are several metrics to evaluate text-to-SQL model performances, including exact match and execution accuracy. The exact match metric requires the predicted SQL query to be equivalent to the gold SQL query. The execution accuracy metric requires the execution result of the predicted SQL query to be correct. We mainly use the exact match (EM) metric in our experiments. Concretely, we present model performances with (w) and without (w/o) value evaluation, respectively.

## 6.2 Results And Analysis

According to the 3 different dataset splitting methods, we test the baseline models under 3 sub-task settings.
Table 3 shows model performances under the dataset splitting method according to examples. LGESQL achieves the best performance under this sub-task, i.e. 90.8% EM(w/o) accuracy and 81.1% EM(w) accuracy on the test set. This indicates that existing text-to-SQL parsing models already have the ability to perform very well if all databases and possible SQL structures have appeared in the training set. Models merely need to generalize on natural language, which is simple when utilizing strong PLMs.

Table 5 shows model performances under the template-splitting sub-task. Compared with the previous sub-task, the performances of the three baseline models decrease a lot. Although RATSQL achieves the best performance under this sub-task, the EM(w/o) accuracy and the EM(w) accuracy on the test set are only 58.9% and 53.0%, respectively. Question/SQL templates in the dev/test sets do not appear in the training set, so models have to predict unseen SQL patterns at test time. The experiment result indicates that there is still large room for improving model generalization ability across SQL patterns. We believe that CSS can also help facilitate research on improving models' ability to predict unseen SQL patterns.

Q: 列出水天干这位患者在医院7539997住院的就诊记录里入院科室名字含有耳鼻喉科的记录
Q: List the records of patient Tiangan Shui admitted to hospital 7539997, including the records with the department name containing Otolaryngology.
Gold: SELECT * FROM person_info JOIN hz_info JOIN zyjzjlb WHERE person_info.XM = "水天干" AND hz_info.YLJGDM = "7539997" AND zyjzjlb.JZKSMC LIKE "%耳鼻喉科%"
Pred: SELECT * FROM person_info JOIN hz_info JOIN zyjzjlb WHERE person_info.XM = "水天干" AND hz_info.YLJGDM = "7539997" AND zyjzjlb.JZKSMC LIKE "%耳鼻炎%"

Q: 从01年1月31日一直到09年8月12日内患者80476579被开出盐酸多奈哌齐片(薄膜)的总次数一共有多少?
Q: How many times has patient 80476579 been prescribed donepezil hydrochloride tablets (thin film) from 2001-01-31 to 2009-08-12?
Gold: SELECT COUNT(*) FROM t_kc21 JOIN t_kc22 WHERE t_kc21.PERSON_ID == "80476579" AND t_kc22.STA_DATE BETWEEN "2001-01-31" AND "2009-08-12" AND t_kc22.SOC_SRT_DIRE_NM == "盐酸多奈哌齐片(薄膜)"
Pred: SELECT COUNT(*) FROM t_kc21 JOIN t_kc22 WHERE t_kc21.PERSON_ID == "80476579" AND t_kc22.STA_DATE BETWEEN "2001-01-31" AND "2009-08-12" AND t_kc22.SOC_SRT_DIRE_NM == "盐酸多奈"

Table 4: Case study for the PICARD model when predicting values. FROM conditions are omitted for clarity.

Note that as a sequence-to-sequence approach, PICARD cannot perform as well as the two AST-based approaches (RATSQL and LGESQL) in the template-splitting sub-task.

| Model | Dev w/o | Dev w | Test w/o | Test w |
|---|---|---|---|---|
| RATSQL | 36.6 | 35.7 | 43.4 | 42.0 |
| RATSQL + SRP | 38.3 | 37.4 | 47.2 | 45.3 |

Table 6: Model performances under the dataset splitting method according to databases (EM without and with value evaluation).

There is a clear performance gap between PICARD and the AST-based approaches, especially when values in SQL queries are included in the evaluation. Table 4 shows two instances from the test set in the template-splitting sub-task, where the PICARD model successfully generates the structure of the SQL query but predicts the wrong value. As shown in Table 2, SQL queries in CSS are commonly very long and complex, which leads to great difficulty for PICARD decoding. The decoding error accumulates as the number of decoding steps increases. According to our statistical results, during the decoding process of the AST-based approaches, the average number of AST nodes is 56.95.
Although the average number of tokens in a SQL query is 55.41, the PLM used in PICARD splits tokens into many subwords. Consequently, the number of decoding steps of PICARD is actually much larger than that of the AST-based approaches. Furthermore, table and column names in CSS commonly consist of unnatural tokens, which greatly increases the decoding difficulty of PICARD.

Table 6 shows model performances under the dataset splitting method according to different databases. Under this sub-task, we use RATSQL as the baseline model and attempt to add the auxiliary task SRP, expecting to improve the model performance across different databases. The experiment result shows that the model performance increases by about 1.7% on the dev set and by about 3.3%-3.8% on the test set when SRP is applied to RATSQL. This proves that SRP can help improve the cross-schema generalization ability of the model when used as a simple baseline method.

Figure 4 is an instance from the test set, where RATSQL predicts the wrong SQL but RATSQL with SRP predicts the correct result. After database schema extension, a new relation table is created. However, RATSQL does not understand the change and misses the relation table in the FROM clause. On the contrary, the auxiliary task SRP helps the model utilize the relation table and eventually predict the correct SQL.

## 7 Conclusion

This paper presents CSS, a large-scale cross-schema Chinese text-to-SQL dataset designed for the medical domain. We illustrate the detailed process of dataset construction and also present statistical comparisons with existing datasets. We raise a challenging task in CSS, which requires models to generalize across various databases within the same domain. To tackle the above task, we design a baseline method named syntactic role prediction as an auxiliary task for model training. We conduct benchmark experiments with three competitive baseline models and show that future research on CSS is valuable.

## Limitations

We raise a new challenging task in our medical dataset CSS. Compared with existing datasets, CSS requires text-to-SQL models to generalize to different databases with a similar structure in the same domain. To tackle this problem, we provide a baseline method named syntactic role prediction, which is an auxiliary task and can be combined with the main task in a multitasking way. Our experiments show that SRP can help improve the cross-schema generalization ability of models. However, the improvement is not that large. How to generalize models across different databases sharing a similar structure is still a challenging issue. We expect that future work can solve this difficult problem.

## Ethics Statement

We collect two original medical databases from the real world. However, cell values in medical databases are commonly sensitive, since the information of patients and doctors is involved in these values. Thus we only retain the database schemas and generate sufficient cell values with certain rules. We ensure that the generated values are reasonable and that the privacy of medical system users is protected.

## Acknowledgments

We thank Xiaowen Li, Kunrui Zhu and Ruihui Zhao from Tencent Jarvis Lab for providing necessary initial data. We also thank all the anonymous reviewers for their thoughtful comments.
This work has been supported by the China NSFC Project (No.62106142 and No.62120106006), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), CCF-Tencent Open Fund and Startup Fund for Youngman Research at SJTU (SFYR at SJTU). ## References Ben Bogin, Jonathan Berant, and Matt Gardner. 2019. Representing schema structure with graph neural networks for text-to-SQL parsing. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 4560–4565, Florence, Italy. Association for Computational Linguistics. Ruisheng Cao, Lu Chen, Zhi Chen, Yanbin Zhao, Su Zhu, and Kai Yu. 2021. LGESQL: Line graph enhanced text-to-SQL model with mixed local and non-local relations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2541–2555, Online. Association for Computational Linguistics. Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. In *Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994*. Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving textto-SQL evaluation methodology. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 351–360, Melbourne, Australia. Association for Computational Linguistics. Alessandra Giordani and Alessandro Moschitti. 2012. Translating questions to SQL queries with generative parsers discriminatively reranked. In *Proceedings of* COLING 2012: Posters, pages 401–410, Mumbai, India. The COLING 2012 Organizing Committee. Jiaqi Guo, Ziliang Si, Yu Wang, Qian Liu, Ming Fan, Jian-Guang Lou, Zijiang Yang, and Ting Liu. 2021. Chase: A large-scale and pragmatic Chinese dataset for cross-database context-dependent text-to-SQL. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2316–2331, Online. Association for Computational Linguistics. Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, JianGuang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524–4535, Florence, Italy. Association for Computational Linguistics. Binyuan Hui, Ruiying Geng, Lihan Wang, Bowen Qin, Yanyang Li, Bowen Li, Jian Sun, and Yongbin Li. 2022. S 2SQL: Injecting syntax to question-schema interaction graph encoder for text-to-SQL parsers. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1254–1262, Dublin, Ireland. Association for Computational Linguistics. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 963–973, Vancouver, Canada. Association for Computational Linguistics. Fei Li and H. V. Jagadish. 2014. 
Constructing an interactive natural language interface for relational databases. *Proc. VLDB Endow.*, 8(1):73–84. Jieyu Li, Lu Chen, Ruisheng Cao, Su Zhu, Hongshen Xu, Zhi Chen, Hanchong Zhang, and Kai Yu. 2023a. On the structural generalization in text-to-sql. Jieyu Li, Zhi Chen, Lu Chen, Zichen Zhu, Hanqi Li, Ruisheng Cao, and Kai Yu. 2023b. Dir: A large-scale dialogue rewrite dataset for cross-domain conversational text-to-sql. *Applied Sciences*, 13(4). Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2020. Bridging textual and tabular data for crossdomain text-to-SQL semantic parsing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4870–4888, Online. Association for Computational Linguistics. Ana-Maria Popescu, Oren Etzioni, and Henry Kautz. 2003. Towards a theory of natural language interfaces to databases. In *Proceedings of the 8th International Conference on Intelligent User Interfaces*, IUI '03, page 149–157, New York, NY, USA. Association for Computing Machinery. P. J. Price. 1990. Evaluation of spoken language systems: the ATIS domain. In *Speech and Natural Language: Proceedings of a Workshop Held at Hidden* Valley, Pennsylvania, June 24-27,1990. Jiexing Qi, Jingyao Tang, Ziwei He, Xiangpeng Wan, Yu Cheng, Chenghu Zhou, Xinbing Wang, Quanshi Zhang, and Zhouhan Lin. 2022. RASAT: Integrating relational structures into pretrained Seq2Seq model for text-to-SQL. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 3215–3229, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. 2021. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 9895–9901, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Lappoon R. Tang and Raymond J. Mooney. 2000. Automated construction of database interfaces: Intergrating statistical and relational learning for semantic parsing. In 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 133–141, Hong Kong, China. Association for Computational Linguistics. Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020a. RATSQL: Relation-aware schema encoding and linking for text-to-SQL parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7567–7578, Online. Association for Computational Linguistics. Lijie Wang, Ao Zhang, Kun Wu, Ke Sun, Zhenghua Li, Hua Wu, Min Zhang, and Haifeng Wang. 2020b. DuSQL: A large-scale and pragmatic Chinese text-toSQL dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6923–6935, Online. Association for Computational Linguistics. Xiaojun Xu, Chang Liu, and Dawn Song. 2017. Sqlnet: Generating structured queries from natural language without reinforcement learning. *arXiv preprint* arXiv:1711.04436. Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. Type- and content-driven synthesis of SQL queries from natural language. *CoRR*, abs/1702.01168. 
Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019a. CoSQL: A conversational text-to-SQL challenge towards crossdomain natural language interfaces to databases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1962– 1979, Hong Kong, China. Association for Computational Linguistics. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics. Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019b. SParC: Cross-domain semantic parsing in context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4511–4523, Florence, Italy. Association for Computational Linguistics. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 2, AAAI'96, page 1050–1055. AAAI Press. Rui Zhang, Tao Yu, Heyang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019. Editingbased SQL query generation for cross-domain context-dependent questions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5338–5349, Hong Kong, China. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103. ## A Rewriting Metric First of all, we define the rewriting ratio (RR) between two different sentences s1 and s2, i.e. $$R R(s_{1},s_{2})={\frac{\mathrm{EditDistance}(s_{1},s_{2})}{|s_{1}|+|s_{2}|}},$$ where EditDistance(s1, s2) represents the edit distance between s1 and s2. Assume that si,1, si,2, · · · , si,10 are ten rewritten question sentences derived from the same question/SQL template i. In order to improve the language diversity, we expect ten rewritten sentences to differ from each other. Thus we request annotators to maximize $${\frac{1}{N}}\sum_{i=1}^{N}{\frac{1}{55}}\sum_{1\leq j<k\leq10}R R(s_{i,j},s_{i,k}),$$ when rewriting, where N is the number of question/SQL templates. When merging rewriting results from two groups of annotators, for each example with the original question sentence s o, we need to decide between two rewritten sentences s r1 and s r2 . Here we choose s r1 only if RR(s o, sr1) *> RR*(s o, sr2). ## B Syntactic Role Prediction We divide the SQL query into 16 different parts. 
Table 7 shows detailed situations. For each question token qi, we find out all schema items which have schema linking relations with qi. Then for each SQL part, we label that qi appears in this part if qiitself or one of those schema items appears in this part. | Name | Description | |-------------------------------------|----------------------------------------------------------------| | NONE | Element is not used in SQL. | | SELECT | Element is a normal column in SELECT. | | SELECT_AGG | Element is a column with an aggregation function in SELECT. | | SELECT_NEST | Element appears in SELECT, where SELECT is a nested SQL query. | | FROM | Element is a normal table in FROM. | | FROM_NEST | Element appears in FROM, where FROM is a nested SQL query. | | WHERE | Element is a normal column in WHERE | | WHERE_NEST | Element appears in WHERE, where WHERE is a nested SQL query | | GROUP | Element is a normal column in GROUP BY. | | HAVING | Element appears in HAVING. | | ORDER | Element is a normal column in ORDER BY. | | ORDER_AGG | Element is a column with an aggregation funciton in ORDER BY. | | LIMIT | Element appears in LIMIT. | | INTERSECT | Element appears in INTERSECT. | | UNION | Element appears in UNION. | | EXCEPT | Element appears in EXCEPT. | | Table 7: 16 parts of the SQL query. | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 8 ✓ A2. Did you discuss any potential risks of your work? 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 5,6 ✓ B1. Did you cite the creators of artifacts you used? 5,6 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 3 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4 ## C ✓ **Did You Run Computational Experiments?** 5,6 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? It is not important. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✗ C2. 
Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? It is not important. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5,6 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 3 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 3 ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? 3 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? 3,5,6 ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? 3 ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 3
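As a companion to Appendix B (Table 7) above, the syntactic-role labeling rule can be sketched as follows. The data structures (`schema_links`, `element_parts`) are hypothetical stand-ins for the outputs of schema linking and SQL parsing; only the 16 part names come from Table 7.

```python
SQL_PARTS = [
    "NONE", "SELECT", "SELECT_AGG", "SELECT_NEST", "FROM", "FROM_NEST",
    "WHERE", "WHERE_NEST", "GROUP", "HAVING", "ORDER", "ORDER_AGG",
    "LIMIT", "INTERSECT", "UNION", "EXCEPT",
]

def label_token(token, schema_links, element_parts):
    """Multi-label syntactic-role targets for one question token q_i.

    token          -- the question token q_i
    schema_links   -- schema items linked to q_i by schema linking
    element_parts  -- dict mapping an element (token or schema item) to the
                      set of SQL parts it appears in, derived from the SQL query
    """
    labels = set()
    for element in {token, *schema_links}:
        labels |= set(element_parts.get(element, ()))
    return sorted(labels) if labels else ["NONE"]
```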
bassignana-etal-2023-silver
Silver Syntax Pre-training for Cross-Domain Relation Extraction
https://aclanthology.org/2023.findings-acl.436
Relation Extraction (RE) remains a challenging task, especially when considering realistic out-of-domain evaluations. One of the main reasons for this is the limited training size of current RE datasets: obtaining high-quality (manually annotated) data is extremely expensive and cannot realistically be repeated for each new domain. An intermediate training step on data from related tasks has shown to be beneficial across many NLP tasks. However, this setup still requires supplementary annotated data, which is often not available. In this paper, we investigate intermediate pre-training specifically for RE. We exploit the affinity between syntactic structure and semantic RE, and identify the syntactic relations which are closely related to RE by being on the shortest dependency path between two entities. We then take advantage of the high accuracy of current syntactic parsers in order to automatically obtain large amounts of low-cost pre-training data. By pre-training our RE model on the relevant syntactic relations, we are able to outperform the baseline in five out of six cross-domain setups, without any additional annotated data.
# Silver Syntax Pre-Training For Cross-Domain Relation Extraction Elisa Bassignana☼ Filip GinterÚ **Sampo Pyysalo**Ú Rob van der Goot☼ **Barbara Plank**☼U ☼Department of Computer Science, IT University of Copenhagen, Denmark ÚTurkuNLP, Department of Computing, University of Turku, Finland UMaiNLP, Center for Information and Language Processing, LMU Munich, Germany elba@itu.dk ## Abstract Relation Extraction (RE) remains a challenging task, especially when considering realistic outof-domain evaluations. One of the main reasons for this is the limited training size of current RE datasets: obtaining high-quality (manually annotated) data is extremely expensive and cannot realistically be repeated for each new domain. An intermediate training step on data from related tasks has shown to be beneficial across many NLP tasks. However, this setup still requires supplementary annotated data, which is often not available. In this paper, we investigate intermediate pre-training specifically for RE. We exploit the affinity between syntactic structure and semantic RE, and identify the syntactic relations which are closely related to RE by being on the shortest dependency path between two entities. We then take advantage of the high accuracy of current syntactic parsers in order to automatically obtain large amounts of low-cost pre-training data. By pre-training our RE model on the relevant syntactic relations, we are able to outperform the baseline in five out of six cross-domain setups, without any additional annotated data. ## 1 Introduction Relation Extraction (RE) is the task of extracting structured knowledge, often in the form of triplets, from unstructured text. Despite the increasing attention this task received in recent years, the performance obtained so far are very low (Popovic and Färber, 2022). This happens in particular when considering realistic scenarios which include outof-domain setups, and deal with the whole taskin contrast to the simplified Relation Classification which assumes that the correct entity pairs are given (Han et al., 2018; Baldini Soares et al., 2019; Gao et al., 2019). One main challenge of RE and other related Information Extraction tasks is the "domain-specificity": Depending on the text domain, the type of information to extract changes. ![0_image_0.png](0_image_0.png) Figure 1: **Syntactic and Semantic Structures Affinity.** Shortest dependency path (above), and semantic relation (below) between two semantic entities. For example, while in the news domain we can find entities like *person* and *city*, and relations like city of birth (Zhang et al., 2017), in scientific texts, we can find information about metrics, *tasks* and comparisons between computational models (Luan et al., 2018). While high-quality, domain-specific data for fine-tuning the RE models would be ideal, as for many other NLP tasks, annotating data is expensive and time-consuming.1 A recent approach that leads to improved performance on a variety of NLP tasks is intermediate task training. It consists of a step of training on one or more NLP tasks between the general language model pre-training and the specific end task fine-tuning (STILT, Supplementary Training on Intermediate Labeled-data Tasks; Phang et al., 2018). However, STILT assumes the availability of additional high quality training data, annotated for a related task. 
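The core primitive behind Figure 1—the shortest dependency path between two entities—can be sketched as below. This is a minimal illustration rather than the authors' code: it assumes a head-indexed dependency parse (e.g., CoNLL-U style), reduces each entity span to a single head token, and uses `networkx` only for path finding.

```python
import networkx as nx

def shortest_dependency_path(heads, deprels, src, tgt):
    """UD relations on the shortest path between tokens src and tgt.

    heads   -- heads[i] is the 0-based head index of token i (-1 for the root)
    deprels -- deprels[i] labels the edge heads[i] -> i
    src/tgt -- token indices of the two entity heads
    """
    g = nx.Graph()
    for i, h in enumerate(heads):
        if h >= 0:
            g.add_edge(h, i, deprel=deprels[i])
    path = nx.shortest_path(g, src, tgt)
    return [g.edges[a, b]["deprel"] for a, b in zip(path, path[1:])]
```

Counting these labels over annotated relation instances is what later motivates the frequent-label set used for syntax pre-training (Section 3.1).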
In this paper, we explore intermediate pretraining specifically for cross-domain RE and look for alternatives which avoid the need of external manually annotated datasets to pre-train the model on. In particular, we analyze the affinity between syntactic structure and semantic relations, by considering the shortest dependency path between two entities (Bunescu and Mooney, 2005; Fundel et al., 2006; Björne et al., 2009; Liu et al., 2015). We replace the traditional intermediate pre-training step 1For example, Bassignana and Plank, 2022 report a cost of 19K USD ( ≈ 1$ per annotated relation) and seven months of annotation work for an RE dataset including 5.3K sentences. 6984 ![1_image_0.png](1_image_0.png) TYPE-OF on additional annotated data, with a *syntax pretraining* step on silver data. We exploit the high accuracy of current syntax parsers, for obtaining large amount of low-cost pre-training data. The use of syntax has a long tradition in RE (Zhang et al., 2006; Qian et al., 2008; Nguyen et al., 2009; Peng et al., 2015). Recently, work has started to infuse syntax during language model pre-training (Sachan et al., 2021) showing benefits for RE as well. We instead investigate dependency information as silver data in intermediate training, which is more efficient. To the best of our knowledge, the use of syntax in intermediate pre-training for RE is novel. We aim to answer the following research questions: 1 Does syntax help RE via intermediate pre-training (fast and cheap approach)? and 2 How does it compare with pre-training on additional labeled RE data (expensive)? We release our model and experiments.2 ## 2 Syntax Pre-Training For Re Syntactic parsing is a structured prediction task aiming to extract the syntactic structure of text, most commonly in the form of a tree. RE is also a structured prediction task, but with the aim of extracting the semantics expressed in a text in the form of triplets—entity A, entity B, and the semantic relation between them.3 We exploit the affinity of these two structures by considering the shortest dependency path between two (semantic) entities (see Figure 1). The idea we follow in this work is to pre-train an RE baseline model over the syntactic relations— Universal Dependency (UD) labels—which most frequently appear on the shortest dependency paths between two entities (black bold arrows in Figure 2). We assume these labels to be the most relevant with respect to the final target task of RE. In order to feed the individual UD relations into the RE baseline (model details in Section 3.1) we treat them similarly as the semantic connections. In respect to Figure 2, we can formalize the semantic relations as the following triplets: - NAMED(LFP,Linear-fractional programming) - TYPE-OF(linear programming,Linear-fractional programming) - NAMED(LP,linear programming). Accordingly, we define the syntax pre-training instances as: - appos(programming,LFP) - nsubj(generalization,programming) - nmod(generalization,programming) - appos(programming,LP). In the next section we describe the detailed training process. ## 3 Experiments 3.1 Setup Data In order to evaluate the robustness of our method over out-of-domain distributions, we experiment with CrossRE (Bassignana and Plank, 2022),4a recently published multi-domain dataset. CrossRE includes 17 relation types spanning over six diverse text domains: news, politics, natural science, music, literature and artificial intelligence (AI). 
The dataset was annotated on top of a Named 4Released with a GNU General Public License v3.0. ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png) Entity Recognition dataset—CrossNER (Liu et al., 2021)—which comes with an unlabeled domainrelated corpora.5 We used the latter for the syntax pre-training phase. UD Label Selection In order to select the UD labels which most frequently appear on the shortest dependency path between two semantic entities, we parsed the training portions of CrossRE. Our analysis combines RE annotations and syntactically parsed data. We observe that the syntactic distance between two entities is often higher than one (see Figure 4), meaning that the shortest dependency path between two entities includes multiple dependencies—in the examples in Figure 1, the one above has distance one, the one below has distance two. However, the shortest dependency paths contain an high frequency of just a few UD labels (see Figure 3) which we use for *syntax pre-training*: nsubj, obj, obl, nmod, appos. See Appendix A for additional data analysis. Model Our RE model follows the current stateof-the-art architecture by Baldini Soares et al., 2019 which augments the sentence with four entity markers e start 1, e end 1, e start 2, e end 2before feeding it into a pre-trained encoder (BERT; Devlin et al., 2019). 5Released with an MIT License. The classification is then made by a 1-layer feedforward neural network over the concatenation of the start markers [ˆse start 1, sˆe start 2]. We run our experiments over five random seeds and report the average performance. See Appendix B for reproducibility and hyperparameters settings of our model. Training The training of our RE model is divided into two phases. In the first one—which we are going to call *syntax pre-training*—we use the unlabeled corpora from CrossNER for pre-training the baseline model over the *RE-relevant* UD labels. To do so, 1 we sample an equal amount of sentences from each domain6(details in Section 4), and 2 use the MaChAmp toolkit (van der Goot et al., 2021) for inferring the syntactic tree of each of them. We apply an additional sub-step for disentangling the conj dependency, as illustrated in Appendix C. Then, 3 we filer in only the nsubj, obj, obl, nmod, and appos UD labels and 4 feed those connections to the RE model (as explained in the previous section). Within the RE model architecture described above, each triplet corresponds to one instance. In this phase, in order to assure more variety, we randomly select a maximum of five triplets from each pre-train sentence. In the second training phase—the *fine-tuning* one—we replace the classification head (i.e. the feed-forward layer) with a new one, and individually train six copies of the model over the six train sets of CrossRE. Note that the encoder is fine-tuned in both training phases. Finally, we test each model on in- and out-of-domain setups. ![3_image_1.png](3_image_1.png) ![3_image_0.png](3_image_0.png) ![3_image_3.png](3_image_3.png) TRAIN news politics science music literature AI avg. 
news 10.98 1.32 1.24 1.01 1.49 1.42 **2.91** politics 16.07 11.30 6.74 7.24 7.29 5.54 9.03 science 6.54 5.95 8.57 7.13 6.65 7.29 7.02 music 3.99 9.91 9.22 19.01 10.43 8.53 10.18 literature 11.30 9.60 9.79 12.49 17.17 9.79 11.69 AI 6.58 7.42 11.03 7.11 6.15 15.57 8.98 news 6.67 1.15 0.72 0.61 1.13 0.75 1.84 politics 13.72 12.09 7.47 7.15 7.78 6.24 **9.08** science 8.46 7.08 8.69 8.19 7.52 8.91 **8.14** music 3.35 10.65 9.35 18.63 11.62 10.34 **10.66** literature 11.85 9.84 10.35 13.58 18.64 9.94 **12.37** AI 8.87 8.59 11.87 8.29 7.68 15.93 **10.21** news 11.88 2.30 2.09 1.13 1.82 2.16 3.56 politics 14.25 13.55 6.52 7.12 7.42 7.07 9.32 science 8.27 10.31 13.59 9.09 7.78 11.11 10.03 music 5.41 11.84 10.85 21.39 12.26 11.22 12.16 literature 12.36 8.05 8.87 13.13 16.44 9.40 11.37 AI 11.00 10.12 14.03 8.93 8.50 18.89 11.91 ## 3.2 Results Table 1 reports the results of our cross-domain experiments in terms of Macro-F1. We compare our proposed approach which adopts *syntax* pre-training with the zero-shot baseline model.7 Five out of six models outperform the average of the baseline evaluation, including in- and out-ofdomain assessments. The average improvementobtained without any additional annotated RE data—is 0.71, which considering the low score range given by the challenging dataset (with limited train sets, see dataset size in Appendix D), and the cross-domain setup, is considerable. The model fine-tuned on the news domain is the only one not outperforming the baseline. However, the performance scores on this domain are already extremely low for the baseline, because news comes from a different data source with respect to the other domains, has a considerable smaller train set, and present a sparse relation types distribution, making it a bad candidate for transferring to other domains (Bassignana and Plank, 2022). As comparison, we report the scores obtained with the traditional intermediate pre-training which includes additional annotated data. We pre-train the language encoder on SciERC (Luan et al., 2018), a manually annotated dataset for RE. SciERC contains seven relation types, of which three overlap ![3_image_2.png](3_image_2.png) ![3_image_4.png](3_image_4.png) ![3_image_5.png](3_image_5.png) with the CrossRE relation set. In this setup, the improvement over the baseline includes the news, but not the literature domain. Nevertheless, while the gain is on average slightly higher with respect to the proposed *syntax pre-training* approach, it comes at a much higher annotation cost. ## 4 Pre-Training Data Quantity Analysis We inspect the optimal quantity of syntactic data to pre-train our RE model on by fine-tuning this hyperparameter over the dev sets of CrossRE. The plot in Figure 5 reports the average performance of the six models when pre-trained on increasing amounts of syntactic dependencies.8 Starting from 8.4K instances onward, the performance stabilizes above the baseline. We select the peak (20.4K, albeit results are similar between 18-20.4K) for reporting our test set results in Table 1. While we are interested in the robustness of our method across multiple domains, and therefore consider the average (Figure 5), domain-optima could be achieved by examining individual domain performance. As example, we report in Figure 6 the plot relative to the model fine-tuned on AI, which is the one obtain-8Pre-training performance in Appendix E. ing the highest gain. 
The model fine-tuned on AI generally gains a lot from the *syntax pre-training* step, with its peak on 15.6K pre-training instances. ## 5 Conclusion We introduce *syntax pre-training* for RE as an alternative to the traditional intermediate training which uses additional manually annotated data. We pretrain our RE model over silver UD labels which most frequently connect the semantic entities via the shortest dependency path. We test the proposed method over CrossRE and outperform the baseline in five out of six cross-domain setups. Pre-training over a manually annotated dataset, in comparison, only slightly increases our scores in five out of six evaluations, but at a much higher cost. ## Limitations While we already manage to outperform the baseline, the pre-training data quantity is relatively small (∼20K instances). Given the computational cost of training 30 models—six train sets, over five random seeds each—and testing them within inand cross- domain setups, we break the inspection of the optimal pre-training data amount at 24K instances. However we do not exclude that more pre-training instances would be even more beneficial for improving even more over the baseline. Related to computation cost constrains, we test our *syntax pre-training* approach over one set of UD labels only (nsubj, obj, obl, nmod, appos). Different sets could be investigated, e.g. including acl and compound, which present a lower, but still considerable amount of instances (see Figure 3). Finally, while approaching RE by assuming that the gold entities are given is a common area of research, we leave for future work the inspection of the proposed method over end-to-end RE. ## Acknowledgments We thank the NLPnorth and the MaiNLP groups for feedback on an earlier version of this paper, and TurkuNLP for hosting EB for a research stay. EB and BP are supported by the Independent Research Fund Denmark (Danmarks Frie Forskningsfond; DFF) Sapere Aude grant 9063-00077B. BP is in parts supported by the European Research Council (ERC) (grant agreement No. 101043235). FG and SP were supported by the Academy of Finland. ## References Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 2895– 2905, Florence, Italy. Association for Computational Linguistics. Elisa Bassignana and Barbara Plank. 2022. CrossRE: A cross-domain dataset for relation extraction. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3592–3604, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jari Björne, Juho Heimonen, Filip Ginter, Antti Airola, Tapio Pahikkala, and Tapio Salakoski. 2009. Extracting complex biological events with rich graph-based feature sets. In Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task, pages 10–18, Boulder, Colorado. Association for Computational Linguistics. Razvan Bunescu and Raymond Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 724–731, Vancouver, British Columbia, Canada. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. 
Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Katrin Fundel, Robert Küffner, and Ralf Zimmer. 2006. RelEx—Relation extraction using dependency parse trees. *Bioinformatics*, 23(3):365–371. Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2019. FewRel 2.0: Towards more challenging few-shot relation classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6250–6255, Hong Kong, China. Association for Computational Linguistics. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803–4809, Brussels, Belgium. Association for Computational Linguistics. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. In *Proceedings* of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 285–290, Beijing, China. Association for Computational Linguistics. Zihan Liu, Yan Xu, Tiezheng Yu, Wenliang Dai, Ziwei Ji, Samuel Cahyawijaya, Andrea Madotto, and Pascale Fung. 2021. Crossner: Evaluating cross-domain named entity recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13452– 13460. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219–3232, Brussels, Belgium. Association for Computational Linguistics. Truc-Vien T. Nguyen, Alessandro Moschitti, and Giuseppe Riccardi. 2009. Convolution kernels on constituent, dependency and sequential structures for relation extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1378–1387, Singapore. Association for Computational Linguistics. Yifan Peng, Samir Gupta, Cathy Wu, and Vijay Shanker. 2015. An extended dependency graph for relation extraction in biomedical texts. In Proceedings of BioNLP 15, pages 21–30, Beijing, China. Association for Computational Linguistics. Jason Phang, Thibault Févry, and Samuel R Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. *arXiv* preprint arXiv:1811.01088. Nicholas Popovic and Michael Färber. 2022. Few-shot document-level relation extraction. 
In *Proceedings* of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5733–5746, Seattle, United States. Association for Computational Linguistics. Longhua Qian, Guodong Zhou, Fang Kong, Qiaoming Zhu, and Peide Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 697–704, Manchester, UK. Coling 2008 Organizing Committee. Devendra Sachan, Yuhao Zhang, Peng Qi, and William L. Hamilton. 2021. Do syntax trees help pre-trained transformers extract information? In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2647–2661, Online. Association for Computational Linguistics. Rob van der Goot, Ahmet Üstün, Alan Ramponi, Ibrahim Sharaf, and Barbara Plank. 2021. Massive choice, ample tasks (MaChAmp): A toolkit for multitask learning in NLP. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 176–197, Online. Association for Computational Linguistics. Min Zhang, Jie Zhang, and Jian Su. 2006. Exploring syntactic features for relation extraction using a convolution tree kernel. In *Proceedings of the Human Language Technology Conference of the NAACL,* Main Conference, pages 288–295, New York City, USA. Association for Computational Linguistics. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 35–45, Copenhagen, Denmark. Association for Computational Linguistics. ## A Ud Analysis For Re We inspect the same statistics as Figure 3 and Figure 4—UD labels on the shortest dependency paths, and shortest dependency path lengths respectivelybut instead of at domain level, at semantic relation type level. Table 2 and Table 3 report this analysis, revealing similar trends over the 17 types. ## B Reproducibility We report in Table 4 the hyperparameter setting of our RE model (see Section 3.1). All experiments were ran on an NVIDIA® A100 SXM4 40 GB GPU and an AMD EPYC™ 7662 64-Core CPU. Within this computation infrastructure the baseline converges in ∼ 7 minutes. The the *syntax pretraining* step takes ∼ 10 minutes, to which we have to add ∼ 7 minutes in order to obtain the complete training time. We train MaChAmp v0.4 on the English Web Treebank v2.10 with XLM-R large (Conneau et al., 2020) as language model with all default hyperparameters of MaChAmp. 
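To complement the hyperparameters in Table 4, here is a minimal sketch of the entity-marker classifier described in Section 3.1: the encoder reads the marker-augmented sentence and a 1-layer feed-forward head classifies the concatenation of the two start-marker states. The class and argument names are illustrative, not the released implementation.

```python
import torch
from torch import nn
from transformers import AutoModel

class MarkerREClassifier(nn.Module):
    """BERT encoder + 1-layer FFNN over the concatenated start-marker states."""

    def __init__(self, num_relations, encoder_name="bert-base-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(2 * hidden, num_relations)

    def forward(self, input_ids, attention_mask, e1_start_idx, e2_start_idx):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        batch = torch.arange(input_ids.size(0), device=input_ids.device)
        s1 = states[batch, e1_start_idx]   # hidden state at the first start marker
        s2 = states[batch, e2_start_idx]   # hidden state at the second start marker
        return self.head(torch.cat([s1, s2], dim=-1))
```

In the two-phase training of Section 3.1, the same backbone would first be trained on the UD-label triplets and the linear head re-initialized before fine-tuning on CrossRE.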
![6_image_6.png](6_image_6.png) rel-to artifact cause-eff compare gen-aff named opposite origin part-of physical role social temporal topic type-of usage win-def ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) ![6_image_2.png](6_image_2.png) ![6_image_3.png](6_image_3.png) ![6_image_4.png](6_image_4.png) ![6_image_5.png](6_image_5.png) nsubj 89 106 2 12 120 54 61 53 75 115 248 33 54 10 18 30 68 obj 78 51 1 6 76 36 48 41 83 55 129 9 48 17 14 37 86 iobj 0 0 0 0 0 1 0 0 0 1 3 0 0 0 0 0 0 ccomp 5 7 0 4 7 10 8 2 2 2 9 0 0 0 0 0 0 xcomp 6 9 0 3 15 5 5 9 5 11 17 1 16 3 1 2 10 obl 88 62 5 14 92 53 25 44 77 202 224 19 121 17 26 6 11 advcl 10 9 4 8 47 21 19 10 18 14 41 3 15 2 6 7 2 advmod 0 3 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 nmod 100 140 2 12 181 57 47 58 148 276 386 29 72 43 35 19 48 appos 26 89 0 2 85 108 11 23 41 72 112 9 12 6 6 1 20 nummod 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 acl 40 24 0 0 39 30 10 25 48 33 74 0 11 24 2 13 15 amod 5 1 0 2 31 5 3 3 5 2 3 0 3 2 0 3 4 det 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 conj 1 4 0 0 3 1 0 1 2 3 11 0 1 1 0 0 0 flat 2 3 0 0 1 12 8 0 2 11 37 8 7 1 0 0 3 compound 29 24 0 5 70 27 5 7 54 53 57 2 9 2 5 10 22 list 0 1 0 0 2 2 0 0 0 0 2 0 2 0 0 0 0 parataxis 5 7 0 0 30 14 0 0 14 5 17 1 8 1 5 0 1 orphan 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 punct 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ![6_image_7.png](6_image_7.png) Table 3: **Shortest Dependency Path Length per Relation Type.** Statistics of the shortest dependency path length between two semantic entities divided by the 17 relation types of CrossRE (Bassignana and Plank, 2022). | Parameter | Value | |---------------|------------------------------| | Encoder | bert-base-cased | | Classifier | 1-layer FFNN | | Loss | Cross Entropy | | Optimizer | Adam optimizer | | Batch size | 12, 24 | | Learning rate | 1e −5 (pre-train) | | Learning rate | 2e −5 (fine-tuning) | | Seeds | 4012, 5096, 8878, 8857, 9908 | ![6_image_8.png](6_image_8.png) ## C Handling Of Conj In UD, the first element in a conjuncted list governs all other elements of the list via a conj dependency and represents the list syntactically w.r.t. the remainder of the sentence. CrossRE (Bassignana and Plank, 2022) relations, on the other hand, directly link the two entities involved in the semantic structure. To account for this difference, we propagate the conjunction dependencies in order to reflect the semantic relations, as shown in Figure 7. train dev test **tot.** train dev test **tot.** news 164 350 400 914 175 300 396 871 politics 101 350 400 851 502 1,616 1,831 3,949 science 103 351 400 854 355 1,340 1,393 3,088 music 100 350 399 849 496 1,861 2,333 4,690 literature 100 400 416 916 397 1,539 1,591 3,527 AI 100 350 431 881 350 1,006 1,127 2,483 tot. 668 2,151 2,446 **5,265** 2,275 7,662 8,671 **18,608** SENTENCES RELATIONS ![6_image_9.png](6_image_9.png) ## D Crossre Size We report in Table 5 the dataset statistics of CrossRE (Bassignana and Plank, 2022) including the number of sentences and of relations. ## E Syntax Pre-Training Performance Figure 8 reports the performance of the RE model during the *syntax pre-training* phase, over increasing amounts of pre-training dependency instances. The scores are computed on a set including 600 sentences (100 per domain) not overlapping with the train set used in the syntax pre-training phase. ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section "Limitation" after section 5. ✗ A2. 
Did you discuss any potential risks of your work? We do not see any potential risk in our work: using an existing dataset, an existing model architecture. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? - ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3. ✓ B1. Did you cite the creators of artifacts you used? Section 3. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 3. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? I used already published data. In the paper we refer to the original dataset paper. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3, and reference to original dataset paper. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3, Section 4, Appendix D, and reference to original dataset paper. ## C ✓ **Did You Run Computational Experiments?** Section 3. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3, Section 4, and Appendix B. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. - D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. - D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. 
Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
huang-etal-2023-fastdiff
{F}ast{D}iff 2: Revisiting and Incorporating {GAN}s and Diffusion Models in High-Fidelity Speech Synthesis
https://aclanthology.org/2023.findings-acl.437
Generative adversarial networks (GANs) and denoising diffusion probabilistic models (DDPMs) have recently achieved impressive performances in image and audio synthesis. After revisiting their success in conditional speech synthesis, we find that 1) GANs sacrifice sample diversity for quality and speed, 2) diffusion models exhibit outperformed sample quality and diversity at a high computational cost, where achieving high-quality, fast, and diverse speech synthesis challenges all neural synthesizers. In this work, we propose to converge advantages from GANs and diffusion models by incorporating both classes, introducing dual-empowered modeling perspectives: 1) FastDiff 2 (DiffGAN), a diffusion model whose denoising process is parametrized by conditional GANs, and the non-Gaussian denoising distribution makes it much more stable to implement the reverse process with large steps sizes; and 2) FastDiff 2 (GANDiff), a generative adversarial network whose forward process is constructed by multiple denoising diffusion iterations, which exhibits better sample diversity than traditional GANs. Experimental results show that both variants enjoy an efficient 4-step sampling process and demonstrate superior sample quality and diversity. Audio samples are available at \url{https://RevisitSpeech.github.io/}
# Fastdiff 2: Revisiting And Incorporating Gans And Diffusion Models In High-Fidelity Speech Synthesis Rongjie Huang1, Yi Ren1, Ziyue Jiang1, Chenye Cui1, Jinglin Liu1**, Zhou Zhao**1∗ Zhejiang University1 ## Abstract Generative adversarial networks (GANs) and denoising diffusion probabilistic models (DDPMs) have recently achieved impressive performances in image and audio synthesis. After revisiting their success in conditional speech synthesis, we find that 1) GANs sacrifice sample diversity for quality and speed, 2) diffusion models exhibit outperformed sample quality and diversity at a high computational cost, where achieving high-quality, fast, and diverse speech synthesis challenges all neural synthesizers. In this work, we propose to converge advantages from GANs and diffusion models by incorporating both classes, introducing dualempowered modeling perspectives: 1) FastDiff 2 (DiffGAN), a diffusion model whose denoising process is parametrized by conditional GANs, and the non-Gaussian denoising distribution makes it much more stable to implement the reverse process with large steps sizes; and 2) FastDiff 2 (GANDiff), a generative adversarial network whose forward process is constructed by multiple denoising diffusion iterations, which exhibits better sample diversity than traditional GANs. Experimental results show that both variants enjoy an efficient 4step sampling process and demonstrate superior sample quality and diversity.1 ## 1 Introduction Speech synthesis has seen extraordinary progress with the recent development of deep generative models in machine learning (Lv et al., 2023b; Ye et al., 2023b; Zhang et al., 2021, 2022c; Li et al., 2023). Previous models (Oord et al., 2016; Kalchbrenner et al., 2018) generate waveforms autoregressively from mel-spectrograms yet suffer from slow inference speed. Non-autoregressive methods (Huang et al., 2022c, 2023a; Ye et al., 2023a; Jiang et al., 2021) have been designed to address ∗Corresponding author 1Audio samples are available at https:// RevisitSpeech.github.io/ this issue, they generate samples with extremely fast speed and achieve comparable voice quality with autoregressive models. Among them, Generative adversarial networks (GANs) (Creswell et al., 2018; Mao et al., 2019; Jiang et al., 2022) and denoising diffusion probabilistic models (DDPMs) (Ho et al., 2020; Song et al., 2020) are two popular classes of deep generative models that have demonstrated surprisingly good results and dominated speech synthesis: Jang et al. (2021) utilize local-variable convolution to capture different waveform intervals with adversarial learning. Kong et al. (2020a) propose multireceptive field fusion (MRF) to model the periodic patterns matters. (Kong et al., 2020b) introduce a time-aware wavenet for conditional diffusion modeling. Huang et al. (2022b) and Lam et al. (2022) utilize a noise predictor to learn a tight inference schedule for skipping denoising steps. Despite their success in the high-fidelity generation, few studies have compared these two classes of deep generative models in conditional speech synthesis. In this work, we conduct a comprehensive study to revisit GANs and diffusion models, and empirically demonstrate that: 1) GANs tend to generate high-quality speeches but do not cover the whole distribution, which sacrifice sample diversity for quality and speed; and 2) diffusion models exhibit outperformed sample quality and diversity, buy they typically require a large number of iterative refinements. 
To this end, simultaneously achieving high-quality and diverse speech synthesis at a low computational cost has become an open problem for all neural synthesizers. In this work, we converge advantages from both classes by incorporating GANs and diffusion models, introducing dual-empowered modeling perspectives for high-fidelity speech synthesis: 1) FastDiff 2 (DiffGAN): a **diffusion model** whose denoising process is parametrized by conditional GANs, and the non-Gaussian denoising distribution makes it much more stable to implement the reverse process with large step sizes; and 2) FastDiff 2 (GANDiff): a **generative adversarial network** whose forward process is constructed by multiple denoising diffusion iterations, which exhibits better sample diversity than traditional GANs. Experimental results show that both variants enjoy an effective 4-iter sampling process and demonstrate the outperformed sample quality and diversity. Moreover, we show that both variants generalize well to the mel-spectrogram inversion of unseen speakers. The main contributions of this work are summarized as follows: - We revisit two popular deep generative models (diffusion models and GANs) in conditional speech synthesis, introducing dual-empowered modeling perspectives to converge advantages from both classes. - FastDiff 2 (DiffGAN) removes the common assumption of Gaussian distribution and utilizes conditional GANs to parametrize the multimodal denoising distribution, implementing the reverse process with large step sizes more stably. - FastDiff 2 (GANDiff) breaks the one-shot forward of conditional GANs into several denoising diffusion steps in which each step is relatively simple to model, and thus it exhibits better sample diversity than traditional GANs. - Experimental results show that both enjoy an effective 4-iter sampling process, providing a principled way for high-fidelity and diverse speech synthesis at a low computational cost. ## 2 Background On Speech Synthesis With the development of deep generative models (Ye et al., 2023b; Lv et al., 2023a, 2022; Zhang et al., 2022a,b), speech synthesis technology has made rapid progress up to date. Most models (Wang et al., 2017; Ren et al., 2019; Huang et al.; Cui et al., 2021; Huang et al., 2023b; Ye et al., 2022) first convert input text or phoneme sequence into mel-spectrogram, and then transform it to waveform using a separately trained vocoder (Kumar et al., 2019; Kong et al., 2020a; Huang et al., 2022a). In this work, we focus on designing the second-stage model that efficiently synthesizes high-fidelity waveforms from mel-spectrograms. Neural vocoders require diverse receptive field patterns to catch audio dependencies, and thus previous models (Oord et al., 2016; Kalchbrenner et al., 2018) generate waveforms autoregressively from mel-spectrograms yet suffer from slow inference speed. In recent years, non-autoregressive methods (Prenger et al., 2019; Kumar et al., 2019; Kong et al., 2020b) have been designed to address this issue, which generates samples with extremely fast speed while achieving comparable voice quality with autoregressive models. Below we mainly introduce two popular classes of deep generative models (diffusion models and GANs) for conditional speech synthesis: ## 2.1 Generative Adversarial Networks Generative adversarial networks (GANs) (Kumar et al., 2019; Huang et al., 2021) are one of the most dominant non-autoregressive models in speech synthesis. Morrison et al. 
(2021) propose a chunked autoregressive GAN for conditional waveform synthesis, Lee et al. (2022) utilize large-scale pre-training to improve out-of-distribution quality, and Bak et al. (2022) investigate GAN-based neural vocoders and propose an artifact-free GAN-based neural vocoder. The generator G aims to transform noise z into G(z) that mimics real data, while the discriminator D learns to distinguish the generated samples G(z) from real ones. GANs jointly train a powerful generator G and discriminator D with a min-max game:

$$\min_{G}\max_{D}V(G,D)=\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\log D(\mathbf{x})]+\mathbb{E}_{\mathbf{z}\sim p(\mathbf{z})}[\log(1-D(G(\mathbf{z})))].\tag{1}$$

However, GAN-based models are often difficult to train, collapsing (Creswell et al., 2018) without carefully selected hyperparameters and regularizers, and showing less sample diversity.

## 2.2 Diffusion Probabilistic Models

Denoising diffusion probabilistic models (DDPMs) (Ho et al., 2020) are likelihood-based generative models that have recently advanced the state of the art in most image and audio synthesis tasks. Denoting the data distribution as q(x0), the diffusion process is defined by a fixed Markov chain from data x0 to the latent variable xT, which gradually adds noise to the data in T steps with a pre-defined noise schedule βt:

$$q(\mathbf{x}_{1},\cdots,\mathbf{x}_{T}|\mathbf{x}_{0})=\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}),\qquad q(\mathbf{x}_{t}|\mathbf{x}_{t-1}):=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\,\mathbf{x}_{t-1},\beta_{t}\mathbf{I}).\tag{2}$$

The reverse process recovers samples from Gaussian noise and is parameterized by a shared θ. A guarantee of high sample diversity typically comes at the cost of hundreds of denoising steps:

$$p_{\theta}(\mathbf{x}_{0},\cdots,\mathbf{x}_{T-1}|\mathbf{x}_{T})=\prod_{t=1}^{T}p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}),\qquad p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}):=\mathcal{N}(\mathbf{x}_{t-1};\mu_{\theta}(\mathbf{x}_{t},t),\sigma_{\theta}(\mathbf{x}_{t},t)^{2}\mathbf{I}).\tag{3}$$

It has been demonstrated that diffusion probabilistic models (Dhariwal and Nichol, 2021; Xiao et al., 2021) can learn diverse data distributions in multiple domains, such as images and time series. However, an apparent degradation is observed when the number of reverse iterations is reduced, which makes acceleration challenging.

| Model | MOS (↑) | MCD (↓) | PESQ (↑) | RTF (↓) | NDB (↓) | JS (↓) |
|-----------|-----------|---------|----------|-----------|---------|------------|
| GT | 4.32±0.06 | / | / | / | / | / |
| GAN | 4.08±0.07 | **1.48** | 3.87 | **0.001** | 34 | 0.0016 |
| Diffusion | **4.16±0.09** | 1.62 | **3.92** | 4.70 | **22** | **0.0010** |

Table 1: Comparison of GANs and diffusion models for speech synthesis in terms of quality (MOS, MCD, PESQ), speed (RTF), and diversity (NDB, JS). We crowd-source 5-scale MOS tests via Amazon Mechanical Turk, which are recorded with 95% confidence intervals (CI). We implement the real-time factor (RTF) assessment on a single NVIDIA V100 GPU.

## 3 Preliminary Study

In image generation, superior sample diversity (Dhariwal and Nichol, 2021; Ho et al., 2020; Song et al., 2020) is a crucial reason why diffusion models produce high-quality samples even on challenging datasets. Due to their distinctive advantages in diversity and distribution coverage over GANs, diffusion models have been demonstrated to generate realistic and vivid images, achieving the current state of the art as measured by FID.
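As a concrete reference for Eqs. (2)–(3) above, the sketch below implements the Gaussian forward perturbation and the closed-form posterior q(x_{t-1} | x_t, x_0) that Section 4.2 later reuses once the network predicts the clean signal directly. It is a minimal NumPy illustration with an arbitrary β schedule, not the paper's implementation.

```python
import numpy as np

betas = np.linspace(1e-4, 0.05, 4)       # illustrative 4-step schedule, not the paper's
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)          # alpha_bar_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t, noise):
    # Forward process of Eq. (2) composed over t steps:
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

def q_posterior(x0, xt, t):
    # Mean and variance of the Gaussian posterior q(x_{t-1} | x_t, x_0).
    ab_t = alpha_bars[t]
    ab_prev = alpha_bars[t - 1] if t > 0 else 1.0
    coef_x0 = np.sqrt(ab_prev) * betas[t] / (1.0 - ab_t)
    coef_xt = np.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - ab_t)
    var = (1.0 - ab_prev) / (1.0 - ab_t) * betas[t]
    return coef_x0 * x0 + coef_xt * xt, var
```

Sampling x_{t-1} then amounts to adding Gaussian noise with this mean and variance, which is exactly what FastDiff 2 (DiffGAN) does after its generator outputs x̃0 (Eq. (7) below).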
Despite the comprehensive studies of GANs and diffusion models for image generation, few have compared these two classes of deep generative models in speech synthesis, where an audio signal is different (Oord et al., 2016; Kalchbrenner et al., 2018) for its long-term dependencies, high sampling rate, and strong condition. In this section, we provide an empirical study and investigate the characteristic of both classes with close model capacity in speech. Specifically, we evaluate the performance (including sample quality, speed, and diversity) and explore how distribution coverage impacts sample quality by auditory sensation. ## 3.1 Experimental Setup We prepare 20 unseen samples from the benchmark LJSpeech dataset (Ito and Johnson, 2017) for evaluation. For a fair comparison, we implement the GAN and diffusion model with a shared backbone (Huang et al., 2022b), which comprises three Diffusion-UBlock and DBlock with the up/downsample rate of [8, 8, 4]. Following the common practice (Kumar et al., 2019; Yamamoto et al., 2020), we remove the time embedding in GAN and introduce an auxiliary multi-resolution STFT loss to stabilize adversarial learning. More information has been attached in Appendix D.1. ## 3.2 Visualization We further visualize the marginal distributions P(x|ph) of diffusion models and GANs in Figure 1. Specifically, we 1) randomly sample 100 latent noises z for each testing audio and obtain 2000 utterances in total. 2) split the generated utterances into phoneme-level samples according to the boundary obtained by forced alignment (McAuliffe et al., 2017) and transform them into linear spectrograms; 3) compute the histograms 2and smooth them into probability density functions with kernel density estimation for better visualization. ## 3.3 Analyses Based on the evaluation results presented in Table 1 and the marginal distributions illustrated in Figure 1, we have the following observations: Diffusion models demonstrate better sample diversity at the cost of slow inference speed. A more diverse data distribution could be observed in samples generated by diffusion models, demonstrating a better mode convergence. Diffusion models are better at data sharpness, diversity, and matching marginal label distribution of training data. However, sampling from diffusion models often requires thousands of network iterations, which is significantly slower than GAN and makes their application expensive in practice. GANs trade off diversity for quality and speed. A distinct degradation of mode convergence could be witnessed in GANs, which tend to produce samples but do not cover the whole distribution, indicating a collapsed distribution and less sample diversity. To conclude, GANs sacrifice diversity for quality and speed, while the constrained distribution does not hinder their ability to generate high-fidelity samples. Compared to diffusion models, GANs enjoy high-quality speech synthesis with a minor gap of 0.08 in MOS, while even demonstrating an outperformed performance in MCD evaluation. Regarding inference speed, GANs enjoy an effective one-shot sampling process, significantly reducing the inference time compared with competing diffusion mechanisms. ## 4 Methods After revisiting GAN and diffusion models for speech synthesis, we witness that 1) GANs sacrifice sample diversity for better quality and speed, producing high-quality samples but not covering the whole distribution. 
2) Diffusion models exhibit superior sample quality and diversity, but require iterative refinement at a high computational cost. In this section, we aim to converge advantages from both classes, introducing dual-empowered modeling perspectives for high-fidelity, fast, and diverse speech synthesis.

2We obtain similar results among different frequency bands and choose the 70-th bin for illustration.

## 4.1 Overview

This section presents our proposed models, dually empowered by GANs and diffusion: 1) FastDiff 2 (DiffGAN), a **diffusion model** whose denoising process is parametrized by conditional GANs, and thus the non-Gaussian denoising distribution makes it much more stable to implement the reverse process with large step sizes; and 2) FastDiff 2 (GANDiff), a **generative adversarial network** whose forward process is constructed by multiple denoising diffusion distributions, thus exhibiting better sample diversity than traditional GANs.

## 4.2 Diffusion Mechanism Leveraging GAN

Diffusion models commonly assume that the denoising distribution can be approximated by Gaussian distributions. However, the Gaussian assumption holds only in the infinitesimal limit of small denoising steps, which requires numerous steps in the reverse process. As such, reducing the number of iterative steps causes a distinct degradation in perceptual quality.

In this work, we propose **FastDiff 2 (DiffGAN)**, which leverages conditional GANs to model the denoising distribution q(xt−1|xt); the resulting non-Gaussian, multimodal distribution makes it much more stable to implement the reverse process with large step sizes. Specifically, our forward diffusion process is set up under the main assumption that the number of diffusion iterations is small (T = 4). Training is formulated as matching the conditional GAN generator pθ(xt−1|xt) and q(xt−1|xt) using an adversarial loss that minimizes a divergence Dadv per denoising step. The discriminator Dϕ(xt−1, xt, t) is designed to be diffusion-step-dependent and supervises the generator to produce high-fidelity speech samples. The min-max objective can be expressed as:

$$\min_{\theta}\sum_{t\geq1}\mathbb{E}_{q(t)}\left[D_{\mathrm{adv}}\left(q\left(\mathbf{x}_{t-1}\mid\mathbf{x}_{t}\right)\,\|\,p_{\theta}\left(\mathbf{x}_{t-1}\mid\mathbf{x}_{t}\right)\right)\right],\tag{4}$$

$$\mathcal{L}_{G}=\sum_{t\geq1}\mathbb{E}_{q(\mathbf{x}_{t})}\mathbb{E}_{p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}\left[\left(D_{\phi}\left(\mathbf{x}_{t-1},\mathbf{x}_{t},t\right)-1\right)^{2}\right],\tag{5}$$

$$\mathcal{L}_{D}=\sum_{t\geq1}\mathbb{E}_{q(\mathbf{x}_{t})q(\mathbf{x}_{t-1}|\mathbf{x}_{t})}\left[\left(D_{\phi}\left(\mathbf{x}_{t-1},\mathbf{x}_{t},t\right)-1\right)^{2}\right]+\mathbb{E}_{p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}\left[D_{\phi}\left(\mathbf{x}_{t-1},\mathbf{x}_{t},t\right)^{2}\right],\tag{6}$$

where Dadv depends on the adversarial training setup, and the fake samples from pθ(xt−1|xt) are contrasted against the real ones from q(xt−1|xt).

Reparameterization of the diffusion model. Different from conventional diffusion models, which require hundreds of steps with small βt to estimate the gradient of the data density, recent works (Salimans and Ho, 2022; Liu et al., 2022) have observed that approximating a surrogate variable, e.g., the noiseless target data, gives better quality. We therefore reparameterize the denoising model to directly predict the clean data x0. Free from estimating the gradient of the data density, the model only needs to predict the unperturbed x0 and then add perturbation through the posterior distribution q(xt−1|xt, x0) (formulated in Appendix B), so the reverse transition distribution can be expressed as:

$$p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t},c)=q\left(\mathbf{x}_{t-1}\mid\mathbf{x}_{t},\tilde{\mathbf{x}}_{0}=f_{\theta}(\mathbf{x}_{t}|t,c)\right).\tag{7}$$
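To ground Section 4.2, the following is a minimal, illustrative training step for the DiffGAN branch: the generator predicts x̃0, both real and generated x_{t−1} are drawn from the posterior of Eq. (7), and the least-squares objectives of Eqs. (5)–(6) are applied. The generator, discriminator, and posterior sampler are placeholders, `alpha_bars` is assumed to be a tensor, and the auxiliary STFT term of Eq. (10) is omitted for brevity; this is not the released implementation.

```python
import torch

def diffgan_step(generator, discriminator, q_posterior_sample,
                 x0, mel, alpha_bars, opt_g, opt_d):
    """One illustrative DiffGAN update (cf. Algorithm 1 and Eqs. (5)-(6))."""
    B = x0.size(0)
    t = torch.randint(0, len(alpha_bars), (B,), device=x0.device)
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].view(B, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps          # forward diffusion, Eq. (2)
    x_real = q_posterior_sample(x0, x_t, t)               # x_{t-1} ~ q(x_{t-1} | x_t, x_0)

    # Generator: predict clean speech, then draw x_{t-1} via the posterior of Eq. (7).
    x0_hat = generator(x_t, t, mel)
    x_fake = q_posterior_sample(x0_hat, x_t, t)
    d_fake = discriminator(x_fake, x_t, t)
    loss_g = ((d_fake - 1.0) ** 2).mean()                 # least-squares GAN loss, Eq. (5)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Discriminator: real posterior samples vs. detached generated ones, Eq. (6).
    d_real = discriminator(x_real, x_t, t)
    d_fake = discriminator(x_fake.detach(), x_t, t)
    loss_d = ((d_real - 1.0) ** 2).mean() + (d_fake ** 2).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_g.item(), loss_d.item()
```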
## 4.3 GAN Leveraging Diffusion Mechanism

GAN-based models are often difficult to train, collapsing (Mao et al., 2019) without carefully selected hyperparameters and regularizers, and showing less sample diversity. Besides, these models show a distinct degradation in training stability and cannot generate deterministic values due to the complex data distribution. In this work, we propose **FastDiff 2 (GANDiff)**, which leverages the diffusion mechanism to construct the forward process from multiple denoising iterations, and thus we expect it to exhibit better training stability and sample diversity than traditional one-shot GANs.

To be more specific, we 1) initialize the generator G with a pre-trained diffusion teacher; 2) conduct 4-iteration denoising to generate x̃0 with gradients, which is regarded as the forward process of the generator; and finally 3) let G play an adversarial game with the discriminator D. The min-max objective can be expressed as:

$$\mathcal{L}_{G}=\mathbb{E}_{q_{data}}\left[\left(D_{\phi}\left(\tilde{\mathbf{x}}_{0}\right)-1\right)^{2}\right],\tag{8}$$

$$\mathcal{L}_{D}=\mathbb{E}_{q_{data}}\left[\left(D_{\phi}\left(\tilde{\mathbf{x}}_{0}\right)\right)^{2}+\left(D_{\phi}\left(\mathbf{x}_{0}\right)-1\right)^{2}\right].\tag{9}$$

We empirically find that initialization from the diffusion teacher provides a better understanding of noise schedules and reduces the difficulty of adversarial learning by orders of magnitude. FastDiff 2 (GANDiff) breaks the one-shot forward process of a conditional GAN into several denoising diffusion iterations, each of which is relatively simple to model; thus, it exhibits better sample diversity than traditional one-shot GANs.

## 4.4 Architecture

As illustrated in Figure 2(a), we take a stack of time-aware location-variable convolutions (Huang et al., 2022b) as a shared backbone to efficiently model long-term time dependencies with adaptive conditions. The convolution is conditioned on dynamic variations (diffusion steps and spectrogram fluctuations) in speech, which equips the model with diverse receptive field patterns and promotes robustness. We build the basic architecture of the discriminator upon WaveNet (Oord et al., 2016). It consists of ten layers of non-causal dilated 1-D convolutions with weight normalization. The discriminator is trained to correctly classify the generated sample as fake while classifying the ground truth as real. More details are attached in Appendix C.

## 4.5 Loss Objective

Adversarial GAN Objective. For the generator and discriminator, the training objectives follow Mao et al. (2017), which replaces the binary cross-entropy terms of the original GAN objective (Goodfellow et al., 2014) with least-squares loss functions for non-vanishing gradient flows.

Frequency-domain Reconstruction Objective. To stabilize adversarial learning, we include a frequency-domain sample reconstruction loss by applying the multi-resolution STFT (Short-Time Fourier Transform) operation $STFT(\cdot)$ (given in Appendix F):

$$\mathcal{L}_{\theta}=\mathcal{L}_{STFT}(\tilde{\mathbf{x}}_{0},\mathbf{x}_{0}).\tag{10}$$
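A common form of the multi-resolution STFT loss in Eq. (10) combines a spectral-convergence term and a log-magnitude term over several STFT configurations (in the spirit of Yamamoto et al., 2020). The sketch below is one such formulation; the resolutions are chosen for illustration rather than taken from the paper's Appendix F.

```python
import torch

# Illustrative (fft_size, hop_length, win_length) settings; Appendix F of the
# paper specifies the actual three configurations.
RESOLUTIONS = [(512, 128, 512), (1024, 256, 1024), (2048, 512, 2048)]

def stft_mag(x, n_fft, hop, win):
    window = torch.hann_window(win, device=x.device)
    spec = torch.stft(x, n_fft=n_fft, hop_length=hop, win_length=win,
                      window=window, return_complex=True)
    return spec.abs().clamp(min=1e-7)

def multi_resolution_stft_loss(x_hat, x):
    """Sum of spectral-convergence and log-magnitude L1 terms over all resolutions."""
    loss = 0.0
    for n_fft, hop, win in RESOLUTIONS:
        m_hat, m = stft_mag(x_hat, n_fft, hop, win), stft_mag(x, n_fft, hop, win)
        sc = torch.norm(m - m_hat, p="fro") / torch.norm(m, p="fro")
        mag = torch.nn.functional.l1_loss(torch.log(m_hat), torch.log(m))
        loss = loss + sc + mag
    return loss / len(RESOLUTIONS)
```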
**Frequency-domain Reconstruction Objective.** To stabilize adversarial learning, we include a frequency-domain sample reconstruction loss by applying the multi-resolution STFT (Short-Time Fourier Transform) operation STFT(·) (given in Appendix F):

$$\mathcal{L}_{\theta}=\mathcal{L}_{STFT}(\tilde{\mathbf{x}}_{0},\mathbf{x}_{0})\tag{10}$$

## 4.6 Training Algorithm

The training procedures of the proposed FastDiff 2 (GANDiff) and FastDiff 2 (DiffGAN) are illustrated as follows. The sampling algorithms are provided in Appendix D.2.

## Algorithm 1 Training FastDiff 2 (DiffGAN)

1: **Require**: FastDiff 2 (DiffGAN) generator θ, discriminator ϕ, and mel condition c.
2: **repeat**
3: Sample x0 ∼ qdata, ϵ ∼ N (0, I), and t ∼ Unif({1, · · · , T})
4: Sample xt, xt−1 according to Eq. (2)
5: x˜0 = fθ(xt|t, c)
6: Sample x˜t−1 ∼ q(xt−1|xt, x˜0) according to Eq. (7)
7: Take gradient descent steps on ∇θ(Lθ+LG) according to Eq. (10) and (5)
8: Take gradient descent steps on ∇ϕLD according to Eq. (6)
9: **until** FastDiff 2 (DiffGAN) converged

## Algorithm 2 Training FastDiff 2 (GANDiff)

1: **Require**: Diffusion teacher α with schedule β (T = 4) derived by noise predictor, FastDiff 2 (GANDiff) generator θ, discriminator ϕ, and mel condition c.
2: Initialize θ parameters using teacher α
3: **repeat**
4: **for** t = T, · · · , 1 **do**
5: Sample x˜t−1 ∼ pθ(xt−1|xt, c)
6: **end for**
7: Take gradient descent steps on ∇θ(Lθ+LG) according to Eq. (10) and (8)
8: Take gradient descent steps on ∇ϕLD according to Eq. (9)
9: **until** FastDiff 2 (GANDiff) converged

## 5 Related Works

## 5.1 Diffusion Probabilistic Model

The diffusion probabilistic model is a family of generative models with the capacity to learn complex data distributions, which has recently attracted a lot of research attention in several important domains. Diffusion models generate high-fidelity samples yet inherently suffer from slow sampling speed, and thus multiple methods have been investigated to accelerate the sampling process: Chen et al. (2020) utilize a grid search algorithm for a shorter inference schedule. Liu et al. (2021) introduce a shallow diffusion mechanism that starts denoising at a particular distribution instead of Gaussian white noise. Huang et al. (2022b); Lam et al. (2022) utilize a noise predictor to learn a tight inference schedule for skipping denoising steps. Their designs make diffusion models more applicable to real-world deployment, while the diffusion/denoising mismatch leads to quality degradation when skipping sampling steps. In this work, we avoid this mismatch by incorporating GANs into diffusion models, which makes it much more stable to implement the reverse process with large step sizes.

## 5.2 Generative Adversarial Network

Generative adversarial networks (GANs) (Jang et al., 2021; Kong et al., 2020a) are one of the most dominant deep generative models for speech generation. UnivNet (Jang et al., 2021) has demonstrated its success in capturing different waveform intervals with local-variable convolution. HIFI-GAN (Kong et al., 2020a) proposes multi-receptive field fusion (MRF) to model the periodic patterns of audio.
However, GAN-based models are often difficult to train, collapsing (Creswell et al., 2018) without carefully selected hyperparameters and regularizers, and showing less sample diversity. Differently, we incorporate diffusion models into GANs and break the generation process into several conditional denoising steps, in which each step is relatively simple to model. Thus, we expect our model to exhibit better sample diversity. ## 6 Experiments 6.1 Experimental Setup 6.1.1 Dataset For a fair and reproducible comparison against other competing methods, we use the benchmark | Model | Quality | Speed | Diversity | | | | |--------------------------------|-----------|----------|-------------|---------|--------|-------| | MOS (↑) | STOI (↑) | PESQ (↑) | RTF (↓) | NDB (↓) | JS (↓) | | | GT | 4.32±0.06 | / | / | / | | | | WaveNet (MOL) | 3.95±0.08 | / | / | 85.23 | 0.34 | 0.002 | | WaveGlow | 3.86±0.08 | 0.961 | 3.20 | 0.029 | 0.73 | 0.015 | | HIFI-GAN | 4.06±0.10 | 0.970 | 3.63 | 0.002 | 0.70 | 0.012 | | UnivNet | 4.05±0.09 | 0.969 | 3.54 | 0.002 | 0.71 | 0.010 | | Diffwave (6 steps) | 4.06±0.09 | 0.966 | 3.72 | 0.093 | 0.81 | 0.012 | | WaveGrad (50 steps) | 4.00±0.00 | 0.954 | 3.33 | 0.390 | 0.68 | 0.012 | | FastDiff (4 steps) | 4.09±0.10 | 0.971 | 3.78 | 0.017 | 0.66 | 0.014 | | FastDiff 2 (DiffGAN) (4 steps) | 4.16±0.10 | 0.972 | 3.73 | 0.017 | 0.47 | 0.004 | | FastDiff 2 (GANDiff) (4 steps) | 4.12±0.08 | 0.979 | 3.90 | 0.017 | 0.27 | 0.002 | LJSpeech dataset (Ito and Johnson, 2017) which consists of 13,100 audio clips of 22050 Hz from a female speaker for about 24 hours. To evaluate the model generalization ability over unseen speakers in multi-speaker scenarios, we prepare the VCTK dataset (Yamagishi et al., 2019), which is downsampled to 22050 Hz to match the sampling rate with the LJSpeech dataset. VCTK consists of approximately 44,200 audio clips uttered by 109 native English speakers with various accents. Following the common practice, we conduct preprocessing and extract the spectrogram with the FFT size of 1024, hop size of 256, and window size of 1024 samples. 6.1.2 Model Configurations FastDiff 2 (DiffGAN) and FastDiff 2 (GANDiff) share the same backbone comprising three Diffusion-UBlocks and DBlocks with the up/downsample rate of [8, 8, 4], respectively. The discriminator consists of ten layers of non-causal dilated 1-D convolutions, whose strides are linearly increasing from one to eight except for the first and last layers. Channels and kernel sizes are set to 64 and 5, respectively. Both variants share the same number of denoising steps (T = 4) in both training and inference. The multi-resolution STFT loss is computed by the sum of three different STFT losses described in Appendix F. ## 6.1.3 Training And Evaluation Both models are trained with constant learning rate lr = 2 × 10−4 on 4 NVIDIA V100 GPUs. We use random short audio clips of 25600 samples from each utterance with a batch size of 16 for each GPU. We crowd-source 5-scale MOS tests via Amazon Mechanical Turk to evaluate the audio quality. The MOS scores are recorded with 95% confidence intervals (CI). Raters listen to the test samples randomly and are allowed to evaluate each audio sample once. We adopt additional objective evaluation metrics including STOI (Taal et al., 2010), PESQ (Rix et al., 2001) to test sample quality, and NDB, JS (Richardson and Weiss, 2018) for sample diversity. To evaluate the inference speed, we implement the real-time factor (RTF) assessment on a single NVIDIA V100 GPU. 
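As a small illustration of the preprocessing and speed measurement described above, the sketch below extracts mel-spectrogram conditioning features with the stated STFT settings and computes the real-time factor. It assumes torchaudio is available, that the vocoder is a callable mapping mel features to 22050 Hz waveforms, and that 80 mel bins are used (the number of mel bins is not stated above); the file path in the usage comment is purely illustrative.

```python
import time
import torch
import torchaudio

SAMPLE_RATE = 22050  # LJSpeech, and VCTK after downsampling (Section 6.1.1)

# FFT size 1024, hop size 256, window size 1024, as described above; n_mels=80 is an assumption.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, win_length=1024, hop_length=256, n_mels=80
)

def real_time_factor(vocoder, mel, device="cuda"):
    """RTF = wall-clock synthesis time / duration of the generated audio (lower is faster)."""
    mel = mel.to(device)
    if device.startswith("cuda") and torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        wav = vocoder(mel)  # assumed interface: mel features -> waveform tensor
    if device.startswith("cuda") and torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / (wav.numel() / SAMPLE_RATE)

# usage sketch:
# wav, sr = torchaudio.load("LJ001-0001.wav")   # 22050 Hz mono clip (illustrative path)
# mel = mel_transform(wav)                       # (1, 80, frames)
# print(real_time_factor(my_vocoder, mel))
```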
More information about objective and subjective evaluation is attached in Appendix E. ## 6.2 Comparsion With Other Models We compared our proposed models in audio quality and sampling speed with competing models, including 1) WaveNet (Oord et al., 2016), the autoregressive generative model for raw audio. 2) WaveGlow (Prenger et al., 2019), the parallel flow-based model. 3) HIFI-GAN V1 (Kong et al., 2020a) and UnivNet (Jang et al., 2021), the most popular GANbased models. 4) Diffwave (Kong et al., 2020b), WaveGrad (Chen et al., 2020), and FastDiff (Huang et al., 2022b), three diffusion probabilistic models that generate high-fidelity speech samples. For easy comparison, the results are compiled and presented in Table 2, and we have the following observations: For our GAN-empowered diffusion model, FastDiff 2 (DiffGAN) has achieved the highest MOS compared with the baseline models, with a gap of 0.16 compared to the ground truth audio. Regarding inference speed, it enjoys an effective 4-iter sampling process and enables a speed of 58x faster than real-time on a single NVIDIA V100 GPU without engineered kernels. FastDiff 2 (DiffGAN) provides a principled way to accelerate DDPMs in both training and inference, avoiding quality degradation caused by a training-inference mismatch in baseline diffusion models (FastDiff, WaveGrad, Diffwave). It is worth mentioning that FastDiff 2 (DiffGAN) maintains the outperformed sample diversity inherited in DDPMs. For diffusion-empowered GANs, FastDiff 2 (GANDiff) also demonstrates high-quality speech synthesis with the MOS of 4.12. For objective evaluation, it further presents the new state-of-the-art results in PESQ and STOI, superior to all baseline models. Moreover, we can see that it achieves a higher JSD and NDB compared to baseline GAN models. It breaks the generation process into several conditional denoising diffusion steps, in which each step is relatively simple to model. Thus, we expect our model to exhibit better mode coverage and sample diversity than traditional GANs (HIFIGAN, UnivNet). To conclude, by incorporating GAN and diffusion models, the dual-empowered speech models converge advantages from both classes and achieve high-quality and diverse speech synthesis at a low computational cost. ## 6.3 Ablation Study We conduct ablation studies to demonstrate the effectiveness of several designs, including the diffusion reparameterization and frequency-domain objective in dual-empowered speech models. The results of both subjective and objective evaluation have been presented in Table 3, and we have the following observations: 1) Replacing the diffusion reparameterization design and parameterizing the denoising model by predicting the Gaussian noise ϵ has witnessed a distinct degradation in perceptual quality. Specifically, FastDiff 2 (DiffGAN) directly predicts clean data to avoid significant degradation when reducing reverse iterations. 2) Removing the sample reconstruction loss objective results in blurry predictions with distinct artifact (Kumar et al., 2019) in both variants, demonstrating the effectiveness of the multi-resolution STFT regularization in stabilizing adversarial learning, which is helpful to improve the quality of generated waveforms with a MOS gain. ## 6.4 Generalization To Unseen Speakers We use 40 randomly selected utterances of 5 unseen speakers in the VCTK dataset that are not used in training for out-of-distribution testing. 
| Model | MOS (↑) | STOI (↑) | PESQ (↑) |
|---|---|---|---|
| GT | 4.32±0.06 | / | / |
| FastDiff 2 (DiffGAN) | 4.16±0.10 | **0.972** | 3.73 |
| w/o DR | 2.40±0.08 | 0.922 | 3.19 |
| w/o RO | 2.40±0.08 | 0.922 | 3.19 |
| FastDiff 2 (GANDiff) | 4.12±0.08 | **0.979** | **3.90** |
| w/o RO | 2.71±0.07 | 0.954 | 3.15 |

Table 3: Ablation study results. Comparison of the effect of each component on quality. DR: diffusion reparameterization, RO: reconstruction objective.

| Model | MOS (↑) | STOI (↑) | PESQ (↑) |
|---|---|---|---|
| GT | 4.30±0.06 | / | / |
| WaveNet (MOL) | 3.80±0.07 | / | / |
| WaveGlow | 3.65±0.07 | 0.870 | 3.10 |
| HIFI-GAN | 3.76±0.09 | 0.862 | 3.14 |
| UnivNet | 3.79±0.08 | 0.887 | 3.21 |
| Diffwave (6) | 3.80±0.09 | 0.873 | 3.22 |
| WaveGrad (50) | 3.73±0.07 | 0.856 | 3.15 |
| FastDiff (4) | 3.84±0.08 | 0.894 | 3.25 |
| FastDiff 2 (DiffGAN) (4) | **3.96±0.07** | 0.910 | 3.28 |
| FastDiff 2 (GANDiff) (4) | 3.92±0.08 | **0.912** | **3.57** |

Table 4: Comparison with other neural vocoders of synthesized utterances for unseen speakers.

Table 4 shows the experimental results for the mel-spectrogram inversion of the samples from unseen speakers: We notice that both variants produce high-fidelity samples and outperform baseline models. They universally generate audio with strong robustness for entirely new speakers outside the training set.

## 7 Conclusion

In this work, through revisiting two popular classes (diffusion models and GANs) of deep generative models, we observed that 1) GANs tended to generate samples but did not cover the whole distribution, and 2) diffusion models exhibited superior sample quality and diversity while requiring iterative refinement at a high computational cost. To achieve high-quality, fast and diverse speech synthesis, we converged advantages by incorporating GANs and diffusion models, introducing dual-empowered modeling perspectives: 1) FastDiff 2 (DiffGAN), a diffusion model whose denoising process was parametrized by conditional GANs, where the non-Gaussian denoising distribution made it much more stable to implement the reverse process with large step sizes; and 2) FastDiff 2 (GANDiff), a generative adversarial network whose forward process was constructed by multiple denoising diffusion iterations, which exhibited better mode coverage and sample diversity. Experimental results showed that both variants enjoyed an efficient 4-step sampling process and demonstrated superior sample quality and diversity. We envisage that our work will serve as a basis for future speech synthesis studies.

## 8 Limitations And Potential Risks

Adversarial learning still requires a careful selection of hyperparameters, otherwise the training procedure can become unstable. Besides, training speech diffusion probabilistic models typically requires more computational resources, and degradation can be observed when the amount of training data decreases. Our proposed model lowers the requirements for high-quality speech synthesis, which may cause unemployment for people with related occupations, such as broadcasters and radio hosts. In addition, there is the potential for harm from non-consensual voice cloning or the generation of fake media, and the voices of the speakers in the recordings might be used more widely than they expect.

## Acknowledgements

This work was supported in part by the National Key R&D Program of China under Grant No.2022ZD0162000, National Natural Science Foundation of China under Grant No.62222211, Grant No.61836002 and Grant No.62072397.

## References

Taejun Bak, Junmo Lee, Hanbin Bae, Jinhyeok Yang, Jae-Sung Bae, and Young-Sun Joo. 2022. Avocodo: Generative adversarial network for artifact-free vocoder. *arXiv preprint arXiv:2206.13404*.
Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. 2020. Wavegrad: Estimating gradients for waveform generation. In *Proc. of ICLR*. Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, and Anil A Bharath. 2018. Generative adversarial networks: An overview. IEEE Signal Processing Magazine. Chenye Cui, Yi Ren, Jinglin Liu, Feiyang Chen, Rongjie Huang, Ming Lei, and Zhou Zhao. 2021. Emovie: A mandarin emotion speech dataset with a simple emotional text-to-speech model. *arXiv preprint* arXiv:2106.09317. Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis. In *Proc.* of NeurIPS, volume 34. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. Advances in neural information processing systems, 27. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In *Proc. of* NeurIPS. Huang. 2022. Fastdiff. Rongjie Huang, Feiyang Chen, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. 2021. Multi-singer: Fast multi-singer singing voice vocoder with a largescale corpus. In *Proc. of ACM MM*. Rongjie Huang, Chenye Cui, Feiyang Chen, Yi Ren, Jinglin Liu, Zhou Zhao, Baoxing Huai, and Zhefeng Wang. 2022a. Singgan: Generative adversarial network for high-fidelity singing voice generation. In Proceedings of the 30th ACM International Conference on Multimedia, pages 2525–2535. Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. 2023a. Make-an-audio: Text-to-audio generation with prompt-enhanced diffusion models. *arXiv preprint arXiv:2301.12661*. Rongjie Huang, Max WY Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, and Zhou Zhao. 2022b. Fastdiff: A fast conditional diffusion model for high-quality speech synthesis. *arXiv preprint arXiv:2204.09934*. Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, et al. 2023b. Audiogpt: Understanding and generating speech, music, sound, and talking head. *arXiv* preprint arXiv:2304.12995. Rongjie Huang, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. Generspeech: Towards style transfer for generalizable out-of-domain text-to-speech. In Advances in Neural Information Processing Systems. Rongjie Huang, Zhou Zhao, Huadai Liu, Jinglin Liu, Chenye Cui, and Yi Ren. 2022c. Prodiff: Progressive fast diffusion model for high-quality text-to-speech. In *Proceedings of the 30th ACM International Conference on Multimedia*, pages 2595–2605. Keith Ito and Linda Johnson. 2017. The lj speech dataset. https://keithito.com/ LJ-Speech-Dataset/. Accessed: 2022-01-01. ivanvovk. 2020. Wavegrad. Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kim, and Juntae Kim. 2021. Univnet: A neural vocoder with multi-resolution spectrogram discriminators for high-fidelity waveform generation. In *Proc. of InterSpeech*. Ziyue Jiang, Yi Ren, Ming Lei, and Zhou Zhao. 2021. Fedspeech: Federated text-to-speech with continual learning. *arXiv preprint arXiv:2110.07216*. Ziyue Jiang, Zhe Su, Zhou Zhao, Qian Yang, Yi Ren, and Jinglin Liu. 2022. Dict-tts: Learning to pronounce with prior dictionary knowledge for text-tospeech. Advances in Neural Information Processing Systems, 35:11960–11974. 
Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aaron Oord, Sander Dieleman, and Koray Kavukcuoglu. 2018. Efficient neural audio synthesis. In *International Conference on Machine* Learning, pages 2410–2419. PMLR. Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020a. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. *Proc. of* NeurIPS, 33:17022–17033. Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. 2020b. Diffwave: A versatile diffusion model for audio synthesis. In *Proc. of ICLR*. Robert Kubichek. 1993. Mel-cepstral distance measure for objective speech quality assessment. In *Proceedings of IEEE Pacific Rim Conference on Communications Computers and Signal Processing*. Kundan Kumar, Rithesh Kumar, Thibault de Boissiere, Lucas Gestin, Wei Zhen Teoh, Jose Sotelo, Alexandre de Brébisson, Yoshua Bengio, and Aaron C Courville. 2019. Melgan: Generative adversarial networks for conditional waveform synthesis. *Advances in neural* information processing systems, 32. Max WY Lam, Jun Wang, Dan Su, and Dong Yu. 2022. Bddm: Bilateral denoising diffusion models for fast and high-quality speech synthesis. In *Proc. of ICLR*. Sang-gil Lee, Wei Ping, Boris Ginsburg, Bryan Catanzaro, and Sungroh Yoon. 2022. Bigvgan: A universal neural vocoder with large-scale training. arXiv preprint arXiv:2206.04658. Linjun Li, Tao Jin, Wang Lin, Hao Jiang, Wenwen Pan, Jian Wang, Shuwen Xiao, Yan Xia, Weihao Jiang, and Zhou Zhao. 2023. Multi-granularity relational attention network for audio-visual question answering. IEEE Transactions on Circuits and Systems for Video Technology, pages 1–1. Jinglin Liu, Chengxi Li, Yi Ren, Feiyang Chen, Peng Liu, and Zhou Zhao. 2021. Diffsinger: Singing voice synthesis via shallow diffusion mechanism. *arXiv* preprint arXiv:2105.02446, 2. Songxiang Liu, Dan Su, and Dong Yu. 2022. Diffgan-tts: High-fidelity and efficient text-to-speech with denoising diffusion gans. arXiv preprint arXiv:2201.11972. Zheqi Lv, Zhengyu Chen, Shengyu Zhang, Kun Kuang, Wenqiao Zhang, Mengze Li, Beng Chin Ooi, and Fei Wu. 2023a. Ideal: Toward high-efficiency devicecloud collaborative and dynamic recommendation system. *arXiv preprint arXiv:2302.07335*. Zheqi Lv, Feng Wang, Shengyu Zhang, Kun Kuang, Hongxia Yang, and Fei Wu. 2022. Personalizing intervened network for long-tailed sequential user behavior modeling. *arXiv preprint arXiv:2208.09130*. Zheqi Lv, Wenqiao Zhang, Shengyu Zhang, Kun Kuang, Feng Wang, Yongwei Wang, Zhengyu Chen, Tao Shen, Hongxia Yang, Beng Chin Ooi, and Fei Wu. 2023b. Duet: A tuning-free device-cloud collaborative parameters generation framework for efficient device model generalization. In Proceedings of the ACM Web Conference 2023. Qi Mao, Hsin-Ying Lee, Hung-Yu Tseng, Siwei Ma, and Ming-Hsuan Yang. 2019. Mode seeking generative adversarial networks for diverse image synthesis. In Proc. of CVPR. Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. 2017. Least squares generative adversarial networks. In *Proceedings of the IEEE international conference on computer vision*, pages 2794–2802. Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. 2017. Montreal forced aligner: Trainable text-speech alignment using kaldi. In *Interspeech*, volume 2017, pages 498–502. Max Morrison, Rithesh Kumar, Kundan Kumar, Prem Seetharaman, Aaron Courville, and Yoshua Bengio. 2021. 
Chunked autoregressive gan for conditional waveform synthesis. *arXiv preprint* arXiv:2110.10139. Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. *arXiv preprint arXiv:1609.03499*. philsyn. 2021. Diffwave-vocoder. Ryan Prenger, Rafael Valle, and Bryan Catanzaro. 2019. Waveglow: A flow-based generative network for speech synthesis. In *Proc. of ICASSP*. Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. Fastspeech: fast, robust and controllable text to speech. In *Proc. of* ICONIP. Eitan Richardson and Yair Weiss. 2018. On gans and gmms. In *Proc. of ICONIP*. Antony W Rix, John G Beerends, Michael P Hollier, and Andries P Hekstra. 2001. Perceptual evaluation of speech quality (pesq)-a new method for speech quality assessment of telephone networks and codecs. In *Proc. of ICASSP*. Tim Salimans and Jonathan Ho. 2022. Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512. Jiaming Song, Chenlin Meng, and Stefano Ermon. 2020. Denoising diffusion implicit models. In Proc. of ICLR. Cees H Taal, Richard C Hendriks, Richard Heusdens, and Jesper Jensen. 2010. A short-time objective intelligibility measure for time-frequency weighted noisy speech. In *Proc. of ICASSP*. Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al. 2017. Tacotron: Towards end-to-end speech synthesis. *arXiv preprint arXiv:1703.10135*. Zhisheng Xiao, Karsten Kreis, and Arash Vahdat. 2021. Tackling the generative learning trilemma with denoising diffusion gans. *arXiv preprint* arXiv:2112.07804. Junichi Yamagishi, Christophe Veaux, Kirsten MacDonald, et al. 2019. Cstr vctk corpus: English multispeaker corpus for cstr voice cloning toolkit (version 0.92). *University of Edinburgh. The Centre for* Speech Technology Research (CSTR). Ryuichi Yamamoto, Eunwoo Song, and Jae-Min Kim. 2020. Parallel wavegan: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram. In *Proc. of ICASSP*. Zhenhui Ye, Rongjie Huang, Yi Ren, Ziyue Jiang, Jinglin Liu, Jinzheng He, Xiang Yin, and Zhou Zhao. 2023a. Clapspeech: Learning prosody from text context with contrastive language-audio pre-training. 2305.10763. Zhenhui Ye, Ziyue Jiang, Yi Ren, Jinglin Liu, JinZheng He, and Zhou Zhao. 2023b. Geneface: Generalized and high-fidelity audio-driven 3d talking face synthesis. *arXiv preprint arXiv:2301.13430*. Zhenhui Ye, Zhou Zhao, Yi Ren, and Fei Wu. 2022. Syntaspeech: syntax-aware generative adversarial text-to-speech. *arXiv preprint arXiv:2204.11792*. Jie Zhang, Chen Chen, Bo Li, Lingjuan Lyu, Shuang Wu, Jianghe Xu, Shouhong Ding, and Chao Wu. 2021. A practical data-free approach to one-shot federated learning with heterogeneity. *arXiv preprint* arXiv:2112.12371. Jie Zhang, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Lei Zhang, and Chao Wu. 2022a. Towards efficient data free black-box adversarial attack. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15115–15125. Jie Zhang, Zhiqi Li, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, and Chao Wu. 2022b. Federated learning with label distribution skew via logits calibration. In International Conference on Machine Learning, pages 26311–26329. PMLR. Jie Zhang, Lei Zhang, Gang Li, and Chao Wu. 2022c. 
Adversarial examples for good: Adversarial examples guided imbalanced learning. *arXiv preprint* arXiv:2201.12356. ## A Detailed Formulation Of Ddpm We define the data distribution as q(x0). The diffusion process is defined by a fixed Markov chain from data x0 to the latent variable xT : $$q(\mathbf{x}_{1},\cdots,\mathbf{x}_{T}|\mathbf{x}_{0})=\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}),$$ For a small positive constant βt, a small Gaussian noise is added from xtto the distribution of xt−1 under the function of q(xt|xt−1). The whole process gradually converts data x0 to whitened latents xT according to the fixed noise schedule β1, · · · , βT , where ϵ ∼ N (0, I): $$q(\mathbf{x}_{t}|\mathbf{x}_{t-1}):={\mathcal{N}}(\mathbf{x}_{t};{\sqrt{1-\beta_{t}}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I})$$ Efficient training is optimizing a random term of t with stochastic gradient descent: $${\mathcal{L}}_{\theta}=\left\|\epsilon_{\theta}\left(\alpha_{t}\mathbf{x}_{0}+{\sqrt{1-\alpha_{t}^{2}}}\,\epsilon\right)-\epsilon\right\|_{2}^{2}$$ $$(12)$$ $$(13)$$ Unlike the diffusion process, the reverse process is to recover samples from Gaussian noises. The reverse process is a Markov chain from xT to x0 parameterized by shared θ: $$p_{\theta}(\mathbf{x}_{0},\cdots,\mathbf{x}_{T-1}|\mathbf{x}_{T})=\prod_{t=1}^{T}p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}),\tag{14}$$ where each iteration eliminates the Gaussian noise. added in the diffusion process: pθ(xt−1|xt) := N (xt−1; µθ(xt, t), σθ(xt, t) 2I) (15) ## B Diffusion Posterior Distribution Firstly we compute the corresponding constants respective to diffusion and reverse process: $$\alpha_{t}=\prod_{i=1}^{t}{\sqrt{1-\beta_{i}}}\quad\sigma_{t}={\sqrt{1-\alpha_{t}^{2}}}$$ $\text{size in the difffoli}$. t(16) The Gaussian posterior in the diffusion process is defined through the Markov chain, where each iteration adds Gaussian noise. Consider the forward diffusion process in Eq. 12, which we repeat here: $q(\mathbf{z}_{1},\cdots,\mathbf{z}_{T}|\mathbf{z}_{0})=\prod_{t=1}^{T}q(\mathbf{z}_{t}|\mathbf{z}_{t-1})$, $q(\mathbf{z}_{t}|\mathbf{z}_{t-1})=\mathcal{N}(\mathbf{z}_{t};\sqrt{1-\beta_{t}\mathbf{z}_{t-1}},\beta_{t}\mathbf{I})$ $$(17)$$ We emphasize the property observed by (Ho et al., 2020), the diffusion process can be computed in a closed form: $$q(\mathbf{x}_{t}|\mathbf{x}_{0})={\mathcal{N}}(\mathbf{x}_{t};\alpha_{t}\mathbf{x}_{0},\sigma_{t}\mathbf{I})$$ ## C Model Hyperparameters C.1 Architectures As illustrated in Table 5, we list the hyperparameters of dual-empowered speech models. $$(\mathrm{l1l})$$ Table 5: Architecture hyperparameters of FastDiff 2 (DiffGAN)/FastDiff 2 (GANDiff). | Module | Parameter | |--------------------------------------------|-------------| | DBlock Hidden Channels | 32 | | DBlock Downsample Ratios | [4, 8, 8] | | Diffusion UBlock Hidden Channels | 32 | | Diffusion UBlock Upsample Ratios | [8, 8, 4] | | Time-aware LVC layers Each Block | 4 | | Time-aware LVC layers Kernel Size | 256 | | Diffusion Kernel Predictor Hidden Channels | 64 | | Diffusion Kernel Predictor Kernel Size | 3 | | Diffusion Embedding Input Channels | 128 | | Diffusion Embedding Output Channels | 512 | | Use Weight Norm | True | | Total Number of Parameters | 15 M | ## C.2 Diffusion Hyperparameters $$(14)$$ We list the diffusion hyper-parameters in Table 6. Table 6: Diffusion hyperparameters. 
## Diffusion Hyperparameter FastDiff 2 (GANDiff): β = [3.6701e −7, 1.7032e −5, 7.908e −4, 7.6146e −1] FastDiff 2 (DiffGAN): β = Linear(1 × 10−4, 0.1, 4) $$\rangle^{2}{}_{1}$$ $\theta$. ## D Training And Inference Details D.1 Preliminary Study $$(16)$$ Both models are trained with constant learning rate lr = 2 × 10−4 on 4 NVIDIA V100 GPUs. We conduct preprocessing and extract the spectrogram with the FFT size of 1024, hop size of 256, and window size of 1024 samples. For audio quality, we adopt objective evaluation metrics including MCD (Kubichek, 1993) and PESQ (Rix et al., 2001). We crowd-sourced 5scale MOS tests via Amazon Mechanical Turk. Raters listened to the test samples randomly, where they were allowed to evaluate each audio sample once. To evaluate the sampling speed, we implement the real-time factor (RTF) assessment on a single NVIDIA V100 GPU. NDB and JSD metrics are employed to explore the diversity of generated mel-spectrograms. $$(18)$$ Algorithm 3 Sampling with FastDiff 2 (DiffGAN) 1: **Input**: FastDiff 2 (DiffGAN) generator θ, and mel condition c. 2: Sample xT ∼ N (0, I) 3: for t = T, *· · ·* , 1 do 4: Sample xt−1 ∼ pθ(xt−1|xt) = q(xt−1|xt, x˜0 = fθ(xt|*t, c*)) 5: **end for** 6: **return** x0 Algorithm 4 Sampling with FastDiff 2 (GANDiff) ## D.2 Sampling Algorithm | 1: Input: FastDiff 2 (GANDiff) generator θ, and mel condition c. 2: Sample xT ∼ N (0, I) 3: for t = T, · · · , 1 do 4: Sample xt−1 ∼ pθ(xt−1|xt) 5: end for 6: return x0 | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ## E Evaluation Matrix E.1 Objective Evaluation Perceptual evaluation of speech quality (PESQ) (Rix et al., 2001) and The shorttime objective intelligibility (STOI) (Taal et al., 2010) assesses the denoising quality for speech enhancement. Number of Statistically-Different Bins (NDB) and Jensen-Shannon divergence (JSD). They measure diversity by 1) clustering the training data into several clusters, and 2) measuring how well the generated samples fit into those clusters. Mel-cepstral distortion (MCD) (Kubichek, 1993) measures the spectral distance between the synthesized and reference mel-spectrum features. ## E.2 Subjective Evaluation All our Mean Opinion Score (MOS) tests are crowd-sourced and conducted by native speakers. The scoring criteria have been included in Table 7 for completeness. The samples are presented and rated one at a time by the testers, each tester is asked to evaluate the subjective naturalness of a sentence on a 1-5 Likert scale. The screenshots of instructions for testers are shown in Figure 3. We paid $8 to participants hourly and totally spent about $600 on participant compensation. ## F Multi-Resolution Stft Loss Details By applying the multi-resolution short time fourier transform, we respectively obtain the spectral convergence (L*stf t*−sc) and log STFT magnitude (Lstf t−mag) of L*ST F T* in frequency domain: $$\mathcal{L}_{stft\_sc}=\frac{\|\text{STFT}(\mathbf{x}_0)-\text{STFT}(\bar{\mathbf{x}}_0)\|_F}{\|\text{STFT}(\mathbf{x}_0)\|_F}\tag{19}$$ Lstft−mag = 1 N ∥ log(STFT(x0)) − log(STFT(x˜0))∥1, (20) where *∥ · ∥*F and *∥ · ∥*1 denote the Frobenius and L1 norms. 
N denotes the number of elements in the magnitude; The final multi-resolution STFT loss is the sum of M losses with different analysis parameters(i.e., FFT size, window size, and hop size), and we set M = 3: $${\cal L}_{STFT}=\frac{1}{M}\sum_{m=1}^{M}\left({\cal L}_{stft\_sc}^{(m)}+{\cal L}_{stft\_mag}^{(m)}\right)\tag{21}$$ ![13_image_0.png](13_image_0.png) 1 Bad Very annoying and objectionable dist. 2 Poor Annoying but not objectionable dist. 3 Fair Perceptible and slightly annoying dist 4 Good Just perceptible but not annoying dist. ![13_image_1.png](13_image_1.png) | FFT size | Frame shift | Window size | |------------|---------------|---------------| | 1024 | 600 | 120 | | 2048 | 120 | 250 | | 512 | 240 | 50 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? See Section 8. ✓ A2. Did you discuss any potential risks of your work? See Section 8. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** See Section 6. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? See section 6.1.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? See section 6.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? See section 6.1.3 ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? See section 6.1.1 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** See section 6.1.3 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? See section 6.1.3 and section E in Appendix. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? See section 6.1.3 and section E in Appendix. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? See section 6.1.3 and section E in Appendix. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
kew-sennrich-2023-uncovering
Uncovering Hidden Consequences of Pre-training Objectives in Sequence-to-Sequence Models
https://aclanthology.org/2023.findings-acl.438
Some variants of self-supervised denoising objectives for pre-training encoder-decoder language models have been reported to have a negligible impact on downstream performance. Yet the design of these pre-training objectives leads to behavioural differences that can be uncovered with specific manipulations. We reproduce a recently proposed zero-shot control method and find that it is only successful on a subset of models. To understand what causes the difference in its effectiveness, we perform a set of controlled experiments, varying only the pre-training objective, and find unexpected interactions between the pre-training method and downstream controllability of models after fine-tuning. Our results show that different pre-training objectives have consequences that may not be visible in standard downstream evaluation, but which should be taken into account when developing models with controllability in mind.
# Uncovering Hidden Consequences Of Pre-Training Objectives In Sequence-To-Sequence Models Tannon Kew1and **Rico Sennrich**1,2 1Department of Computational Linguistics, University of Zurich 2School of Informatics, University of Edinburgh {kew,sennrich}@cl.uzh.ch ## Abstract ![0_Image_0.Png](0_Image_0.Png) Some variants of self-supervised denoising objectives for pre-training encoder-decoder language models have been reported to have a negligible impact on downstream performance. Yet the design of these pre-training objectives leads to behavioural differences that can be uncovered with specific manipulations. We reproduce a recently proposed zero-shot control method and find that it is only successful on a subset of models. To understand what causes the difference in its effectiveness, we perform a set of controlled experiments, varying only the pre-training objective, and find unexpected interactions between the pre-training method and downstream controllability of models after fine-tuning. Our results show that different pretraining objectives have consequences that may not be visible in standard downstream evaluation, but which should be taken into account when developing models with controllability in mind. ## 1 Introduction Self-supervised denoising objectives have proven extremely powerful for deriving transformer-based pre-trained language models (PLMs) given massive amounts of unlabelled data. These objectives are typically agnostic towards specific downstream tasks and thus do not resemble real-world use cases. Instead, they enable the model to learn optimal parameter initialisations for subsequent fine-tuning on various downstream tasks (Dai and Le, 2015; Erhan et al., 2010). During fine-tuning, the PLM quickly learns new tasks based on the supervised signal provided, rendering pre-training task largely redundant. Previous work has found performance differences on downstream tasks to be negligible given various denoising pre-training objectives (Lewis et al., 2020; Alajrami and Aletras, 2022).1 As 1We confirm these findings with our own models in Appendix C. Figure 1: The effect of CtxAug for inquisitive dialogue modelling with off-the-shelf models. In contrast to BART, T5 models exhibit a minimal response to the context code. T5-small-LM refers to the LM-adapted model from Lester et al. (2021a). a result, the choice of which method to apply in pre-training has largely been based on factors such as efficiency (e.g. Raffel et al., 2020; Song et al., 2019). However, given equally well performing pre-training objectives, we find that encoderdecoder PLMs respond drastically differently to post-hoc manipulations after fine-tuning. Specifically, we investigate the use of context augmentation (CtxAug), proposed by Hazarika et al. (2022), as a zero-shot control method designed to steer a fine-tuned encoder-decoder model towards generating outputs with particular attributes. While they introduce this as a general control mechanism for encoder-decoder transformers, our experiments with BART (Lewis et al., 2020) and two variants of T5 (Raffel et al., 2020; Lester et al., 2021a) show that controllability via context augmentation is predominantly exhibited by BART (Figure 1). Given this observation, we hypothesise that the success of this zero-shot control method may be highly dependent on a model's pre-training objective. To investigate this hypothesis, we set out to identify exactly what aspects of BART's pretraining allow for CtxAug to work. 
Our findings suggest that fine-tuned models are capable of exhibiting *vestigial behaviours*2 which are endowed by their pre-training objectives and allow for interesting and useful post-hoc manipulation methods in downstream applications. ## 2 Background 2.1 Seq2Seq Pre-Training Objectives To jointly pre-train an encoder-decoder transformer (Vaswani et al., 2017), seq2seq pre-training objectives typically corrupt an input sequence (*noise*) before feeding it to the model and then train the model to recover the original sequence (*denoise*). Usually, this involves span-based masked language modelling (MLM) (Joshi et al., 2020; Devlin et al., 2019a) combined with a standard language modelling objective involving left-to-right prediction (Bengio et al., 2003; Radford et al., 2018). However, popular denoising objectives differ in terms of the extent of corruption applied and the amount that needs to be recovered. For instance, **MASS** (Song et al., 2019) applies MLM to a *single*, randomly selected span of contiguous source tokens and predicts only the noised tokens given their positional information. T5 (Raffel et al., 2020) randomly selects *multiple* token spans and replaces *each span* with a single unique 'sentinel' mask token. The target sequence then corresponds to a stilted sequence consisting of the masked input spans separated by their respective sentinel tokens. **BART** (Lewis et al., 2020) applies span-based MLM in conjunction with sentence permutation. In stark contrast to the previous approaches, BART is tasked with reconstructing the input sequence *in full* and not just the masked spans, which we refer to as partial reconstruction. ## 2.2 Context Augmentation For Zero-Shot Control Despite strong generalisation abilities of fine-tuned PLMs, controlling for desirable attributes in generated text remains an active area of research (e.g. Dathathri et al., 2019; Liu et al., 2021; Yang and Klein, 2021; Krause et al., 2021; Pascual et al., 2021) Recently, Hazarika et al. (2022) pro-2While there is a substantial body on catastrophic forgetting, where information relevant for a learned task is lost upon training on a new task (McCloskey and Cohen, 1989; Goodfellow et al., 2014), we use *vestigial behaviour* to refer to observable properties that remain after fine-tuning and can be traced back to earlier (pre-)training tasks, in analogy to vestigial structures in biology. posed CtxAug as a means of controlling fine-tuned encoder-decoder LMs in a zero-shot setting. Given an encoder-decoder transformer trained on a downstream task, CtxAug aims to provide additional conditioning context, not included in the original source sequence, to guide the model generation towards a particular attribute. CtxAug encodes a set of phrases or sentences that exhibit a target attribute into an averaged representation C, which is concatenated with the hidden representation of the original source sequence: C'encpxq. The decoder can then attend to this augmented input context at inference time without any updates to the model's parameters. To ensure that the model does not simply disregard the context code, the authors also propose to manually re-weight the model's cross attention with an attention biasing parameter. In experiments on dialogue modelling, Hazarika et al. (2022) demonstrate that CtxAug can be used to encourage more inquisitive and positive sentiment responses. 
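To make the mechanism concrete, here is a minimal sketch of CtxAug at inference time with a Hugging Face encoder-decoder model (attention biasing is omitted). This is an illustrative reading, not Hazarika et al. (2022)'s released implementation: the checkpoint name is only an example, averaging the control phrases' encoder states position-wise after padding is an assumption about how the code C is formed, and it assumes a recent version of Transformers in which `generate` accepts precomputed `encoder_outputs`.

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

model_name = "facebook/bart-base"  # example checkpoint; in practice a fine-tuned dialogue model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name).eval()

def context_code(phrases):
    """Encode the control phrases and average their encoder states into a single code C."""
    batch = tokenizer(phrases, return_tensors="pt", padding=True)
    with torch.no_grad():
        states = model.get_encoder()(**batch).last_hidden_state   # (num_phrases, len, d)
    return states.mean(dim=0, keepdim=True)                        # (1, len, d)

def generate_with_ctx_aug(source, phrases, **gen_kwargs):
    inputs = tokenizer(source, return_tensors="pt")
    with torch.no_grad():
        enc_x = model.get_encoder()(**inputs).last_hidden_state    # enc(x)
    C = context_code(phrases)

    # concatenate [C ; enc(x)] and extend the attention mask so the decoder can
    # attend to the control code at inference time, without any parameter updates
    augmented = BaseModelOutput(last_hidden_state=torch.cat([C, enc_x], dim=1))
    attn = torch.cat([torch.ones(1, C.size(1), dtype=torch.long), inputs.attention_mask], dim=1)
    out = model.generate(encoder_outputs=augmented, attention_mask=attn, **gen_kwargs)
    return tokenizer.batch_decode(out, skip_special_tokens=True)[0]

# e.g. nudging a fine-tuned dialogue model towards inquisitive responses:
# generate_with_ctx_aug("I just visited the Eiffel Tower.",
#                       ["Have you ever been there?", "What did you think of it?"],
#                       num_beams=4, max_new_tokens=40)
```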
## 3 Experimental Setup

## 3.1 Pre-Training

To investigate the effect of different encoder-decoder pre-training objectives on CtxAug, we use a controlled setup on scaled-down models and datasets, where only the pre-training objective differs. Specifically, we compare the following objectives (depicted in Table 3):

i) **MLM+PS**: span-based MLM combined with sentence permutation (i.e. BART's default pre-training objective);
ii) **MLM**: span-based MLM alone;
iii) **PS**: sentence permutation alone;
iv) **SIPR-MS**: MASS-style span-infilling with partial reconstruction3;
v) **SIPR-T5**: T5-style span-infilling with running partial reconstruction and sentinel tokens;
vi) **SIFR**: span-infilling with full reconstruction of the input sequence.

Since methods differ in their original works in terms of how spans are selected for masking, we unify these based on the approach taken by Lewis et al. (2020) and use a Poisson distribution (λ = 3).4 For reference, we also compare to a non-pretrained (**No PT**) baseline, which is trained from scratch on the downstream task.

3For consistency, our SIPR-MS differs from the original MASS objective in that we select multiple spans for masking in a given input, while (Song et al., 2019) only select a single span per training example, and we do not perform any random mask replacement.

| | | No PT | MLM+PS | MLM | PS | SIFR | SIPR-MS | SIPR-T5 | SIFR/PR 1:3 | SIFR/PR 1:1 | SIFR/PR 3:1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| inquisitive | default | 54.18 | 35.24 | 50.39 | 40.61 | 50.79 | 44.87 | 54.04 | 47.90 | 57.84 | 50.80 |
| | CtxAug | -8.27 | +9.68 | +5.42 | -0.20 | +6.37 | -10.90 | -7.07 | +2.42 | +2.82 | +5.51 |
| positive | default | 29.24 | 39.19 | 29.17 | 34.52 | 31.46 | 35.65 | 34.00 | 31.46 | 35.65 | 34.00 |
| | CtxAug | +11.99 | +7.12 | +5.11 | +15.33 | +6.71 | +6.47 | +13.93 | +6.71 | +6.47 | +13.93 |

(Columns No PT through SIPR-T5 are the single objectives discussed in §4.1; the SIFR/PR columns with mixing ratios 1:3, 1:1 and 3:1 are the mixed objectives discussed in §4.3.)

**Model** We use the BART model architecture, which resembles a standard encoder-decoder transformer with GeLU activation functions. Following Dufter and Schütze (2020) we scale the model down by dividing the size of the hidden layer, intermediate feed-forward layers, and the number of attention heads by 12. This results in a hidden size of 64, an intermediate size of 256 and a single attention head.

**Data** As pre-training data we select the BookCorpus5 (Zhu et al., 2015; Bandy and Vincent, 2021) due to its stylistic similarities to our downstream task (e.g. dialogues between characters). We perform simple preprocessing, removing preambles and metadata by filtering lines without sentence-final punctuation or lines containing more than 70% punctuation or numbers. We set aside 100 randomly selected books for validation. The resulting corpus contains approximately 72M and 400k sentences for training and validation, respectively. Given our budgeted training setup, the model only sees approximately 65% of the data before reaching the maximum number of update steps. Finally, we train our own BART tokenizer on the training split with a maximum vocabulary size of 4,096.

## 3.2 Fine-Tuning & Inference

To measure the impact of CtxAug for zero-shot controlled generation, we follow the experimental setup from Hazarika et al. (2022) and focus on promoting inquisitive and positive responses in knowledge-grounded dialogue generation with the Topical-Chat dataset (Gopalakrishnan et al., 2019).
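To make the contrast between the span-infilling objectives of §3.1 concrete before turning to the remaining fine-tuning details, the sketch below builds one noised training example under each target format. It is a schematic illustration, not the paper's implementation: the number of spans, the masking ratio, the special-token names and the use of whole-word tokens are simplifying assumptions, and details such as T5's final sentinel are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spans(tokens, n_spans=2, lam=3):
    """Pick n_spans non-overlapping spans; span lengths ~ Poisson(lam=3), as in Section 3.1."""
    spans, covered, attempts = [], set(), 0
    while len(spans) < n_spans and attempts < 100:
        attempts += 1
        length = int(min(max(rng.poisson(lam), 1), max(len(tokens) // 2, 1)))
        start = int(rng.integers(0, len(tokens) - length + 1))
        if covered & set(range(start, start + length)):
            continue
        spans.append((start, start + length))
        covered.update(range(start, start + length))
    return sorted(spans)

def noised_example(tokens, spans, objective):
    """Return (source, target) token lists for one span-infilling pre-training objective."""
    in_span = lambda i: any(s <= i < e for s, e in spans)
    if objective == "SIFR":      # BART-like: each span -> one <mask>, reconstruct in full
        src, last = [], 0
        for s, e in spans:
            src += tokens[last:s] + ["<mask>"]
            last = e
        return src + tokens[last:], list(tokens)
    if objective == "SIPR-T5":   # T5-like: sentinel per span, target holds only the spans
        src, tgt, last = [], [], 0
        for i, (s, e) in enumerate(spans):
            src += tokens[last:s] + [f"<extra_id_{i}>"]
            tgt += [f"<extra_id_{i}>"] + tokens[s:e]
            last = e
        return src + tokens[last:], tgt
    if objective == "SIPR-MS":   # MASS-like: mask tokens in place, predict only masked tokens
        src = ["<mask>" if in_span(i) else tok for i, tok in enumerate(tokens)]
        tgt = [tok for i, tok in enumerate(tokens) if in_span(i)]
        return src, tgt
    raise ValueError(objective)

tokens = "the model quickly learns the new task from the supervised signal provided".split()
spans = sample_spans(tokens)
for objective in ("SIFR", "SIPR-T5", "SIPR-MS"):
    src, tgt = noised_example(tokens, spans, objective)
    print(f"{objective:8s} source: {' '.join(src)}\n{'':8s} target: {' '.join(tgt)}")
```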
The task is to generate the target dialogue turn given a relevant knowledge snippet k and the dialogue history h T, where T is the number of turns. At inference time, we use top-p sampling (p=0.9) with beam size of 4 and a temperature of 0.7. Sequences are generated with a maximum length of 40 tokens. For all experiments, we pre-train and fine-tune with 3 different seeds before performing inference with 5 different seeds. This results in a total of 15 inference runs for each model. To promote inquisitiveness with CtxAug we randomly sample 10 questions from the training data to construct the control code. To promote positive sentiment, we use a limited set of only 5 short phrases. Finetuning and inference experiments are performed with Hugging Face's Transformers library (Wolf et al., 2020). We include the full details on training and inference hyperparameters in Appendix A. 6 ## 4 Results 4.1 Pre-Training Objectives For Ctxaug Table 1 shows the effectiveness of CtxAug given the different pre-training objectives considered. For promoting inquisitive responses (top row), BART's original denoising objective (MLM+PS) exhibits the strongest positive response to CtxAug over the default generation setting. Meanwhile, isolating the two independent noising operations used in this objective reveals that sentence permutation (PS) 6We make our code available at https://github.com/ ZurichNLP/understanding-ctx-aug. alone is insufficient for CtxAug to succeed. Comparing span-infilling pre-training objectives (SI*), we can observe that the format of the target sequence used during pre-training is crucial. With noising operations being equal, CtxAug for inquisitive responses works effectively only when the model is pre-trained to reconstruct the target sequence in full, while partial reconstruction yields similar results to that of no pre-training (No PT). In contrast, encouraging more positive responses with CtxAug (bottom row) succeeds regardless of the pre-training strategy7, and even without any pre-training. This suggests that multiple factors may contribute to the overall effectiveness of CtxAug in practice. Firstly, the fact that models trained from scratch can still leverage CtxAug for positive sentiment suggests that there may be effects arising from correlation of source and target attribute features in the fine-tuning data. In such a case, CtxAug may not generalise to other datasets and tasks. Secondly, and most notably, full reconstruction pre-training objectives support CtxAug more than partial reconstruction objectives. Reconstructing the corrupted input sequence in full naturally encourages a strong correlation between input and target attributes. This more closely resembles the central mechanism in CtxAug where a vector representing the desired target attribute is 'reconstructed' in the target sequence. Meanwhile, partial reconstruction objectives yield primarily disjointed source and target sequences. This does not necessarily preclude the possibility of inferring relationships between co-occuring attributes over long distances (e.g., sentence-initial subject-verb inversion together with a sentence-final question mark). However, the likelihood of successfully learning these becomes plausible only in scenarios where some co-occurring features remain unmasked and others are reconstructed. This limits the efficacy of CtxAug for promoting inquisitiveness, and possibly other attributes that occur over longer distances, to certain pre-training methods. 
## 4.2 Duration Of Fine-Tuning On Ctxaug To investigate how CtxAug is impacted by the duration of fine-tuning, we conduct an ablation study in which we perform inference at regular intervals throughout fine-tuning. Figure 2 depicts how ![3_image_0.png](3_image_0.png) CtxAug behaves relative to the default generation setting as the model learns the downstream task. When starting from randomly initialised parameters, given question control phrases (top left), the model fails to leverage the control code effectively, resulting in degradation in inquisitiveness relative to the default generations settings. For positive sentiment (bottom left), however, we can observe that the fine-tuning data provides a sufficient signal to support CtxAug. In this setting the model starts to effectively make use of the control code after three epochs. Meanwhile, the SIFR pre-trained model is able to leverage CtxAug at all stages of fine-tuning, highlighting the vestigial behaviour from pre-training. This is most visible when encouraging positive sentiment responses (bottom right), where, in the earliest stages of fine-tuning, we can observe a significant increase in the number of positive sentiment responses generated. As the model adapts to the task, this advantage tapers off, indicating that vestigial behaviours from pre-training weaken over time. For inquisitive responses (top right), the effect of CtxAug is most noticeable after the first few fine-tuning epochs, suggesting that this type of pretraining objective endows the model with a useful bias that can be effectively exploited by CtxAug. We also note that while the effect is only slight under this condition, it reflects the model's overall tendency to generate responses pertaining to the target attributes in question. As the model learns the task, inquisitiveness naturally increases, while positiveness decreases. Manual inspection confirmed that at the earliest stages of training, models tended to output generic and positive responses (e.g. "I know!"), which gradually become slightly more varied to include negative responses (e.g. "I don't know that.") and simple questions. ## 4.3 Mixing Pre-Training Objectives Any encumberment to leveraging interesting and useful post-hoc control techniques such as CtxAug with fine-tuned PLMs may be considered a significant downside of upstream decisions relating to the pre-training objective. Yet in order to scale models and training data, partial reconstruction objectives have been chosen due to their lower computational cost (Raffel et al., 2020). One possible option for striking a desirable balance between pre-training efficiency and downstream flexibility could be to combine different pre-training objectives either within a single pre-training scheme or as a secondary pre-training before fine-tuning (e.g. Lester et al., 2021b). To this end, we experiment with combining SIFR and SIPR-T5 within a single pre-training scheme, SIFR/PR, and investigate various mixing ratios: 1:3, 1:1 and 3:1. Table 1 (right) shows that gradually increasing the degree to which the model is tasked with full reconstruction of the noised input improves the effectiveness of CtxAug but even at 75% adoption (3:1), it fails to reach equivalence with using only SIFR. ## 5 Related Work The study of PLMs, their abilities, properties and behaviours, occupies a significant space in today's NLP research (e.g. Rogers et al., 2020; Lialin et al., 2022; Clark et al., 2019). 
Numerous works have evaluated and compared downstream performance of seq2seq PLMs, covering a wide array of tasks including abstractive summarisation (Blekanov et al., 2022; Zhu et al., 2021; Tang et al., 2022; Fabbri et al., 2021), question answering (Luo et al., 2022), graph-to-text generation (Ribeiro et al., 2021), dialogue modelling (Shin et al., 2022) and text simplification (Štajner et al., 2022), among others. While such comparisons are useful for guiding researchers in selecting the right model for a task and can sometimes reveal interesting differences on certain task-specific data sets, they tend to neglect important differences between PLMs, such as the underlying model size or the type and amount of data used for pre-training. Thus, it remains difficult to explain exactly why a particular model performs better or worse on a given task. Meanwhile, there is a growing body of literature aimed at explaining some of the interesting and often unexpected behaviours observed among large PLMs. In this area, multilinguality has been linked to the duration of fine-tuning (Dufter and Schütze, 2020), and the ability to perform in-context fewshot learning and zero-shot generalisation has been linked to multiple factors. These include model scale (Brown et al., 2020), the types and formatting of demonstrations (Min et al., 2022), memorisation of pre-training data (Xie et al., 2022) and its distributional properties (Chan et al., 2022). The selection of architecture and pre-training objectives have also been found to be influential (Wang et al., 2022). Our work falls into this category and aims to explain which aspects of seq2seq pre-training objectives contribute to the ability to exploit additional conditioning context provided at inference time. ## 6 Conclusions As PLMs become increasingly commonplace, so too does the importance of understanding the potential downstream consequences of decisions relating to their design. Our experiments indicate that context augmentation, as a method for zero-shot controlled natural language generation, is susceptible to inductive biases learned in pre-training given different types of control codes. Based on this, we conclude that pre-training objectives that aim to reconstruct a noised input *in full*, similar to BART, are best suited to leverage this technique. Looking forward, we expect that even for seemingly equally effective pre-training objectives, we can identify differences in behaviour, e.g. applicability of control methods, that remain after fine-tuning. In searching for optimal pre-training strategies for PLMs, this opens another dimension that needs to be considered and better understood. ## Acknowledgements We kindly thank Fabian Aiolfi for fruitful discussions throughout this project, as well as the anonymous reviewers for their helpful feedback. This work was facilitated by the infrastructure services provided by S3IT, the Service and Support for Science IT team at the University of Zurich. Rico Sennrich is funded by the Swiss National Science Foundation (project MUTAMUR; no. 176727). ## Limitations Comparing downstream performance of pretraining objectives with large-scale models is prohibitively expensive. Because of this, we employ scaled-down models that closely resemble the architectures and training procedures of popular PLMs. In doing so, we assume that our findings are transferable to some larger publicly available models. As noted by Hazarika et al. 
(2022), CtxAug offers an interesting alternative to prompting generative LMs that are significantly smaller than those that typically exhibit few- and zero-shot capabilities (Brown et al., 2020). While we provide support for both Hazarika et al. (2022)'s claim and our assumption in preliminary and supplementary experiments with select PLMs (see Section 1 and Appendix B), these experiments are still performed on models of up to 140M parameters. Therefore, we stop short of concluding that our findings generalise to LLMs, which dwarf these models in comparison. Additionally, the number and types of target attributes that a user may want to control for in various downstream text generation tasks are potentially endless. However, our study focuses on only two possible target attributes, namely, inquisitiveness and positive sentiment, for the task of conversational dialogue modelling. In this way, our work partially serves as a re-implementation and reproduction study, confirming the main findings from Hazarika et al. (2022), but also highlighting limitations. ## References Ahmed Alajrami and Nikolaos Aletras. 2022. How does the pre-training objective affect what large language models learn about linguistic properties? In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 2: Short Papers), pages 131–147, Dublin, Ireland. Association for Computational Linguistics. John Bandy and Nicholas Vincent. 2021. Addressing "Documentation Debt" in Machine Learning: A Retrospective Datasheet for BookCorpus. In *Proceedings of the Neural Information Processing Systems* Track on Datasets and Benchmarks, volume 1. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. *Journal of Machine Learning Research*, 3(null):1137–1155. Ivan S. Blekanov, Nikita Tarasov, and Svetlana S. Bodrunova. 2022. Transformer-Based Abstractive Summarization for Reddit and Twitter: Single Posts vs. Comment Pools in Three Languages. *Future Internet*, 14(3):69. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Stephanie C. Y. Chan, Adam Santoro, Andrew K. Lampinen, Jane X. Wang, Aaditya Singh, Pierre H. Richemond, Jay McClelland, and Felix Hill. 2022. Data Distributional Properties Drive Emergent In-Context Learning in Transformers. ArXiv:2205.05055 [cs]. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What Does BERT Look at? An Analysis of BERT's Attention. In *Proceedings of the 2019 ACL Workshop BlackboxNLP:* Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. 
Association for Computational Linguistics. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In *Advances in neural information processing systems*, volume 28. Curran Associates, Inc. Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and Play Language Models: A Simple Approach to Controlled Text Generation. CoRR, abs/1912.02164. ArXiv: 1912.02164. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. *arXiv:1810.04805 [cs]*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Philipp Dufter and Hinrich Schütze. 2020. Identifying Elements Essential for BERT's Multilinguality. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4423–4437, Online. Association for Computational Linguistics. Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11(19):625–660. Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´ Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. *Transactions of the Association for* Computational Linguistics, 9:391–409. Ian J. Goodfellow, Mehdi Mirza, Xia Da, Aaron C. Courville, and Yoshua Bengio. 2014. An empirical investigation of catastrophic forgeting in gradientbased neural networks. In 2nd international conference on learning representations, ICLR 2014, banff, AB, canada, april 14-16, 2014, conference track proceedings. Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tür. 2019. Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations. In *Interspeech 2019*, pages 1891–1895. ISCA. Devamanyu Hazarika, Mahdi Namazifar, and Dilek Hakkani-Tür. 2022. Attention Biasing and Context Augmentation for Zero-Shot Control of EncoderDecoder Transformers for Natural Language Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):10738–10748. Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Yang Liu, Mihail Eric, and Dilek Hakkani-Tur. 2020. Policy-driven neural response generation for knowledge-grounded dialog systems. In Proceedings of the 13th International Conference on Natural Language Generation, pages 412–421, Dublin, Ireland. Association for Computational Linguistics. Peter Izsak, Moshe Berchansky, and Omer Levy. 2021. How to Train BERT with an Academic Budget. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10644–10652, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. *Transactions of the Association for* Computational Linguistics, 8:64–77. 
Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2021. GeDi: Generative discriminator guided sequence generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4929–4952, Punta Cana, Dominican Republic. Association for Computational Linguistics. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021a. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021b. The Power of Scale for Parameter-Efficient Prompt Tuning. ArXiv:2104.08691 [cs]. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Vladislav Lialin, Kevin Zhao, Namrata Shivagunde, and Anna Rumshisky. 2022. Life after BERT: What do other muppets understand about language? In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 3180–3193, Dublin, Ireland. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-time controlled text generation with experts and anti-experts. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691–6706, Online. Association for Computational Linguistics. Man Luo, Kazuma Hashimoto, Semih Yavuz, Zhiwei Liu, Chitta Baral, and Yingbo Zhou. 2022. Choose Your QA Model Wisely: A Systematic Study of Generative and Extractive Readers for Question Answering. In Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge, pages 7–22, Dublin, Ireland and Online. Association for Computational Linguistics. Michael McCloskey and Neal J. Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. volume 24 of *Psychology of learning and motivation*, pages 109–165. Academic Press. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? arXiv:2202.12837 [cs]. ArXiv: 2202.12837. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. 
In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, and Roger Wattenhofer. 2021. A plug-andplay method for controlled text generation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 3973–3997, Punta Cana, Dominican Republic. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Understanding by Generative Pre-Training. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. ArXiv:1910.10683 [cs, stat]. Leonardo F. R. Ribeiro, Martin Schmitt, Hinrich Schütze, and Iryna Gurevych. 2021. Investigating pretrained language models for graph-to-text generation. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI, pages 211–227, Online. Association for Computational Linguistics. Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. *Transactions of the Association* for Computational Linguistics, 8:842–866. Jamin Shin, Hangyeol Yu, Hyeongdon Moon, Andrea Madotto, and Juneyoung Park. 2022. Dialogue Summaries as Dialogue States (DS2), Template-Guided Summarization for Few-shot Dialogue State Tracking. ArXiv:2203.01552 [cs]. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In *International* conference on machine learning, pages 5926–5936. Xiangru Tang, Arjun Nair, Borui Wang, Bingyao Wang, Jai Desai, Aaron Wade, Haoran Li, Asli Celikyilmaz, Yashar Mehdad, and Dragomir Radev. 2022. CONFIT: Toward faithful dialogue summarization with linguistically-informed contrastive fine-tuning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5657–5668, Seattle, United States. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in neural information processing systems*, volume 30. Curran Associates, Inc. Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, and Colin Raffel. 2022. What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization? ArXiv:2204.05832 [cs, stat]. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2022. An Explanation of Incontext Learning as Implicit Bayesian Inference. arXiv:2111.02080 [cs]. ArXiv: 2111.02080. Kevin Yang and Dan Klein. 
2021. FUDGE: Controlled text generation with future discriminators. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3511–3535, Online. Association for Computational Linguistics. Chenguang Zhu, Ziyi Yang, Robert Gmyr, Michael Zeng, and Xuedong Huang. 2021. Leveraging Lead Bias for Zero-shot Abstractive News Summarization. In *Proceedings of the 44th International ACM SIGIR* Conference on Research and Development in Information Retrieval, pages 1462–1471, Virtual Event Canada. ACM. Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A Benchmarking Platform for Text Generation Models. In *The 41st International ACM SIGIR Conference on* Research & Development in Information Retrieval, pages 1097–1100, Ann Arbor MI USA. ACM. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *The IEEE international conference on computer vision (ICCV)*. Sanja Štajner, Kim Cheng Sheang, and Horacio Saggion. 2022. Sentence Simplification Capabilities of Transfer-Based Models. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11):12172– 12180. ## A Training Details A.1 Pre-Training Hyperparameters Our scaled down models have approximately 1M parameters and are pre-trained using the opensource Fairseq library (Ott et al., 2019). Following recommendations for budgeted pre-training by Izsak et al. (2021), we use a small batch size of 4,096 tokens and a triangular learning rate schedule which warms up for 2,500 steps and decays to zero with over 250k update steps. We also restrict the maximum sequence length to 256 which is sufficient for our downstream task of dialogue modelling. All other hyperparameters are kept the same as those used by Lewis et al. (2020). Our mini model pre-training takes approximately 6 hours on a single Nvidia K80 GPU (16GB memory). ## A.2 Fine-Tuning On Topical-Chat Topical-Chat comprises conversational dialogues between pairs of crowd workers. The crowd workers were provided with reading sets containing different fun facts on eight different topics including sports, pop culture and politics as interesting discussion points. For each target dialogue turn in the dataset, it is assumed that the relevant knowledge snippet is provided as additional context based on previous work from Hedayatnia et al. (2020). Table 2 provides an overview of the dataset's splits. To fine-tune on Topical-Chat, we followed the setup adopted by Hazarika et al. (2022). Specifically, the input sequence comprises a fixed number of 'bucketed' tokens. 32 tokens are reserved for the knowledge snippet and 25 tokens for each turn in the dialogue history. A <pad> token is used to fill empty positions within each bucket and individual text sequences are truncated if their length | Split | Items | |--------------|---------| | Train | 145,238 | | Valid | 8,986 | | Test (freq.) | 9,065 | | Test (rare) | 9,075 | exceeds the allocated bucket size. Dialogue history turns are delimited with speaker identifier tokens and the entire input sequence is prepended with a <bos> token. The model is trained for a maximum of 10 epochs with an effective batch size of 20 and a learning rate of 6.25e ´ 5. The maximum target sequence length is set to 64. 
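As a concrete illustration of the bucketed input construction described above, the sketch below truncates or pads the knowledge snippet and each dialogue turn to its fixed bucket size and prepends a <bos> token. The speaker token strings and the exact ordering of buckets are assumptions made here for illustration; only the bucket sizes (32 and 25) and the use of <pad> and <bos> are taken from the text.

```python
def bucket(tokens, size, pad="<pad>"):
    """Truncate or right-pad a token list to exactly `size` positions."""
    return tokens[:size] + [pad] * max(0, size - len(tokens))

def build_model_input(knowledge, history, knowledge_size=32, turn_size=25,
                      bos="<bos>", pad="<pad>"):
    """Assemble the fixed-size 'bucketed' input sequence.

    `history` is a list of (speaker_token, turn_tokens) pairs; the speaker token
    strings and bucket ordering used here are illustrative assumptions.
    """
    seq = [bos] + bucket(knowledge, knowledge_size, pad)
    for speaker_token, turn in history:
        seq += [speaker_token] + bucket(turn, turn_size, pad)
    return seq

example = build_model_input(
    "Daniel Radcliffe voiced the cartoon parody of Edward Cullen".split(),
    [("<speaker1>", "Do you remember the Treehouse of Horror XXI ?".split()),
     ("<speaker2>", "I do not remember that . Was it a good episode ?".split())],
)
```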
Fine-tuning on a single Nvidia K80 GPU (16GB memory) takes around 1.5 to 2.5 hours depending on the model size. ## A.3 Inference On Topical-Chat At inference time, we use the same hyperparameters for all models. Specifically, we use top-p sampling (p=0.9) with beam size of 4 and a temperature of 0.7. The maximum sequence length is set to 40 tokens. When applying CtxAug we manually re-weight the cross attention distribution using method described in Hazarika et al. (2022). Again, we used the recommend hyperparameter value of 5, which the authors found to provide a good balance between exhibiting the target attribute and maintaining fluency. To account for randomness, we run inference with multiple random seeds, which takes approximately 25 minutes for each experiment setting using a batch size of 120. To construct the control code, we adopt the same methods as Hazarika et al. (2022). For inquisitiveness, we randomly sample 10 questions from the Topical-Chat training split. These 10 questions are then embedded once to construct the control code that is concatenated with every instance in the test set. Note that the sampling process is dependent on the random seed for each inference run. This means that each seeded inference setting uses a different set of questions to construct the control code. For positive sentiment, we always use the same five phrases defined by Hazarika et al. (2022): "That's awesome", "That's cool", "Oh that is great", "It's great to", "It's wonderful to". Since Hazarika et al. (2022) reported negligible differences between the different sampling strategies for finding control phrases, we refrained from doing an extensive search over alternative methods and opted to use their recommended settings. Our main experiments are reported on the Topical-Chat 'frequent' test set, however, we observed similar trends across the board when evaluating on the Topical-Chat 'rare' test set also. ## B Ctxaug For Positive Sentiment Encouraging positive sentiment with CtxAug applied to our scaled down models proved successful ![9_image_0.png](9_image_0.png) for all models regardless of the pre-training strategy used. Figure 3 shows that this result also holds with much larger publicly available models, with all differences being statistically significant according to a two-tailed unpaired t-test (p < 0.01). Note that the weaker effect of CtxAug for positive sentiment compared to controlling for response inquisitiveness with BART-base agrees with the findings from Hazarika et al. (2022). ## C Performance Metrics Inspecting the results of automatic metrics, we find only negligible differences on downstream performance across different denoising pre-training objectives, supporting previous findings (Lewis et al., 2020; Alajrami and Aletras, 2022; Raffel et al., 2020). Table 5 provides results for commonly used metrics for evaluating dialogue models. Specifically, we report the total number of unique responses generated (Uniq. Resp.), average response length (Resp. len.), perplexity (PPL) as computed by a distilled GPT-2 model8, the portion of unique unigrams per response (Dist-1), SelfBLEU (Zhu et al., 2018), BLEU (Papineni et al., 8https://huggingface.co/distilgpt2 2002), ROUGE (Lin, 2004) and METEOR (Banerjee and Lavie, 2005). The latter three metrics are computed using ground-truth responses as references and are implemented in Hugging Face's Evaluate library9. Without pre-training, the difference in performance for all metrics is noticeable. 
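For reference, below is a minimal sketch of how several of the reported metrics can be computed with Hugging Face's Evaluate library, with the diversity statistics implemented directly. This is a sketch rather than the exact evaluation pipeline: perplexity and Self-BLEU are omitted for brevity, and the whitespace tokenisation is a simplification.

```python
# pip install evaluate nltk rouge_score
import evaluate

def distinct_1(responses):
    """Average portion of unique unigrams per response (Dist-1)."""
    scores = [len(set(r.split())) / len(r.split()) for r in responses if r.split()]
    return sum(scores) / len(scores)

def dialogue_metrics(predictions, references):
    """Subset of the metrics reported in Table 5."""
    bleu = evaluate.load("bleu").compute(
        predictions=predictions, references=[[r] for r in references])
    rouge = evaluate.load("rouge").compute(predictions=predictions, references=references)
    meteor = evaluate.load("meteor").compute(predictions=predictions, references=references)
    return {
        "Uniq. Resp.": len(set(predictions)) / len(predictions),
        "Resp. len.": sum(len(p.split()) for p in predictions) / len(predictions),
        "Dist-1": distinct_1(predictions),
        "BLEU": bleu["bleu"],
        "ROUGE-1": rouge["rouge1"],
        "METEOR": meteor["meteor"],
    }
```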
| Model | Noised Input | Target | |---------------------------|-----------------------------------------------|------------------------------------------------------| | MASS (Song et al., 2019) | I like [M] [M]. [M] [M] [M] in 1989. | [P] [P] The Simpsons [P] It was released [P] [P] [P] | | T5 (Raffel et al., 2020) | I like [M1]. [M2] in 1989. | [M1] The Simpsons [M2] It was released [M] | | BART (Lewis et al., 2020) | [M] in 1989. I like [postcards]. | I like The Simpsons. It was released in 1989. | | MLM+PS | [M] in 1989. I like [postcards]. | I like The Simpsons. It was released in 1989. | | MLM | I like postcards. [M] in 1989. | I like The Simpsons. It was released in 1989. | | PS | It was released in 1989. I like The Simpsons. | I like The Simpsons. It was released in 1989. | | SIPR-MS | I like [M] [M]. [M] [M] [M] in 1989. | [P] [P] The Simpsons [P] It was released [P] [P] [P] | | SIPR-T5 | I like [M1]. [M2] in 1989. | [M1] The Simpsons [M2] It was released [M] | | SIFR | I like [M]. [M] in 1989. | I like The Simpsons. It was released in 1989. | Table 3: General-purpose seq2seq denoising objectives used for pre-training. The bottom section depicts the pre-training objectives used in our experiments for comparison with those used in publicly available models. [M] and [P] indicate mask and pad tokens, respectively, while words appearing in square brackets indicate a token selected randomly from the vocabulary, following the 80/10/10 mask, replace, keep strategy used in the original MLM objective (Devlin et al., 2019b). | Knowledge snippet: | Daniel Radcliffe voiced the cartoon parody of Twilight's Edward Cullen on The Simpsons episode Treehouse of Horror XXI. | |---------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Speaker A: | Yep me either. I saw the 70's show was made in the UK and was cancelled after only 10 shows. | | Speaker B: | Wow, I guess they didnt love it like people did here. Did you realize that in the first 400 episodes of the SImpsons Homer had 188 jobs. I thought he always worked at the plant. | | Speaker A: | Oh wow that's a lot of jobs. I had no idea. | | Speaker B: | Me neither, that kind of shocked me. Do you remember the Treehouse of Horror xxi from the Simpsons? | | Speaker A: | I do not remember that. Was it a good episode? | | Target: | It had Daniel Radcliffe voicing Edward Cullen. | | Table 4: Example of the knowledge-grounded dialogue task in Topical-Chat. | | | No PT | MLM+PS | MLM | PS | SIFR | SIPR-MS | SIPR-T5 | | |-------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------| | Uniq. Resp. | 0.57(±0.06) | 0.76(±0.01) | 0.67(±0.02) | 0.68(±0.01) | 0.68(±0.02) | 0.7(±0.01) | 0.7(±0.02) | | Resp. len. 
| 13.47(±0.53) | 15.75(±0.19) | 16.21(±0.27) | 15.41(±0.34) | 15.99(±0.07) | 15.41(±0.33) | 16.5(±0.34) | | PPL | 50.87(±6.95) | 59.09(±2.4) | 54.91(±3.66) | 57.26(±2.51) | 53.71(±1.09) | 60.62(±3.27) | 58.64(±0.98) | | Dist-1 | 0.91(±0.0) | 0.92(±0.0) | 0.93(±0.0) | 0.93(±0.0) | 0.93(±0.0) | 0.92(±0.01) | 0.93(±0.0) | | Self-BLEU | 0.86(±0.01) | 0.74(±0.0) | 0.79(±0.01) | 0.78(±0.01) | 0.79(±0.01) | 0.78(±0.01) | 0.78(±0.0) | | BLEU | 0.01(±0.0) | 0.03(±0.0) | 0.03(±0.0) | 0.03(±0.0) | 0.03(±0.0) | 0.03(±0.0) | 0.04(±0.0) | | ROUGE-1 | 0.16(±0.01) | 0.2(±0.0) | 0.21(±0.0) | 0.2(±0.0) | 0.21(±0.0) | 0.2(±0.0) | 0.21(±0.0) | | METEOR | 0.11(±0.0) | 0.15(±0.0) | 0.15(±0.0) | 0.15(±0.0) | 0.15(±0.0) | 0.15(±0.0) | 0.16(±0.0) | Table 5: Performance metrics for dialogue modelling with Topical-Chat evaluated on the 'frequent' test set. Results are averaged from 3 different pre-trained/fine-tuned models initialised with different seeds, each with 5 different seeded runs for inference. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✗ A2. Did you discuss any potential risks of your work? We do not foresee any risks stemming from the contribution in this paper. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 4 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 ✓ B1. Did you cite the creators of artifacts you used? 3 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Data and model training artefacts used in this study are either open-source or were previously made publicly available. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Artefacts created in this study relate to small-scale pre-trained language models. We do not foresee an intended use for these models outside of this study. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No new data was collected for this study. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? For all data artefacts used, we cite the original works in which they were presented and which provides information about their coverage of domains, languages, linguistic phenomena, etc. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** 3 ✓ C1. 
Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A, Appendix C ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Where applicable, this information is included in the relevant Github repository that will be made available with the paper. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
haemmerl-etal-2023-exploring
Exploring Anisotropy and Outliers in Multilingual Language Models for Cross-Lingual Semantic Sentence Similarity
https://aclanthology.org/2023.findings-acl.439
Previous work has shown that the representations output by contextual language models are more anisotropic than static type embeddings, and typically display outlier dimensions. This seems to be true for both monolingual and multilingual models, although much less work has been done on the multilingual context. Why these outliers occur and how they affect the representations is still an active area of research. We investigate outlier dimensions and their relationship to anisotropy in multiple pre-trained multilingual language models. We focus on cross-lingual semantic similarity tasks, as these are natural tasks for evaluating multilingual representations. Specifically, we examine sentence representations. Sentence transformers which are fine-tuned on parallel resources (that are not always available) perform better on this task, and we show that their representations are more isotropic. However, we aim to improve multilingual representations in general. We investigate how much of the performance difference can be made up by only transforming the embedding space without fine-tuning, and visualise the resulting spaces. We test different operations: Removing individual outlier dimensions, cluster-based isotropy enhancement, and ZCA whitening. We publish our code for reproducibility.
# Exploring Anisotropy And Outliers In Multilingual Language Models For Cross-Lingual Semantic Sentence Similarity Katharina Hämmerl1,2And **Alina Fastowski**1 Jindrich Libovický ˇ 3and **Alexander Fraser**1,2 1Center for Information and Language Processing, LMU Munich, Germany {haemmerl,fraser}@cis.lmu.de 2Munich Centre for Machine Learning (MCML), Germany 3Faculty of Mathematics and Physics, Charles University, Czech Republic ## Abstract Previous work has shown that the representations output by contextual language models are more anisotropic than static type embeddings, and typically display outlier dimensions. This seems to be true for both monolingual and multilingual models, although much less work has been done on the multilingual context. Why these outliers occur and how they affect the representations is still an active area of research. We investigate outlier dimensions and their relationship to anisotropy in multiple pre-trained multilingual language models. We focus on cross-lingual semantic similarity tasks, as these are natural tasks for evaluating multilingual representations. Specifically, we examine sentence representations. Sentence transformers which are fine-tuned on parallel resources (that are not always available) perform better on this task, and we show that their representations are more isotropic. However, we aim to improve multilingual representations in general. We investigate how much of the performance difference can be made up by only transforming the embedding space without fine-tuning, and visualise the resulting spaces. We test different operations: Removing individual outlier dimensions, cluster-based isotropy enhancement, and ZCA whitening. We publish our code for reproducibility.1 ## 1 Introduction Since BERT-like (Devlin et al., 2019) language models rose to popularity, much has been made of the study of their hidden states and parameters (cf. Rogers et al., 2020). Thanks to their ability to incorporate context, they have been a major improvement for most tasks over static input embeddings. However, a certain issue has been shown in a number of works to affect contextual language models to a greater degree: outlier dimensions in the 1https://github.com/kathyhaem/outliers ![0_image_0.png](0_image_0.png) weights and hidden states (Kovaleva et al., 2021) and correspondingly, high anisotropy (Gao et al., 2019; Ethayarajh, 2019, inter alia). At the same time, the raw pre-trained embeddings work surprisingly badly for semantic similarity tasks, prompting efforts to train better sentence embeddings such as done by Reimers and Gurevych (2019). In this paper, we are interested in multilingual sentence embedding quality. We discuss both outliers and anisotropy as two related aspects of embedding quality. Outlier dimensions are typically defined as dimensions that consistently produce values of a magnitude more than three or five times the standard deviation of all dimensions (Kovaleva et al., 2021). If a model has outlier dimensions in its hidden states, it will necessarily have higher anisotropy, since these dimensions create a consistent shift towards a certain direction in the embedding space. On the other hand, high anisotropy can also occur without individual dimensions meeting the outlier definition, namely if some principal components composed of multiple dimensions are much larger than others. Therefore, as we understand it, anisotropy is the wider phenomenon of which outliers are a subset. 
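As a concrete illustration (not the released implementation), the following sketch flags outlier dimensions in a matrix of mean-pooled sentence embeddings under the 3σ criterion, reading σ as the standard deviation over the per-dimension means; this reading is an assumption of the sketch.

```python
import numpy as np

def outlier_dimensions(sentence_embeddings: np.ndarray, n_sigma: float = 3.0):
    """Flag outlier dimensions in a matrix of sentence embeddings (n_sentences x dim).

    Illustrative reading of the n-sigma criterion: average each dimension over the
    corpus, then flag dimensions whose mean magnitude exceeds n_sigma times the
    standard deviation computed over all per-dimension means.
    """
    dim_means = sentence_embeddings.mean(axis=0)          # one value per dimension
    sigma = dim_means.std()
    flagged = np.flatnonzero(np.abs(dim_means) > n_sigma * sigma)
    order = np.argsort(-np.abs(dim_means[flagged]))       # largest magnitude first
    return flagged[order].tolist(), dim_means
```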
From a theoretical perspective, high anisotropy is considered a problem because it means that the model is not using the full representation space available, and because it translates to high average cosine similarity even between unrelated words or sentences. Figure 1 illustrates this problem clearly. This can increase the odds of picking a wrong candidate on word and sentence similarity tests, and makes representations produced by the model less expressive and less interpretable. Outlier dimensions, since they contribute to anisotropy, entail similar challenges. On the other hand, they are easy to spot, easy to manipulate, and a straightforward entry point to the anisotropy issue. Previous work has sometimes found that models rely strongly on outlier weights for certain tasks, and are overly vulnerable to pruning a select few weights, e.g. (Kovaleva et al., 2021). Further, outliers have been found to present a challenge in model quantisation (Bondarenko et al., 2021). Because they are aspects of the output representations, studies of anisotropy and outliers often use semantic similarity tests that rely directly on these representations, without fine-tuning the model. We follow this approach as well. In this work, we specifically consider sentence representations. Only a small amount of work has been done on outliers and isotropy in multilingual models, which we focus on. Rajaee and Pilehvar (2022) found that mBERT does not contain outlier dimensions, while XLM-R does. However, both models nevertheless exhibit high anisotropy. Another important aspect to consider in the multilingual case is that even if representations are more or less isotropically distributed, the subspaces for different languages can still be misaligned, which further affects cross-lingual performance. Training with parallel data, as done in Reimers and Gurevych (2020), is one way to radically improve cross-lingual alignment. However, we are interested in pushing models to perform well without parallel data. The present work therefore attempts to separate the effect of anisotropy from other factors that could account for the performance gap, such as the use of parallel data objectives and internal misalignment of languages. Our contributions. This work provides an in-depth exploration of outlier dimensions and anisotropy in XLM-R and other pre-trained multilingual language models, using the Tatoeba (Artetxe and Schwenk, 2019), multilingual STS (Cer et al., 2017), and BUCC 2018 (Zweigenbaum et al., 2018) semantic similarity tasks and looking directly at the relevant hidden state representations. We confirm that certain outlier dimensions have a negative effect on similarity search in the crosslingual setting (§ 5). We find that outlier dimensions can differ between languages, although the largest outliers occur in all or most tested languages (§ 5). Anisotropy also varies across languages, and we observe a possible relationship to pre-training data size (§ 4). In our experiments, mBERT does exhibit outlier dimensions (§ 4). Looking at semantic similarity task performance, we show that zeroing outliers and isotropyenhancing transformations are quick ways to improve model performance on such tasks (§ 5, 6). However, a multilingual sentence-transformer performs much better out-of-the-box, and benefits little to not at all from further increasing isotropy. As we show in § 4, this model is already much more isotropic than XLM-R, its pre-trained equivalent. 
Finally, we give a clearer intuition of the phenomena in question by using tSNE (van der Maaten and Hinton, 2008) to visualise embedding spaces (§ 7). This allows us to grasp more intuitively how anisotropy is one aspect of misalignment between languages in multilingual models. ## 2 Related Work BERT-like models have dominated NLP research in recent years. Multilingual BERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) are two popular models whose variants are used for many different ends. Accordingly, some amount of research has focused on analysing properties of the models, sometimes called "BERTology" (Rogers et al., 2020). The phenomena we discuss in this paper—outlier dimensions and anisotropy—are just two aspects of model analysis. ## 2.1 Describing The Phenomena First, we discuss outlier dimensions specifically. Kovaleva et al. (2021) focus specifically on outlier dimensions in the LayerNorm weights of English BERT. Around the same time that the LayerNorm outliers arise, training loss and evaluation perplexity start to fall off sharply. The exact cause is unknown but this suggests the outliers help the model, which they corroborate by showing that task performance decreases significantly when zeroing out outlier weights after fine-tuning. If zeroing the weights is done before fine-tuning, the model recovers most of the performance, but a slight disadvantage is still observed. Timkey and van Schijndel (2021) take a different view of outliers in that they analyse hidden representations instead of weights. They also focus on similarity measures and find that in this context, the outlier dimensions "obscure representational quality". Rajaee and Pilehvar (2022) are one of few to focus on outliers in multilingual models: They find no outliers in mBERT, but do find them in XLM-R. The paper also looks at the embeddings of different languages separately, an approach we follow for the majority of our experiments. As we mention above, outliers are one way to look at anisotropy in hidden representations. Ethayarajh (2019) is one of the first to present evidence for unusually high anisotropy in contextual embedding models, including BERT and GPT-2. Gao et al. (2019) describe the representation degeneration problem and suggest using cosine regularisation to mitigate it. We discuss mitigation approaches in more detail below (§ 2.3). There are multiple ways to measure (an)isotropy, including but not limited to: - average cosine similarity (cf. Ethayarajh, 2019; Timkey and van Schijndel, 2021) - based on principal components (Mu and Viswanath, 2018) - IsoScore (Rudman et al., 2022) These are continuous measures, with value ranges depending on the method. While lower anisotropy is theoretically desirable, it can be hard to decide at what point a space is "isotropic enough". In the present work, we stick to the first measure, that is, average cosine similarity between random pairs (see § 4). ## 2.2 Searching For Causes It has been shown that word frequency plays a significant role in how representations are distributed in contextual models: For instance, rare words tend to be pushed further from the origin during pretraining, leading to a separation of tokens by frequency. Yu et al. (2022) show that rare token embeddings are the first to become anisotropic during pre-training, and seem to "take down with them" the rest of the space. Puccetti et al. (2022) similarly find that outliers are "driven by token frequency". On the other hand, Luo et al. 
(2021) argue that outliers are caused by positional embeddings which display outliers, and this propagates forward through the model. They demonstrate this by training RoBERTa models with and without positional embeddings. The model without positional embeddings has much worse perplexity, but no outliers. This idea has not been confirmed by other works, and Rajaee and Pilehvar (2022) find that multilingual BERT, despite having positional embeddings, does not display outliers. We use a different mBERT checkpoint in our experiments which does exhibit outliers, but we draw no conclusions about positional embeddings. ## 2.3 Attempts At Mitigation Various methods have been suggested to increase isotropy in the contextual embedding space. During training. Gao et al. (2019), who described anisotropy early on, proposed a cosine regularisation term to mitigate it. This term simply maximises the angle between any non-identical words. Building on this, Zhang et al. (2020) propose Laplacian regularisation as a way to specifically reduce similarity of word pairs that do not occur in similar contexts. Ferner and Wegenkittl (2022) apply a token-level variational loss to an encoder-decoder Transformer, similar to what is done in Variational Auto-Encoders. All three works add the regularisation terms to a model they train from scratch. On the other hand, Ding et al. (2022) test several BERT-like models on GLUE tasks before and after "isotropy calibration" (fine-tuning with regularisation terms), and find that task scores do not consistently improve. They reason that this is because the models already benefit from local isotropy, thus further isotropy calibration does not help. We also note that these experiments are all done on tasks that use fine-tuning. Post-hoc. Rather than training a model from scratch, Li et al. (2020) train normalising flows on STS and similar datasets that they want to test on, starting with a pre-trained BERT model— they call this approach *BERT-flow*. Both Su et al. (2021) and Huang et al. (2021) apply whitening to sentence representations. This operation transforms the mean of the sentence vectors to zero, and the covariance matrix to the identity matrix, as we discuss in more detail in § 6. Su et al. (2021) combine this with a dimensionality reduction strategy. Timkey and van Schijndel (2021) also test several ways of postprocessing representations, such as standardisation and removing the top few principal components. Liang et al. (2021) and Rajaee and Pilehvar (2021a) remove dominant directions from the embedding space. The former learns a set of parameters for weighted removal (scaling) of principal components, while the latter clusters the data before removing the top principal components from each cluster. Rajaee and Pilehvar (2021b) find that removing the dominant directions after SBERT training decreases STS performance, while removing them from the vanilla model improves performance. We corroborate these findings for the multilingual case. Jung et al. (2023) apply isotropyimproving methods, namely normalising flows and Whitening, in the context of dense retrieval models, and find score improvements on the target task. Contrastive fine-tuning. Contrastive learning has become a popular technique in NLP in recent years (Zhang et al., 2022). Among other things, it has been shown to improve sentence embeddings and ensure they are more uniformly distributed. Examples include Gao et al. (2021); Kim et al. (2021); Zhang et al. (2021); Yan et al. 
(2021), and Reimers and Gurevych (2019). The latter, which we use as a reference in this work, uses in-batch contrastive optimisation in later implementations. ## 3 Datasets Because we will show results of each of our experiments as we go along, we start here by introducing the datasets used. ## 3.1 Tatoeba This is a cross-lingual sentence retrieval task compiled by Artetxe and Schwenk (2019) and pruned to 36 languages by Hu et al. (2020). We follow the implementation used by the latter. Each language is matched with English, and the objective is to find the correct translation for each query. The subtasks per language contain 1k examples each. The most similar translations are retrieved using the cosine similarity of the mean-pooled hidden representations from layer eight. The metric is accuracy. ## 3.2 Bucc This is another similarity search task introduced by Zweigenbaum et al. (2018). However, since it focuses on parallel corpus building, not every query sentence has a match in the target language. Therefore, both precision and recall are important to performance. BUCC has four subtasks: German-English, French-English, Russian-English, and Chinese-English. We again follow the implementation by Hu et al. (2020). The test data contains several hundred thousand examples in each corpus, with between 1900 (Chinese) and 14400 (Russian) matched pairs. The task metric is F1. ## 3.3 Multilingual Sts Another cross-lingual semantic similarity task is Multilingual STS (Cer et al., 2017) from SemEval 2017. The task here is to score sentence pairs on a scale from 0 to 5 representing their relative similarity. There are four cross-lingual subtasks, namely Arabic-English, two Spanish-English tasks of varying difficulty, and Turkish-English. Each subtask contains 250 examples. The task metric is Pearson correlation with the gold labels. ## 3.4 Wikipedia Following Rajaee and Pilehvar (2022), we further use a sample of Wikipedia data in six languages (Arabic, English, Spanish, Sundanese, Swahili, and Turkish) for our analysis. We use these for comparability, as we investigate some of the same multilingual models. The datasets contain between 347 (Sundanese) and 4952 (English) sentences. ## 4 Outlier And Anisotropy Analysis Starting with data from Tatoeba, we derive sentence embeddings for all statements in each dataset. By deriving sentence embeddings, we mean encoding each sentence using the model's standard tokeniser, running it through the model in inference mode, then mean-pooling the result while ignoring special tokens. We proceed to calculate anisotropy scores for each language and dataset, as well as the outlier dimensions. We use the 3σ definition of outliers here. Note, however, that by considering sentence embeddings, which are already mean-pooled in one direction, we essentially have a smaller standard deviation and thus a more sensitive measure. For this reason, we also show which outliers are smaller than 5σ by *italicising* them in our tables. | Model | Anisotropy | Outliers | Means | Mean Cosine Contribution | |----------------|--------------|------------|---------|----------------------------| | 588 | -15.18 | 0.77 | | | | 306 | 3.08 | 0.03 | | | | 239 | -2.06 | 0.02 | | | | 180 | 1.86 | 0.01 | | | | XLM-R | 0.92 | 227 | -11.64 | 0.39 | | mBERT | 0.73 | 195 | -8.01 | 0.16 | | 731 | 2.70 | 0.02 | | | | 588 | -6.78 | 0.22 | | | | 145 | -1.54 | 0.02 | | | | 306 | 1.46 | 0.003 | | | | 459 | -1.43 | 0.01 | | | | 741 | 1.21 | 0.01 | | | | Multil. 
S-BERT | 0.35 | | | | For the anisotropy score, we adapt Timkey and van Schijndel's (2021) definition to the sentence level. Let S be a sample of n random sentence pairs from a corpus D. The approximate anisotropy A(fl) of layer l in model f is then: $$A(f_{l})=\frac{1}{n}\cdot\sum_{\{x,y\}\in S}\cos(f_{l}(x),f_{l}(y))\quad\quad(1)$$ where cos(*u, v*) is the cosine similarity. Further, we calculate the contributions to anisotropy of the largest dimensions. Analogously to the overall anisotropy, if CCi(*u, v*) = uivi ∥u∥ ∥v∥ is the contribution of dimension i to the total cosine similarity of u and v, then the contribution of dimension i to the overall anisotropy is: $$C C(f_{l}^{i})=\frac{1}{n}\cdot\sum_{\{x,y\}\in S}C C_{i}(f_{l}(x),f_{l}(y)).\quad(2)$$ We use hidden representations from layer 8 when applying these techniques on Tatoeba data, since this task is usually done using layer 8. We test XLM-R, mBERT, and a multilingual S-BERT (Reimers and Gurevych, 2020) model which we have found to create good sentence embeddings across many languages.2 Results of the analysis are shown in Table 1. XLM-R has an extremely high anisotropy score: Any given random sentence pair is already considered very similar to each other. One of its outlier dimensions (588) contributes far and away the 2The specific model we used is sentence-transformers/xlm-r-100langs-bert-basenli-stsb-mean-tokens and can be found on Huggingface. largest part to the expected cosine similarity. This dimension is still present as an outlier, though with a smaller magnitude and cosine contribution, in the multilingual S-BERT which was derived from XLM-R. The S-BERT model also has much lower anisotropy overall. mBERT shows lower anisotropy than XLM-R but much higher values than the S-BERT. Its two largest dimensions both contribute significantly to anisotropy. Unlike Rajaee and Pilehvar (2022), we do find outlier dimensions in multilingual BERT. It is worth noting that we use a different checkpoint than they do (they use the uncased model, we use the cased version), and we focus on sentence representations rather than individual word embeddings. To verify our findings, we repeat our experiments on the same Wikipedia data they used—this now concerns the final layer of the model. We calculate sentence embeddings in this case as well. These results are listed in Table 2. Note that outlier dimensions can and do differ from layer to layer, which we observe in all three of these models. The multilingual S-BERT has no outliers larger than 5σ in the output layer, but does have larger outlier dimensions in the middle layer 8. It may be that the sentence-transformer tuning affects the later layers first and therefore more thoroughly. In Table 3, we report anisotropy scores per language for our models. We also use Wikipedia data here, since this includes fewer languages but is of a more natural domain than Tatoeba. 
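A minimal NumPy sketch of Eq. (1) and Eq. (2) is given below, assuming the layer-8 hidden states have already been mean-pooled into one vector per sentence. The fixed random seed and the exclusion of identical pairs are choices of this sketch rather than details taken from the definitions above.

```python
import numpy as np

def anisotropy_and_contributions(sentence_embeddings: np.ndarray,
                                 n_pairs: int = 1000, seed: int = 0):
    """Estimate Eq. (1) and Eq. (2) over random sentence pairs.

    Returns the approximate anisotropy A(f_l) and a per-dimension vector of
    average cosine-similarity contributions CC(f_l^i).
    """
    rng = np.random.default_rng(seed)
    n = sentence_embeddings.shape[0]
    xs = rng.integers(0, n, size=n_pairs)
    ys = rng.integers(0, n, size=n_pairs)
    keep = xs != ys                                    # exclude identical pairs
    u, v = sentence_embeddings[xs[keep]], sentence_embeddings[ys[keep]]
    norms = np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1)
    per_dim = (u * v) / norms[:, None]                 # CC_i(u, v) for each pair
    contributions = per_dim.mean(axis=0)               # Eq. (2)
    anisotropy = float(contributions.sum())            # Eq. (1): cosine = sum over dims
    return anisotropy, contributions
```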
XLM-R exhibits such high anisotropy in these sentence embeddings that there is no meaningful difference | Model | Anisotropy | Outliers | Means | Mean Cosine Contribution | |----------------|--------------|------------|---------|----------------------------| | XLM-R | 0.99 | 588 | 17.86 | 0.89 | | 741 | -5.62 | 0.09 | | | | 423 | -1.97 | 0.03 | | | | 731 | -1.54 | 0.02 | | | | 373 | -1.22 | 0.01 | | | | 89 | -1.04 | 0.01 | | | | 511 | -0.99 | 0.01 | | | | 761 | -0.92 | 0.01 | | | | 493 | -0.86 | 0.01 | | | | mBERT (cased) | 0.61 | 308 | -0.80 | 0.01 | | 281 | 0.67 | 0.003 | | | | 176 | 0.57 | 0.002 | | | | 152 | -0.57 | 0.002 | | | | Multil. S-BERT | 0.27 | | | | | Model | ar | en | es | su | sw | tr | |----------------|-------|-------|-------|-------|-------|-------| | XLM-R | 0.996 | 0.997 | 0.996 | 0.996 | 0.995 | 0.996 | | mBERT (cased) | 0.65 | 0.49 | 0.56 | 0.64 | 0.69 | 0.6 | | Multil. S-BERT | 0.21 | 0.17 | 0.19 | 0.28 | 0.59 | 0.17 | Table 3: Anisotropy scores, final layer, per language, on the Wikipedia data. between the scores across languages. However, the other two models both show an interesting pattern: English and Spanish have the most isotropic spaces, with anisotropy increasing roughly as training data size decreases. This observation fits with the idea that anisotropy is frequency-driven (Yu et al., 2022; Puccetti et al., 2022), i.e., that less frequent tokens tend to be pushed further from the origin. Arabic is more anisotropic than Turkish despite having the same (S-BERT) or double (mBERT) the pre-training data size. Presumably this is due to Arabic using a non-Latin script, since the model has seen more Latin-script data. Sundanese and Swahili are the two languages with the smallest pre-training data of this set. Swahili has the highest anisotropy in both models, and by a large margin in the S-BERT model. This is somewhat surprising, since Sundanese has even smaller pre-training data, but may be down to data quality or tokenisation issues. It may even be that the S-BERT tuning included bad Swahili data—however, this is speculation, since the relevant documentation is lacking. For XLM-R, we further graph the average hidden representations per layer using Tatoeba data. Layer 8 is shown in Figure 2; all layers in Figure 4 in the Appendix. ![5_image_0.png](5_image_0.png) ## 5 Zeroing Out Dimensions Based on the outlier analysis, we experiment with zeroing out dimensions from the sentence representations before feeding them to the similarity search functions. The biggest outlier, 588, clearly damages performance by greatly raising the similarity of all sentences. The correct candidate may thus be eclipsed by a false one more easily. Figure 1 illustrates how this occurs. On the x-axis are the ranking positions of candidate sentences, on the y-axis their average cosine distances (inverse to cosine similarity). In the unmodified model, all candidates are highly similar to the query sentence. After removing 588, candidates with lower ranking | Model | Tatoeba | BUCC | |---------------------|-----------|--------| | XLM-R | 50.35 | 59.1 | | XLM-R -588 | 52.99 | 59.6 | | XLM-R -306 | 50.59 | 58.0 | | XLM-R -239 | 51.11 | 59.2 | | XLM-R, 18 dims rem. | 60.09 | 64.4 | | Multil. S-BERT | 85.17 | 85.7 | become much more dissimilar, and the difference between the top candidate and the other sentences increases, which is a desirable property (note that the graphic does not show whether and which candidate sentences changed their ranking as a result). 
In addition to zeroing the largest outliers, we identified other dimensions of interest by their magnitude. We included the ten largest dimensions in each language of Tatoeba, finding a total of 18 dimensions that are in the top ten for any of the 36 languages.3 These dimensions include the outliers previously identified, as well as additional large dimensions. We explored removing these dimensions individually and generally found smaller effects, though still a marked effect for some of them. The results are listed in Table 4. We removed the same dimensions from sentence embeddings of BUCC (Zweigenbaum et al., 2018) data. Interestingly, this sometimes improved precision while also worsening recall. Thus, the overall improvements on this task were small (e.g., 588) or even negated (306). Removing all 18 large dimensions from Tatoeba and BUCC yields +9.7 accuracy and +5.3 F1 over the vanilla XLM-R model, respectively. That said, even with this performance gain, the gap to the sentence-transformer is still very large. In addition, manually zeroing a large number of dimensions depending on the task data cannot be done in a real-world system. ## 6 Isotropy-Enhancing Operations Aside from directly zeroing out individual dimensions, we can apply transformations over the set of embeddings that largely eliminate anisotropy and mean-center the representations. In this work, we test two such transformations: 3[12, 63, 145, 151, 152, 266, 267, 459, 723, 728, 588, 306, 239, 184, 180] 1. ZCA Whitening (cf. Huang et al., 2021) 2. Cluster-based isotropy enhancement (Rajaee and Pilehvar, 2021a) ## 6.1 Zca Whitening Whitening is an operation originally used in data pre-processing, in order to remove correlations between the input data features to a machine learning system. It is also called a "sphering transformation", since the resulting data space is a hyperdimensional sphere. However, whitening has recently been used to transform output embeddings of models such as BERT (cf. Huang et al., 2021), before using them for downstream applications. For a given space X with covariance Σ and mean 0, there are many valid whitening transformations. The resulting matrix Y = W X must have the identity matrix I as its covariance, and the whitening transformation W must satisfy the condition: $$W^{T}W=\Sigma^{-1}.$$ $$({\mathfrak{I}})$$ Given that $\Sigma$ can be decomposed into: . $$\Sigma=D\Lambda D^{T},$$ $$(4)$$ Σ = DΛDT, (4) a valid $W$ can be found as follows: $$W=D\Lambda^{-\frac{1}{2}}D^{T}.$$ $$({\boldsymbol{5}})$$ ## 6.2 Cluster-Based Isotropy Enhancement We adopt this method from Rajaee and Pilehvar (2021a). The first step is to separate the provided data into clusters. In their paper, Rajaee and Pilehvar (2021a) use 27 clusters. We make the number of clusters dependent on the number of exampleswith too few examples in a single cluster, the concept of "isotropy" becomes meaningless, and it can lead to computation errors. Each cluster is meancentered, which is necessary for the subsequent steps. Then, PCA is applied to every cluster, and the top k principal components ("dominant directions") are zeroed out. We follow the original paper in setting k = 12. ## 6.3 Discussion The common thread of these methods is that they transform the output representations based on some set of encoded data. This means that either the transformation must be calculated anew for every set of data, or retained from a training set in order to apply it to new data. 
Though this is not ideal from | Model | Anisotropy | Tatoeba | STS | | | | |---------------------|--------------|-----------|-------|------|-------|------| | ar-en | es-en a) | es-en b) | tr-en | | | | | XLM-R | 0.92 | 50.35 | .114 | .04 | -.059 | .141 | | XLM-R, 18 dims rem. | 0.47 | 60.09 | - | - | - | - | | XLM-R + CBIE | −3.9 × 10−5 | 69.01 | .316 | .445 | .121 | .37 | | XLM-R + Whitening | 7.6 × 10−5 | 70.03 | .355 | .444 | .153 | .36 | | mBERT | 0.73 | 37.53 | .20 | .244 | .146 | .172 | | mBERT + CBIE | 5.7 × 10−5 | 45.79 | .25 | .403 | .15 | .217 | | mBERT + Whitening | −6.6 × 10−6 | 45.14 | .208 | .395 | .171 | .154 | | Multil. S-BERT | 0.35 | 85.17 | .772 | .779 | .235 | .762 | | S-BERT + CBIE | 5.8 × 10−5 | 86.36 | .722 | .742 | .233 | .724 | | S-BERT + Whitening | 0.0001 | 87.35 | .745 | .772 | .222 | .748 | an application perspective, we follow the approach of calculating the transformation for every new set of encoded data. The tasks in question do not use fine-tuning on any kind of training data, so we transform the embedded test data. An alternative would be to learn and retain a transformation based on some external dataset, then apply this to the task data. Such an approach would be especially helpful when doing inference on only a few queries at a time, or when the overhead of computing the transformation should be avoided at inference time. ## 6.4 Results After applying the transformations, we run our anisotropy analysis again. We also test Tatoeba and STS performance before and after the transformations. The results are listed in Table 5. For XLM-R, the transformations lead to a performance boost of almost 20 points on Tatoeba. Recall that removing the top dimensions improved accuracy by only around 10 points. For mBERT, which is more isotropic to begin with, the difference is only eight points. Other factors, such as a more complex misalignment of different languages, seem to be a bigger bottleneck for its performance. The multilingual S-BERT benefits very little from the isotropy-enhancing transformations. For STS, the multilingual S-BERT in fact performs better without the transformations. mBERT and XLM-R do benefit from the transformations to some degree: In most cases, there is a large improvement, particularly in XLM-R. For mBERT, the **es-en b)** subset only shows a small improvement, and the others benefit more from CBIE than from whitening. Rajaee and Pilehvar (2022) also test on STS, including the monolingual subsets. However, since they report Spearman correlations rather than Pearson, as well as using a different mBERT checkpoint than we do, the numbers are not directly comparable, and we do not show them in our table. The main takeaway here is that using the whitening transformation yields similar results overall to CBIE, and that both work to improve sentence-level representations for semantic similarity. Also, they both have little to no benefit in the S-BERT model, which was tuned with parallel data and is already much more isotropic. After the transformations, anisotropy scores are very close to zero; that is, the spaces are extremely isotropic. We can also see this in the t-SNE visualisations of these spaces, see § 7. However, applying the outlier definition of three times the standard deviation, we still find outlier dimensions in the transformed spaces. These all have very small magnitude, and are not necessarily related to the dimensions that were outliers before. 
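The outlier check referred to here can be operationalised as in the sketch below. Note that this is one plausible reading of the three-standard-deviation criterion, computed over the components of the mean sentence embedding; it is not necessarily identical in every detail to our analysis scripts.

```python
import numpy as np

def outlier_dimensions(sent_embs: np.ndarray, factor: float = 3.0) -> np.ndarray:
    """Flag dimensions whose average value across sentences deviates from
    the mean of all dimension averages by more than `factor` std. devs."""
    dim_means = sent_embs.mean(axis=0)            # one value per dimension
    mu, sigma = dim_means.mean(), dim_means.std()
    return np.where(np.abs(dim_means - mu) > factor * sigma)[0]
```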
Since the transformations are not deterministic, these outlier dimensions can also change when recalculating the transformed spaces. Therefore, we do not consider these dimensions true outliers. In an (artificially) highly isotropic space, the traditional outlier definition of larger than three standard deviations may simply not apply. ## 7 Embedding Space Visualisation To visualise the representation space, we use t-SNE (van der Maaten and Hinton, 2008). First, we apply a PCA dimensionality reduction to 50 dimensions. Then, we reduce the dimensionality further using ![8_image_0.png](8_image_0.png) t-SNE and plot the space in two dimensions. In Figure 3, we show examples from Tatoeba data in XLM-R: Arabic, Bengali, and German. For the first two, accuracy increased by more than 20 points after the transformation, while German is already a high-resource language where accuracy only increased by around 5 points. Since CBIE and Whitening produce very similar visualisations, we only show CBIE. The unmodified spaces very clearly show the problem of internal misalignment between different languages in the model, which disproportionately affects languages with less pre-training data and/or non-Latin scripts. With Arabic-English and Bengali-English, the source and target language spaces are almost disjunct. This issue can be addressed using isotropy-increasing transformations, but they do not solve the problem entirely. For instance, the unmodified sub-spaces of Bengali and English also have markedly different shapes, despite representing a set of parallel sentence pairs. Matching the equivalent sentences to each other starting from such different spaces is more complex than merely applying a linear transformation to increase isotropy. ## 8 Conclusions We have analysed how outlier dimensions and anisotropy interact with cross-lingual semantic similarity tasks in pre-trained multilingual language models. In particular, we focused on the sentence representations of multilingual BERT and XLM-R, comparing them to the sentence representations of a multilingual S-BERT model—essentially a modified XLM-R trained with parallel data to optimise for sentence representations. We employed a range of methods on several different tasks to approach the question from multiple angles. The simplest method of increasing isotropy is removing the largest (outlier) dimensions from the sentence embeddings. We compared the results of this with further-reaching isotropy-increasing transformations. Additionally, we examined how changing the representations affected anisotropy measures and outlier dimensions. Finally, we plotted unmodified and transformed sentence representation spaces to illustrate how anisotropy is one aspect that affects sentence similarity, but reducing it does not resolve all issues in the space. Future Work. Potential future research questions include: Are outliers and anisotropy also relevant when using *fine-tuned* models for cross-lingual transfer? Do larger, particularly generative models, have these issues affecting cross-lingual similarity? Are the pre-training dynamics of anisotropy in multilingual models similar to those of monolingual models? How can we train multilingual models to avoid a degenerating representation space? ## Limitations This paper examines the anisotropy and outlier phenomenon only for a few, relatively similar, models. 
The isotropy-increasing transformations are nondeterministic and have to be calculated post-hoc based on some set of embedded data, which may not be practical for applications where inference is done on individual or small batches of examples. Since we specifically consider sentence representations, we first average over word embeddings before calculating the mean and standard deviation for outlier analysis. This in effect reduces the sample size and leads to a smaller standard deviation, making our analysis more sensitive to even slight outlier dimensions. Another reason to work with relatively small datasets is to make computing the transformations simple and fast, but this may limit the ability of these transformations to generalise. ## Acknowledgements This publication was supported by LMUexcellent, funded by the Federal Ministry of Education and Research (BMBF) and the Free State of Bavaria under the Excellence Strategy of the Federal Government and the Länder; and by the German Research Foundation (DFG; grant FR 2829/4-1). The work at CUNI was supported by Charles University project PRIMUS/23/SCI/023, and by the European Commission via its Horizon research and innovation programme (No. 870930 and 101070350). ## References Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. *Transactions* of the Association for Computational Linguistics, 7:597–610. Yelysei Bondarenko, Markus Nagel, and Tijmen Blankevoort. 2021. Understanding and overcoming the challenges of efficient transformer quantization. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7947–7969, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In *Proceedings* of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Yue Ding, Karolis Martinkus, Damian Pascual, Simon Clematide, and Roger Wattenhofer. 2022. On isotropy calibration of transformer models. In *Proceedings of the Third Workshop on Insights from Negative Results in NLP*, pages 1–9, Dublin, Ireland. Association for Computational Linguistics. Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics. Cornelia Ferner and Stefan Wegenkittl. 2022. Benefits from variational regularization in language models. Machine Learning and Knowledge Extraction. Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tieyan Liu. 2019. Representation degeneration problem in training natural language generation models. In *International Conference on Learning Representations*. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. *CoRR*, abs/2003.11080. Junjie Huang, Duyu Tang, Wanjun Zhong, Shuai Lu, Linjun Shou, Ming Gong, Daxin Jiang, and Nan Duan. 2021. WhiteningBERT: An easy unsupervised sentence embedding approach. In *Findings of the* Association for Computational Linguistics: EMNLP 2021, pages 238–244, Punta Cana, Dominican Republic. Association for Computational Linguistics. Euna Jung, Jungwon Park, Jaekoel Choi, Sungyoon Kim, and Wonjong Rhee. 2023. Isotropic representation can improve dense retrieval. In *Advances* in Knowledge Discovery and Data Mining, page 125–137, Cham. Springer Nature Switzerland. Taeuk Kim, Kang Min Yoo, and Sang-goo Lee. 2021. Self-guided contrastive learning for BERT sentence representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2528–2540, Online. Association for Computational Linguistics. Olga Kovaleva, Saurabh Kulshreshtha, Anna Rogers, and Anna Rumshisky. 2021. BERT busters: Outlier dimensions that disrupt transformers. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 3392–3405, Online. Association for Computational Linguistics. Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119–9130, Online. Association for Computational Linguistics. Yuxin Liang, Rui Cao, Jie Zheng, Jie Ren, and Ling Gao. 2021. Learning to remove: Towards isotropic pretrained BERT embedding. In *Artificial Neural Networks and Machine Learning - ICANN 2021*, page 448–459, Cham. Springer International Publishing. Ziyang Luo, Artur Kulmizev, and Xiaoxi Mao. 2021. Positional artefacts propagate through masked language model embeddings. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5312–5327, Online. Association for Computational Linguistics. Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations. In International Conference on Learning Representations. Giovanni Puccetti, Anna Rogers, Aleksandr Drozd, and Felice Dell'Orletta. 
2022. Outlier dimensions that disrupt transformers are driven by frequency. In *Findings of the Association for Computational Linguistics:* EMNLP 2022, pages 1286–1304, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Sara Rajaee and Mohammad Taher Pilehvar. 2021a. A cluster-based approach for improving isotropy in contextual embedding space. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 575–584, Online. Association for Computational Linguistics. Sara Rajaee and Mohammad Taher Pilehvar. 2021b. How does fine-tuning affect the geometry of embedding space: A case study on isotropy. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 3042–3049, Punta Cana, Dominican Republic. Association for Computational Linguistics. Sara Rajaee and Mohammad Taher Pilehvar. 2022. An isotropy analysis in the multilingual BERT embedding space. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 1309–1316, Dublin, Ireland. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics. Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842–866. William Rudman, Nate Gillman, Taylor Rayne, and Carsten Eickhoff. 2022. IsoScore: Measuring the uniformity of embedding space utilization. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3325–3339, Dublin, Ireland. Association for Computational Linguistics. Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou. 2021. Whitening sentence representations for better semantics and faster retrieval. *ArXiv*, abs/2103.15316. William Timkey and Marten van Schijndel. 2021. All bark and no bite: Rogue dimensions in transformer language models obscure representational quality. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 4527–4546, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Laurens van der Maaten and Geoffrey E. Hinton. 2008. Visualizing data using t-SNE. *Journal of Machine* Learning Research, 9:2579–2605. Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. ConSERT: A contrastive framework for self-supervised sentence representation transfer. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5065–5075, Online. Association for Computational Linguistics. Sangwon Yu, Jongyoon Song, Heeseung Kim, Seongmin Lee, Woo-Jong Ryu, and Sungroh Yoon. 2022. 
Rare tokens degenerate all tokens: Improving neural text generation via adaptive gradient gating for rare token embeddings. In *Proceedings of the 60th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 29–45, Dublin, Ireland. Association for Computational Linguistics. Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O. Arnold, and Bing Xiang. 2021. Pairwise supervised contrastive learning of sentence representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5786–5798, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Rui Zhang, Yangfeng Ji, Yue Zhang, and Rebecca J. Passonneau. 2022. Contrastive data and learning for natural language processing. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts, pages 39–47, Seattle, United States. Association for Computational Linguistics. Zhong Zhang, Chongming Gao, Cong Xu, Rui Miao, Qinli Yang, and Junming Shao. 2020. Revisiting representation degeneration problem in language modeling. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 518–527, Online. Association for Computational Linguistics. Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2018. Overview of the third BUCC shared task: Spotting parallel sentences in comparable corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). ## A Xlm-R Mean Embeddings Of Tatoeba In All Layers See Figure 4. ![12_image_0.png](12_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations; Section 6 ✗ A2. Did you discuss any potential risks of your work? We do not see additional risks of this work beyond pre-trained multilingual language models in general. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 for dataset description; in Section 6 we cite implementations from previous work ✓ B1. Did you cite the creators of artifacts you used? yes, as above ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? we do not release any artifacts at this point ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? we do not release artifacts at this point; existing artifacts were released for further research B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. 
Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 ## C ✓ **Did You Run Computational Experiments?** Sections 4-7 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? the majority of the experiments were a) small scripts run on CPU, b) done on many different machines c) done by different authors. we did try to avoid recomputing embeddings if possible The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Sections 4, 5, 6 ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? cosine similarity is deterministic, so the results would have been the same ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Sections 4-7 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
hirsch-etal-2023-revisiting
Revisiting Sentence Union Generation as a Testbed for Text Consolidation
https://aclanthology.org/2023.findings-acl.440
Tasks involving text generation based on multiple input texts, such as multi-document summarization, long-form question answering and contemporary dialogue applications, challenge models for their ability to properly consolidate partly-overlapping multi-text information. However, these tasks entangle the consolidation phase with the often subjective and ill-defined content selection requirement, impeding proper assessment of models' consolidation capabilities. In this paper, we suggest revisiting the sentence union generation task as an effective well-defined testbed for assessing text consolidation capabilities, decoupling the consolidation challenge from subjective content selection. To support research on this task, we present refined annotation methodology and tools for crowdsourcing sentence union, create the largest union dataset to date and provide an analysis of its rich coverage of various consolidation aspects. We then propose a comprehensive evaluation protocol for union generation, including both human and automatic evaluation. Finally, as baselines, we evaluate state-of-the-art language models on the task, along with a detailed analysis of their capacity to address multi-text consolidation challenges and their limitations.
# Revisiting Sentence Union Generation As A Testbed For Text Consolidation Eran Hirsch1 Valentina Pyatkin1 **Ruben Wolhandler**1 Avi Caciularu1 Asi Shefer2**Ido Dagan**1 1 Bar-Ilan University 2 One AI eran.hirsch@biu.ac.il dagan@cs.biu.ac.il ## Abstract Tasks involving text generation based on multiple input texts, such as multi-document summarization, long-form question answering and contemporary dialogue applications, challenge models for their ability to properly *consolidate* partly-overlapping multi-text information. However, these tasks entangle the consolidation phase with the often subjective and illdefined content selection requirement, impeding proper assessment of models' consolidation capabilities. In this paper, we suggest revisiting the *sentence union* generation task as an effective well-defined testbed for assessing text consolidation capabilities, decoupling the consolidation challenge from subjective content selection. To support research on this task, we present refined annotation methodology and tools for crowdsourcing sentence union, create the largest union dataset to date and provide an analysis of its rich coverage of various consolidation aspects. We then propose a comprehensive evaluation protocol for union generation, including both human and automatic evaluation. Finally, as baselines, we evaluate state-of-theart language models on the task, along with a detailed analysis of their capacity to address multi-text consolidation challenges and their limitations.1 ## 1 Introduction In order to acquire knowledge on a new subject or find answers to complex questions, it is often necessary to consult multiple sources of written information. While information provided in a single document is usually consistent, textual materials from various sources often use different language expressions, which may vary in terms of level of specificity, to convey similar information. An illustration of this phenomenon can be seen in Figure 1. In this paper, we aim to address the process of combining such multiple partially overlapping textual 1Our data and code is available at: https://github.com/ eranhirs/sentence_union_generation [S1] The fire has destroyed a large section of the store and fire crews and investigators are still on the scene. [S2] *A FIRE has badly damaged the Waitrose supermarket* in Wellington's High Street. [Union] The fire has destroyed a large section of the ![0_image_0.png](0_image_0.png) Waitrose supermarket in Wellington's High Street *and fire* crews and investigators are still on the scene. Figure 1: An example of a sentence pair and its union sentence. Information that must be included in the union is highlighted differently for each sentence (*green* and purple for sentences 1 and 2, respectively), unless the information is paraphrastic (equivalent) between the two sentences, which is then highlighted by the same color (*blue*). Non-highlighted information indicates that there is corresponding information in the other sentence that is more specific. sources into a single unified and comprehensive format, to which we refer as *text consolidation*. Text consolidation plays a crucial role in almost any text-based information access application, such as Multi-Document Summarization (MDS) (Fabbri et al., 2019; Giorgi et al., 2022), long-form question answering (Fan et al., 2019; Nakano et al., 2022), and contemporary dialogue applications (Thoppilan et al., 2022; OpenAI, 2023). 
It is important to point out here that content selection and consolidation manifest two distinct sub-tasks in such applications, where the former involves identifying the sought information in the source texts, based on considerations such as salience and user needs. Consolidation, on the other hand, involves merging the selected information into a coherent output text. Accordingly, we suggest that each sub-task deserves separate investigation, while focusing in this paper on the consolidation task, manifested as information union. This approach enables targeted investigation of information union capabilities of models, while enabling modular architectures, where an effective information consolidation model can be paired with different content selection models and strategies, whether fully-automatic or interactively involving a user in the loop. To achieve a more controlled research environment, a sentence fusion task was introduced, which fuses a set of sentences into a single sentence (Barzilay et al., 1999; Thadani and McKeown, 2013; Agarwal and Chatterjee, 2022). However, being similar to summarization, the general sentence fusion task is ill-defined, because it allows for *subjective* salience-based content selection decisions (Daume III and Marcu, 2004; Krahmer et al., 2008). In contrast, the sentence union generation task is strictly defined as generating a sentence that contains *exactly all* information from the source sentences (see Fig. 1). While identifying the union task to be more attractive due to its more *objective* and semantically challenging nature, we found that datasets for this topic are relatively scarce (McKeown et al., 2010; Geva et al., 2019; Lebanoff et al., 2020), none of them sufficiently addressing the text consolidation setting. Consequently, we revisit the sentence union generation task and propose that it can be used as an effective generic testbed for text consolidation. Compared to the sentence intersection task, the union task is more challenging, as it requires merging both joint and disjoint information in the output and hence provides a more complete testbed for text consolidation. Our input format is rich and challenging enough, as shown in our analyses, to support research on information merging models. Further, this setting may already be of practical use for downstream text generation tasks, for example when combined with sentence compression or decontextualization models. Our contributions are outlined as follows: (1) we suggest focusing on sentence union generation as a resource for studying cross-text consolidation capabilities, and point out that properly identifying informational relations between pairs of sentences is necessary for proper consolidation; (2) we provide the largest union fusion dataset to date, while proposing a controlled annotation protocol and interface for careful creation of a sentence union corpus; (3) we suggest evaluation protocols to assess the quality of a generated sentence union, accompanied by automatic metrics that can be used for comparing multiple systems; (4) we provide empirical results on the abilities of prominent neural generative models to address the union task, assessing their capabilities and limitations. ## 2 Background In Multi-Document Summarization (MDS) (Narayan et al., 2018; Fabbri et al., 2019) multipletexts are summarized into a single, shorter text. 
In a more controlled variant of MDS, the task requires the fusion of partly-overlapping sentences (Barzilay et al., 1999; Thadani and McKeown, 2013; Agarwal and Chatterjee, 2022). Generally, the sentence fusion task included a saliency detection (or importance) component which requires identifying which pieces of information to preserve in the fused output. As a result, sentence fusion is generally ill-defined, as different possible content selections may be valid, making the task subjective to varying necessities of a user (Daume III and Marcu, 2004; Krahmer et al., 2008). Its output could be seen as covering a "loose" intersection of the content of two sentences. McKeown et al. (2010) on the other hand, to ensure more consistent fusion settings, makes a distinction between two strict variants of the task: sentence intersection and sentence union generation. Given two (or a set of source sentences), their intersection is a sentence that contains only information that is *common* to both source sentences, while their union is a sentence that contains all information from the source sentences. As we will see in §3, these tasks can indeed be formulated in strict entailment terms. McKeown et al. (2010) crowdsourced a dataset of 300 examples for sentence intersection and sentence union, but subsequent works mostly focused on the intersection fusion part of the dataset (Thadani and McKeown, 2011; Fuad et al., 2019). Further, their dataset size is relatively small and primarily intended for evaluation purposes, making it inadequate for partitioning into a training dataset for fine-tuning large language models. While McKeown et al. (2010) used similar sentences, whose contents partly overlap, as input, later works researched the union of disparate sentences (Geva et al., 2019; Lebanoff et al., 2021) where contents are disjoint. This does not address the challenge of consolidating partly overlapping texts. In this work, we chose sentence union as a more complete testbed for multi-text consolidation. We see our work as a continuation of the work by McKeown et al. (2010), and complementary to works that introduced fusion datasets for disparate sentences. Our work further relates to a line of research that focuses on objective generation of text. Castro Ferreira et al. (2020) introduced a data-to-text generation task, wherein knowledge graph triplets describing facts are transformed into natural language text. While there are many possible realizations of the knowledge graph into natural language, the task is semantically objective, with respect to the informational content expected in the output, and is hence similar to the sentence union task. Recently, Slobodkin et al. (2022) introduced a new controlled text reduction task: given an input document with highlighted spans, the task is to generate a summary in which only the information covered in the highlighted spans is included, which could be compared to a highlight union task. Compared to our work, the spans that they used all appear in a single document, which makes it more similar to datasets which fuse disparate sentences. ## 3 Task Formulation The input for our sentence union task consists of two related sentences whose content partly overlap. The output union is then defined as a single sentence that follows two conditions: (a) it contains exactly the information from the two input sentences, and (b) it does not include any redundancies in its content. 
Condition (a) implies that there cannot be any information missing from the union that is mentioned in the source sentences, while at the same time the union cannot contain information that is not mentioned in the source sentences (i.e., hallucinations). Condition (b) implies that the union must avoid repetition of any units of information stemming from the source sentences, even if they are conveyed in different lexical terms. Notably, the semantic content of the output union (condition (a)) can be defined objectively in strict textual entailment terms. Formally, given an input of two related sentences s1 and s2, and their union u, u should satisfy u |= s1 , u |= s2 and s1 + s2 |= u, where |= denotes textual entailment and + denotes concatenation of the two sentences. This definition, however, does not cover condition (b) of avoiding redundancies. Identifying relevant informational links is crucial for producing a union, as demonstrated by the example in Fig. 2. We observe three types of relations between information units in the source sentences that affect the content of the resulting unit: (1) equivalent content, (2) uni-directional entailing content, and (3) disjoint content. Equivalent content, such as lexical equivalence or paraphrases, needs to be identified and included exactly once in the union to avoid redundancy. Uni-directional entailing content pertains to aligned text spans where one span can be implied from the other. In this case, only the entailing text unit should be included: including both spans would be redundant, while including only the less specific mention would result in missing information. Disjoint content must be included in the union as it provides distinct information not mentioned in the other sentence. For example, in Fig.2, sentence 1 mentions the reason for firing Weightman while sentence 2 mentions that Harvey resigned, each providing distinct information. In addition, according to our annotation scheme, we assume that the date of the publication is known, which means that when a phrase such as "the previous Thursday" is mentioned, we can infer the specific date. Thus, the text spans "On March 1st" and "the previous Thursday" are equivalent, while "Francis Harvey" in sentence 1 is more specific than the text span "Harvey" in sentence 2. By considering these three types of relations, a proper union can be produced. As noted earlier, we see the union generation task as a more comprehensive setup for information consolidation than the *intersection* generation task2. This is because the union output should combine all the content from both source sentences, while the output of the intersection task does not include information mentioned in only one of the sentences. As a result, the union is more informative than the intersection, which makes it more representative for downstream multi-text tasks requiring information consolidation, aiming to create an efficient, nonrepetitive output text. ## 4 Dataset 4.1 Data Sources Annotating a text consolidation sentence union dataset requires a collection of *related* sentences, as input, as seen in Fig. 1. Specifically, we require naturally occurring sentences with some semantic overlap, where different types of informational relations are present. Note that we do not consider sentences with no content overlap as relevant for our dataset. 2The information content for the intersection task can also be defined in strict textual entailment terms. 
Formally, for the intersection i of the two sentences s1 and s2, it is required that s1 |= i , s2 |= i and for all i ∗such that s1 |= i ∗, s2 |= i ∗, then i |= i ∗. ![3_image_0.png](3_image_0.png) To that end, we use the dataset created by Weiss et al. (2021), which includes pairs of relevant sentences with high semantic overlap. Their dataset was curated by identifying information overlap between sentences, based on the repurposing of existing human annotations. This approach is preferable to using models that identify semantic overlap, such as Thadani and McKeown (2013), since it introduces less bias to the dataset. The original datasets from which they sourced the sentences include: (1) the Event Coreference Bank (ECB+, an extension over ECB) (Cybulska and Vossen, 2014), which provides annotations for coreferring event and entity mentions, (2) MultiNews (MN) (Fabbri et al., 2019), which contains clusters of news articles along with human-written summaries, and (3) The Document Understanding Conference (DUC) and the Text Analysis Conference (TAC)3, both providing MDS evaluation datasets. ## 4.2 Annotating Sentence Union The process of writing a sentence union involves carefully tracking information units and blending them together to form the output, as outlined in §3. We introduce an elaborate crowdsourcing approach and interface (see Figure 3) for annotating union datasets at a large scale, which splits the annotation process into multiple steps. Starting with the two source sentences, the first step is to choose one sentence as the *base sentence*, 3https://duc.nist.gov/ , https://tac.nist.gov/ ![3_image_1.png](3_image_1.png) that will be used as the basis for generating the sentence union, depicted in (Fig. 3, [1]). Our early experiments have shown that it is easier to merge the information from one sentence by adding it to the other sentence than write a merged sentence from scratch. We instruct the workers to choose the more detailed sentence as the base sentence, since this sentence would usually require less edits when merging into it information from the other sentence. In the other sentence, termed the *integrated sentence*, the worker has to highlight which spans they would like to integrate into the base sentence (Fig. 3, [2]). Finally, in the writing step, the worker blends the highlighted spans into the base sentence, thus creating the sentence union (Fig. 3, [3]). To optimize the diversity of inputs within our dataset while considering our annotation budget, each example was assigned to a single annotator. ![4_image_0.png](4_image_0.png) Table 1: Sizes of the splits of our dataset, as well as of the skipped examples (19.3% of Weiss et al. (2021)). To ensure the quality in annotators' decisions, our process follows the controlled crowdsourcing approach (Roit et al., 2020). See App. C for more details and screenshots of the entire annotation process. Skipping examples In certain cases, it may not be possible to generate a coherent sentence union from a pair of sentences, and annotators were given the option to skip such examples. A comprehensive analysis of these skipped cases is presented in Appendix A. Mainly, our findings indicate that the dataset from which we derived our data(Weiss et al., 2021), and was primarily designed for proposition alignment, contains many sentence pairs that are not sufficiently related to each other and hence are not suitable for producing a meaningful union. 
Subtle annotation cases In addition to the aforementioned instructions, we took into consideration a few prominent special cases concerning the source sentences that would affect the resulting sentence union. Such cases include the need for world knowledge, temporal issues, subjectivity and attribution. For examples and guidelines provided to the workers for such cases, refer to App. B. ## 4.3 Cleaning Annotations In order to ensure a high quality dataset, we introduced a post-processing step in which we either removed or manually edited examples matching specific filtering criteria. Filtering included finding non-overlapping input sentences based on their output union (i.e., the output was a simple concatenation of the two source sentences), as well as automatically identifying and manually reviewing subtle annotation cases described in App. B. For more details, see App. D. ## 5 Dataset Analysis And Assessment In the following subsections, we report various analyses of the quality and other properties of our dataset. Dataset split statistics appear in Table 1. Our approach yielded a test dataset comprising of 477 instances, a sample size which is reasonable in light of the confidence intervals outlined in §8. | Datasets | Coverage | Faithfulness | Redundancy | |-----------------------|------------|----------------|--------------| | Ours | 98.3% | 99.8% | 99.8% | | McKeown et al. (2010) | 96.5% | 99.5% | 98.6% | Table 2: Evaluation of union quality. Moreover, our analysis of learning curves (see Appendix G) suggests that the size of our training dataset is sufficient, and further expansion may not yield significant benefits. ## 5.1 Sentence Union Quality To estimate the reliability of our dataset, we have conducted a human assessment on a sample of 100 examples of sentence unions generated by our annotators. Our goal is to check whether the sentences in the dataset objectively fulfill the union requirements defined in Sec. 3. For this purpose we designed two evaluation criteria for content (coverage, *faithfulness*), and one criterion for finding redundancies (*redundancy*). In addition, we evaluate the fluency of the generated sentence, as commonly done for generation tasks. - **Coverage:** Does the sentence union contain all information expressed in the source sentences? - **Faithfulness:** Does the sentence union describe only information expressed in the source sentences? - **Redundancy:** Does the sentence union redundantly repeat some information? - **Fluency:** Does the sentence union progresses fluently, form a coherent whole and is easy to understand? The content criteria resemble closely those used for data-to-text generation tasks (Castro Ferreira et al., 2020) which also require exact content matching between their input and output. We add another criterion for evaluating redundancies, as our input does include redundancies which needs to be avoided in the output. As a simple way to measure the content criteria, we count the number of content words4involved in pieces of information that are missing from the sentence union, or are unfaithful to the source sentences. For example, if the sentence union in Fig 2 would not mention the name *"Nick Jones"*, which was mentioned in sentence 2, we count this as 2 4We removed stop words using www.nltk.org. misses. A more complicated example would be if the sentence union attributes *"Nick Jones"* to the wrong entity, such as "FBI Deputy Director Nick Jones". 
In such case, we consider the entire span (5 words) as missing, as well as unfaithful. Note that faithfulness can be seen as symmetrical to coverage, where we simply count content words in the sentence union that are not supported in the source sentences. Similarly, for the redundancy score, we count the number of content words involved in pieces of information that are redundant in the union. For example, in the phrase "Thursday overnight at 2:09am", the phrase *"overnight"* is considered redundant, and we will count 1 redundant word. We did not notice any fluency issues in the sentence unions created by the workers, as may be naturally expected given the high quality of our selected workers. We start by counting the number of content words in all of the sentence unions in our sample, which adds up to 2372 content words, termed w*total*. Then, to create a *coverage* score, the count of missing content words is termed w*missing*, and the coverage score is calculated as w*total* wtotal+w*missing* . To create a *faithfulness* and *redundancy* scores, we calculate 1− wunfaithful w*total*and 1− wredundant w*total*, respectively, where w*unfaithful* is the number of unfaithful words and w*redundant* is the number of redundant words. Results for these metrics are available in Table 2. Overall, coverage issues were encountered in 8 examples out of 100, faithfulness and redundancy issues in one example each. Quality comparison to the prior dataset We compare our dataset to the McKeown et al. (2010) dataset of 300 sentence unions examples. In their annotation process, 5 workers annotated each pair of sentences, and then a single sentence union out of the 5 was automatically chosen as a representative. We evaluated a sample of 20 such representative sentence unions and used the same quality metrics that were used in our dataset quality analysis, reported in Table 2. We conclude that our controlled process, which separates the identification of informational relations from the writing phase, results in higher quality sentence unions, making significantly less coverage and redundancy mistakes, which are often due to lack of attention to details. For the faithfulness criterion, both approaches achieved similar high scores, which is expected since humans are not prone to hallucinate when editing a sentence. Overall, our annotation ![5_image_0.png](5_image_0.png) process achieves slightly better results, while employing only one worker instead of five. ## 5.2 Dataset Compression Rate Our motivation for the union task is to develop models that can consolidate information from naturally occurring texts with varying degrees of overlapping information. Hence, in order to assess the diversity of our dataset with respect to the degree of such information overlap, we suggest to compute and analyze the *Compression Rate* (CR) in our instances, which measures in our setting the amount of redundancies (unlike the data-to-text setting) between the two source sentences5. By design, a CR of 100% would imply that a single source sentence contains all of the information in both source sentences, which means that the other sentence is completely redundant. A CR of 0% would imply that there is no redundancies between the source sentences. Denoting our two input sentences short and long, per their lengths, as well as the union sentence, and following the rationale above, the compression rate is calculated as the amount of information that is eliminated from the shorter sentence. 
Formally, we have CR(short, long, union) = 1 − |union|−|long| |short|, counting sentence length by content words. As can be seen in Fig. 4, our dataset supplies a variety of examples in terms of CR for every split. We report an average CR score of 60.82±0.67 for our dataset and an average CR score of 65.62±1.35 for McKeown et al. (2010). These results imply that our dataset on average contains somewhat less 5In the union task, compression refers only to the merging of redundancies across the source sentences. overlap between the source sentences, overall includes a large variety of redundancy levels. ## 5.3 Informational Relations Analysis Complementary to the analysis in §5.2, naturally occurring texts can include a wide variety of crosstext informational relations, as described in §3. For this reason, we analyzed the frequency of the more challenging relations necessary to generate proper sentence union. Our analysis includes a sample of 30 sentence pairs from our dataset. On average, a sample of 10 examples is expected to include 17 "paraphrastic uni-directional entailment" relations (a uni-directional entailment which differs lexically), such as *"supermarket"* entailing "store", or *"gave interviews on NBC's today"* entailing *"appearance on NBC's today"*. As described in §3, such examples challenge a consolidation model to include only the *entailing* expression in the output. In addition, such a sample is expected to include 21 paraphrastic equivalence relations. These challenge the model to include only one of the equivalent expressions in the output, to avoid repetition. Overall, these statistics assess the abundant semantic challenges posed by our dataset. ## 6 Baseline Models We present baseline models, aiming to test neural pretrained language models' for their ability to implicitly recognize relevant informational relations between input sentences and properly create their union. Fine-tuned models As our first type of baseline we fine-tune a large pre-trained sequenceto-sequence model using our data. To that end, we picked two strong models: T5*large* (Raffel et al., 2019), which is commonly applied to endto-end text generation tasks (Chen et al., 2020), and PRIMERA (Xiao et al., 2022), which was pretrained in a cross-document fashion (Caciularu et al., 2021) and achieves state-of-the-art results over multi-document summarization datasets. This makes this model appealing for our sentence fusion task, where the two sentences originate in different documents. See App. F for information about training details. In-context learning Another current baseline approach is in-context learning, in which the instructions and examples to the task are provided as input (the prompt) at inference time to very large pre- ![6_image_0.png](6_image_0.png) trained language models. We used *GP T*3 (Brown et al., 2020), specifically *text-davinci-003*. The instructions we initially used were similar to those given to the annotators. We then optimized the prompt by running it on the training dataset and manually identifying mistakes. The identified mistakes were added to the prompt as examples. In addition, we added to the instructions "important" notes to what the model should pay attention to. See App. E for the complete final prompt and configuration used. 
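For reference, the compression-rate statistic CR from §5.2 (and hence the ∆CR metric used in the evaluation below) can be computed as in the following sketch. The tokenisation and stop-word filtering shown here are illustrative choices (we use NLTK stop words), not a full specification of our preprocessing.

```python
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# Requires: nltk.download("punkt"); nltk.download("stopwords")
STOP = set(stopwords.words("english"))

def content_words(sentence: str):
    """Lower-cased alphanumeric tokens with stop words removed."""
    return [t for t in word_tokenize(sentence.lower())
            if t.isalnum() and t not in STOP]

def compression_rate(sent1: str, sent2: str, union: str) -> float:
    """CR(short, long, union) = 1 - (|union| - |long|) / |short|,
    where |.| counts content words and short/long are chosen by length."""
    n1, n2, nu = (len(content_words(s)) for s in (sent1, sent2, union))
    n_short, n_long = min(n1, n2), max(n1, n2)
    return 1.0 - (nu - n_long) / n_short
```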
## 7 Model Evaluation Protocols We evaluate our baseline systems both through human evaluation (§7.1) and with automatic metrics (§7.2) suitable for the task, which can generally be used in the development cycles of union generation systems (§7.2). ## 7.1 Human Evaluation The human evaluation is conducted over the predicted unions for the test set for each of the baseline models. Instead of judging the generated sentence union for each baseline system separately, the evaluation is done in a comparative fashion, following previous works where the evaluator sees together the outputs of all baseline systems (Callison-Burch et al., 2007; Novikova et al., 2018). Similar to the analysis of the dataset quality in §5, we are interested in evaluating the coverage, faithfulness, redundancy and fluency of the predicted union, this time in a manner that fits crowdsourced human evaluation. Content and redundancy are scored on a scale from 1 to 4 (higher is better), described in Table 3. This scale is inspired by the Semantic Textual Similarity human evaluation approach (Agirre et al., 2013), which also tests for information overlap. For the fluency score, we use a common Likert scale from 1 to 5 (Fabbri et al., 2021). See App. H for details and screenshots. As there exist trade-offs between the two content measures and the redundancy measure, we add an additional measure which evaluates *consolidation* ![7_image_0.png](7_image_0.png) as a whole. For example, by arbitrarily adding more information to the union we can increase the coverage, but also risk increasing redundancies and unfaithfulness. The *consolidation* measure simply averages the three aforementioned measures, thus testing for overall text consolidation quality. ## 7.2 Automatic Evaluation In line with previous works in text generation, we report the ROUGE metric between the reference union and the predicted union. However, like for most generation tasks, ROUGE will unfairly penalize correct but paraphrastic sentence unions (as described in §3). To partly address this issue, we add another automated metric which tests for bi-directional textual entailment (aka NLI), comparing the reference union sentence to the predicted union sentence, requiring entailment in both directions. Specifically, we use the DeBERT a*xxlarge*v2 model (He et al., 2020), finetuned with the MNLI task (Williams et al., 2017) and a threshold of 0.5. While both metrics test for content matching, they would not penalize a model that bluntly concatenates the two input sentences. Therefore, we also report ∆CR (§5.2), calculated as the average difference between the CRs of the predicted vs. the reference union sentences (the latter is subtracted from the former), on each instance. A positive value thus indicates that the model compression rate is higher than that of the reference union, while a negative value indicates the opposite (model compresses less than the reference). ## 8 Results And Analysis 8.1 Human Evaluation Of The Models Results are presented in Table 4, and example generations with their respective scores are provided in App. I. The trade-off mentioned in §7.1 between increasing coverage while still remaining faithful and without redundancies is evident in the results of T5*large* and *GP T*3. PRIMERA comes out as a slightly better model, as it achieves the highest consolidation score, with yet a lot of room for improvement. 
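To make the automatic content metric of §7.2 concrete, the bidirectional entailment check can be sketched as follows; the exact Hugging Face checkpoint name and the label lookup are assumptions of this illustration rather than a precise record of our setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint name for an MNLI-finetuned DeBERTa model.
NAME = "microsoft/deberta-v2-xxlarge-mnli"
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME).eval()

def entail_prob(premise: str, hypothesis: str) -> float:
    """Probability that `premise` entails `hypothesis`."""
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    # The entailment label index depends on the checkpoint's label mapping.
    ent_idx = {k.lower(): v for k, v in model.config.label2id.items()}["entailment"]
    return probs[ent_idx].item()

def nli_match(reference: str, prediction: str, threshold: float = 0.5) -> bool:
    """Bi-directional entailment between reference and predicted union."""
    return (entail_prob(reference, prediction) > threshold
            and entail_prob(prediction, reference) > threshold)
```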
To get a better sense of the absolute performance of the union sentences generated by the baseline models, we compare them to two naive models which output: (1) the concatenation of the source sentences (no avoidance of *redundancy*), and (2) the longer sentence (no attempt to consolidate and cover information from the other sentence). Based on evaluation of 50 examples completed by the authors, we report an average redundancy score of 1.6±.1 for the concatenation and an average coverage score of 2.3±.1 for the longer sentence. As reported below, all our baseline models outperform these naive models by a large margin. Further, we draw a plot (Fig. 5) of the minimal system score amongst the three component measures that the consolidation measure combines. We note that even for the best model, PRIMERA, only 29.7% of the predictions are fully correct with respect to content and redundancy, another 40.6% examples include minor errors, and 26% examples contain substantial errors in at least one of the measures, indicating the limitations of current models. ## 8.2 Automatic Evaluation Of The Models While automatic metrics are clearly less reliable than human metrics, they can be useful for development cycles. The automatic metric results are also reported in Table 4, observing that both the ROUGE1 score is highest for PRIMERA, while the NLI score is highest for *GP T*3. The ∆CR scores roughly correlate with the combination of coverage and redundancy detected in the human evaluation, where both lower coverage (undesired) and lower redundancy (desired) increase compression rate. To identify the potential utility of our automatic metrics, we follow the standard practice (Fabbri et al., 2021) and calculate a Kendall τ coefficient (McLeod, 2005) between the human and automatic evaluation results. Our results show that *ROUGE*1 | Coverage | Faithfulness | Redundancy | Consolidation | Fluency (1 to 5) | ROUGE1 | NLI | ∆CR | | |------------|----------------|--------------|-----------------|--------------------|----------|----------|-----------|-----------| | (1 to 4) | (1 to 4) | (1 to 4) | (1 to 4) | | | | | | | PRIMERA | 3.2 | 3.7 | 3.8 | 3.6 | 4.1 | 89.92±.4 | 86.37±1.6 | 9.28 ±1.5 | | GP T3 | 3.5 | 3.5 | 3.5 | 3.5 | 3.8 | 85.35±.4 | 96.23±.9 | -8.83±1.6 | | T5large | 2.8 | 3.8 | 3.8 | 3.5 | 4.2 | 85.88±.5 | 73.38±2.0 | 27.2 ±1.7 | is the highest correlated metric with the consolidation measure (τ = 0.38, p < 0.05). Overall, these automatic metrics can be used in tandem to provide certain feedback during model development cycles. ## 8.3 Error Analysis To shed light on the various errors made by the baseline models, we examined 20 erroneous examples identified in the human evaluation, with each example consisting of three predictions, one from each of the baseline systems. Our findings indicate that the most frequent causes of model errors are related to the complexity of informational relationships present in the source sentences, with uni-directional entailment being the most common. Moreover, the models seem to face difficulties in accurately combining related information, which often results in incorrect merging of information with the wrong entity or predicate. Further details on the analysis can be found in Appendix J. ## 9 Conclusions In this paper, we advocate for using the sentence union task as a testbed for multi-text consolidation. 
We release a realistic dataset, together with a set of analyses that show that the dataset is of high quality, and challenging for multi-document consolidation efforts. We evaluate the performance of state-of-the-art pretrained large language models on text consolidation, where our findings suggest key challenges for future research. Future research may expand upon our dataset to include consolidation beyond 2 input sentences, and may examine the use of explicit text consolidation structures for improving multi-text consolidation in large language models. ## Limitations We enumerate some limitations to our work. While we did create the largest union dataset to date, it is still of moderate size. As shown by our learning curves (App. G), the amount of training data we created seemed sufficient to saturate the learning of the models with which we experimented, but it might still be found insufficient for training other models. Our annotation protocol might have influenced the compression rates of the unions, as we instructed workers to annotate sentence unions by first choosing a base sentence and then highlighting the other sentence. Additionally, while the highlighting facilitates the annotation process, it cannot directly be used for analyses of the dataset since it is uni-directional. The dataset includes only input with exactly two sentences and it might be desirable for future works to also be able to train systems that take more than two sentences as input. Our dataset is also domain specific, in that all the sentences are taken from news sources. This might result in challenging cross-domain generalization. This dataset is limited to the English language. While the suggested annotation protocol seemingly fits other languages, the step in which words are highlighted might prove problematic for morphologically rich languages, in which a single word includes many pieces of information. A segmentation of the text before annotation might be required. ## Ethics Statement Crowdsourcing To crowdsource the dataset, we used the Amazon Mechanical Turk6(MTurk) platform. To participate in the first stage of recruitment, workers were required to possess the following MTurk qualifications: - NumberHITsApproved greater than 10000 - PercentAssignmentsApproved greater than 98% - WorkerLocale in US, CA, AU, GB, NZ 6https://worker.mturk.com/ Workers were paid $0.3 for each sentence union annotation assignment, as well as a $1.25 bonus for every 100 assignments, and $0.4 for each evaluation assignment, as well as a $1 bonus for every 50 assignments. Overall, by an average approximation of 1.8 minutes for the first assignment, and 2.4 minutes for the second assignment, their wage is expected to start from $10 per hour and increase as the workers are more familiar with the task and start receiving bonuses. Workers were informed that the ratings they will provide will be used to evaluate artificial intelligence models which were trained on the data they annotated. Dataset The texts that workers write that are included in our dataset are limited to the information expressed in the source sentences. The source sentences originate from the datasets mentioned in §4.1, which include only texts available in public news sources and were previously made available by Weiss et al. (2021). Our dataset does not contain information that would make it possible to reconstruct the original documents, or any human annotations, such as the summary or coreference resolution annotation, from the original datasets. 
## Acknowledgments The work described herein was supported in part by grants from One AI, the Israel Science Foundation 2827/21 and the Israel Ministry of Science and Technology. We would like to thank the workers who have annotated this dataset and we appreciate their dedication in ensuring a high level of quality. We express our gratitude to Dr. Kapil Thadani for assisting us in retrieving his data from an earlier research endeavor. ## References Raksha Agarwal and Niladri Chatterjee. 2022. Improvements in multi-document abstractive summarization using multi sentence compression with word graph and node alignment. *Expert Systems with Applications*, 190:116154. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In *Second Joint* Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32–43, Atlanta, Georgia, USA. Association for Computational Linguistics. Regina Barzilay, Kathleen R. McKeown, and Michael Elhadad. 1999. Information fusion in the context of multi-document summarization. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 550–557, College Park, Maryland, USA. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew Peters, Arie Cattan, and Ido Dagan. 2021. CDLM: Cross-document language modeling. In *Findings* of the Association for Computational Linguistics: EMNLP 2021, pages 2648–2662, Punta Cana, Dominican Republic. Association for Computational Linguistics. Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (meta-) evaluation of machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 136–158, Prague, Czech Republic. Association for Computational Linguistics. Thiago Castro Ferreira, Claire Gardent, Nikolai Ilinykh, Chris van der Lee, Simon Mille, Diego Moussallem, and Anastasia Shimorina. 2020. The 2020 bilingual, bi-directional WebNLG+ shared task: Overview and evaluation results (WebNLG+ 2020). In Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+), pages 55–76, Dublin, Ireland (Virtual). Association for Computational Linguistics. Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. 2020. Big self-supervised models are strong semi-supervised learners. Advances in neural information processing systems (NeurIPS). Agata Cybulska and Piek Vossen. 2014. Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution. In *Proceedings of the Ninth* International Conference on Language Resources and Evaluation (LREC'14), pages 4545–4552, Reykjavik, Iceland. European Language Resources Association (ELRA). Hal Daume III and Daniel Marcu. 2004. Generic sentence fusion is an ill-defined summarization task. In Text Summarization Branches Out, pages 96–103, Barcelona, Spain. Association for Computational Linguistics. Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´ Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. 
SummEval: Re-evaluating summarization evaluation. *Transactions of the Association for* Computational Linguistics, 9:391–409. Alexander R. Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R. Radev. 2019. Multi-news: a large-scale multi-document summarization dataset and abstractive hierarchical model. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558–3567, Florence, Italy. Association for Computational Linguistics. Tanvir Ahmed Fuad, Mir Tafseer Nayeem, Asif Mahmud, and Yllias Chali. 2019. Neural sentence fusion for diversity driven abstractive multi-document summarization. *Computer Speech & Language*, 58:216– 230. Mor Geva, Eric Malmi, Idan Szpektor, and Jonathan Berant. 2019. DiscoFuse: A large-scale dataset for discourse-based sentence fusion. In *Proceedings of* the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3443–3455, Minneapolis, Minnesota. Association for Computational Linguistics. John Giorgi, Luca Soldaini, Bo Wang, Gary Bader, Kyle Lo, Lucy Lu Wang, and Arman Cohan. 2022. Exploring the challenges of open domain multi-document summarization. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. Emiel Krahmer, Erwin Marsi, and Paul van Pelt. 2008. Query-based sentence fusion is better defined and leads to more preferred results than generic sentence fusion. pages 193–196. Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang, and Fei Liu. 2020. Understanding points of correspondence between sentences for abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 191–198, Online. Association for Computational Linguistics. Logan Lebanoff, Bingqing Wang, Zhe Feng, and Fei Liu. 2021. Modeling endorsement for multi-document abstractive summarization. In Proceedings of the Third Workshop on New Frontiers in Summarization, pages 119–130, Online and in Dominican Republic. Association for Computational Linguistics. Kathleen McKeown, Sara Rosenthal, Kapil Thadani, and Coleman Moore. 2010. Time-efficient creation of an accurate sentence fusion corpus. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 317–320, Los Angeles, California. Association for Computational Linguistics. A Ian McLeod. 2005. Kendall rank correlation and mann-kendall trend test. *R Package Kendall*, 602:1– 10. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2022. Webgpt: Browserassisted question-answering with human feedback. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics. 
Jekaterina Novikova, Ondˇr ej Dušek, and Verena Rieser. 2018. RankME: Reliable human ratings for natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). Association for Computational Linguistics. ## Openai. 2023. Gpt-4 Technical Report. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. Paul Roit, Ayal Klein, Daniela Stepanov, Jonathan Mamou, Julian Michael, Gabriel Stanovsky, Luke Zettlemoyer, and Ido Dagan. 2020. Controlled crowdsourcing for high-quality QA-SRL annotation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7008– 7013, Online. Association for Computational Linguistics. Aviv Slobodkin, Paul Roit, Eran Hirsch, Ori Ernst, and Ido Dagan. 2022. Controlled text reduction. Kapil Thadani and Kathleen McKeown. 2011. Towards strict sentence intersection: Decoding and evaluation strategies. In *Proceedings of the Workshop on Monolingual Text-To-Text Generation*, pages 43–53, Portland, Oregon. Association for Computational Linguistics. Kapil Thadani and Kathleen McKeown. 2013. Supervised sentence fusion with single-stage inference. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1410– 1418, Nagoya, Japan. Asian Federation of Natural Language Processing. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, ChungChing Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise AgueraArcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022. Lamda: Language models for dialog applications. Daniela Brook Weiss, Paul Roit, Ayal Klein, Ori Ernst, and Ido Dagan. 2021. Qa-align: Representing crosstext content overlap by aligning question-answer propositions. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. 2022. 
PRIMERA: Pyramid-based masked sentence pre-training for multi-document summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5245–5263, Dublin, Ireland. Association for Computational Linguistics. ## A Skip Guidelines In Section 4.2, it was noted that there are cases where generating a union from a pair of sentences | Category | Count | |------------------------------|---------| | No information consolidation | 19 | | Unnatural union | 7 | | Mistake | 3 | | Missing context | 1 | Table 5: An analysis of 30 cases that were skipped by workers during the annotation process. Among these, some were categorized as mistakes, meaning that they should not have been skipped. is not suitable, and workers were given the option to skip the annotation for such examples. This section outlines the specific scenarios in which workers were directed to skip examples. Eventually, our annotators skipped 458 sentence pairs from the original dataset that we used as input, as shown in Table 1. An analysis of a sample of 30 such cases is presented in Table 5, categorized based on the criteria below. In conclusion, we found that the dataset we used as the source of our sentence pair instances, which was originally developed by Weiss et al. (2021) for aligning predicate-argument structures (represented as question-answer pairs), includes a significant number of instances where information consolidation in the form of sentence union is mostly irrelevant. No information consolidation. One case in which workers were directed to skip examples during annotation is when there is no partially overlapping information to consolidate from two related sentences, hence their union would simply be a concatenation of the two. This case is referred to as "No information consolidation". An example of this scenario is when sentence 1 mentions that *"Acupuncture is the ancient Chinese medical therapy technique of inserting thin, sharpened* needles into specific nerve junction points of the body," and sentence 2 mentions a study that found "53.8 percent of the subjects who had needles inserted in four acupuncture "zones" in the ear five times a week tested free of cocaine at the end of the eight-week study period." In this case, there is no need to consolidate the information from the two sentences as they provide distinct pieces of information. Sentence 1 explains what is acupuncture while sentence 2 discusses a study about it. Unnatural union. An example of an "Unnatural union" scenario is when unifying two input sentences would form an awkward or unnatural sentence. For instance, if the first sentence is written in the past tense and the second one in the future tense, unifying them could lead to an unnatural sentence union. As an example, consider the following sentences: *"Fannie Mae's board met Sunday night* to discuss Raines' future" and "The directors of Fannie Mae, the big mortgage finance company, will meet Sunday to consider the fate of two senior executives who signed off on financial statements that violated accounting rules, people close to the company said Friday." Here, the first sentence uses the past tense while the second sentence uses the future tense. It would be more natural to use the past tense in the sentence union since the event occurred in the past. However, incorporating the information that someone said something on Friday before the event could result in an awkward sentence union. Missing context. 
This case happens when two sentences need to be interpreted in the broader text context, which is missing in our annotation scenario, for example when there is a dangling reference to an entity that is not specified in the given sentence. This is often not problematic, unless understanding the identity of the entity is necessary to create the union. For instance, one sentence quotes a person, while the other sentence does not mention the speaker. An example of this scenario is the following: *"Sadly, because Magic Leap seldom* hires and does not actively recruit female candidates, the company loses competitive advantage to products like Microsoft's Hololens." and *"When* Tannen Campbell was hired by Magic Leap in 2015, the Florida company had no women in leadership roles and its only idea to make its product femalefriendly was to release a pink version, according to Forbes." Merging these two sentences is not straightforward due to the lack of context. Disagreements. Sometimes, there are two statements that contradict or disagree with one another. For example, sentence 1 is *"Video of Brooklyn* Mother of 13 Zurana Horton shot and killed in a gang shooting was revealed Thursday ." and sentence 2 is "A shocking video released for the first time Thursday captures the moment a Brooklyn mother of 12 was killed in a gang shootout as she picked her daughter up from school .". Sentence 1 mentions that the child is 13 years old while sentence 2 mentions that the child is 12 years old. | Category | Count | |------------------------------------|---------| | Attribution | 12 | | Relative dates | 4 | | World knowledge | 2 | | Before and after an event | 0 | | No subtle case of above categories | 34 | ## B Subtle Annotation Cases In Section 4.2 we noted that certain special cases arose when generating a union from a pair of sentences, and were included in the instructions for annotators. This section outlines the specific instructions provided to workers, with an analysis of 50 cases (Table 6), categorized based on various criteria as described below. Attribution. One potential issue is when the source sentences make attributions to a specific source, such as a news agency. An example of this can be seen in sentence 1 *"Video of Brooklyn Mother Zurana Horton being shot and killed* was revealed Thursday, according to the N.Y. Daily News." and sentence 2 "A shocking video released for the first time Thursday captures the moment a Brooklyn mother was killed as she picked her daughter up from school.", where the new information in sentence 2 is attributed to the video content, rather than to the N.Y. Daily News. Another example is when a sentence contains quotes, as changing a quote to contain more information would create an unfaithful sentence union. In such cases, the workers were allowed, whenever it seemed reasonable, to attribute combined pieces of information originating from the two sentences to a reported source, even if only parts of the combined information were explicitly attributed to this source, in one of the sentences. Relative dates. Some sentences may mention a specific time relative to when the sentence was written, such as "yesterday" or "Monday", which implies that the sentence was written in the same week of the event. Workers were instructed to assume that the date of publication is known, so there is no difference between the mention of "yesterday" and "Monday", but, for example, that "yesterday" is more specific than "earlier this month". World knowledge. 
In some cases, sentences may mention the same piece of information in different levels of specificity, which requires world knowledge to identify. Workers were instructed to assume common world knowledge when creating the sentence union. An example is given for Paris, which is both a city in Texas and the capital of France. Before and after an event. For sentences referring to events, some may differ in their time of publication compared to the event itself. Workers were instructed to use the past tense, as the sentence union is written after the event. For example, sentence 1 mentions an event that has already happened "After leaving Alderson at 12:30 a.m. on March 3, 2005, Martha Steward declared the 5-month experience as "life altering and life affirming."", while sentence 2 was written before the event *"US lifestyle guru Martha Stewart is expected to leave jail on Friday after a five-month* sentence for a stock scandal that reinvigorated her career rather than dooming it.". In this case, the sentence union should be written in the past tense, as it refers to an event that has already occurred. ## C Annotation Process Screenshots of the entire annotation process are depicted in Figure 6. Guidelines for creating sentence unions7include writing one coherent sentence, ordering the information in a stand-alone manner (as if the sentence would have been written from scratch), meaning that the writing process should not be distracted by the original split and ordering of information in the two input sentences. To the extent possible, the sentence union should preserve the original wording of the information, but phrasing may be *minimally* adjusted to create a coherent sentence union. Each piece of information should appear only once in the sentence union. When there is a redundancy across the two sentences, the more specific phrasing should be chosen. The interface helps the workers to avoid making common mistakes. For example, in order to reduce redundancies of information in the union, if a highlighted word already exists in the base sentence, both word mentions will be marked to draw the worker's attention. Another example is warning the worker when the sentence union contains nonhighlighted words from the base sentence. Also, when integrating highlighted words into the sentence union, the worker will see yellow highlights turn into green highlights. If the worker tries to submit the annotation with yellow highlights, the system will raise an alert. To ensure the quality in annotators' judgements, our process follows the controlled crowdsourcing approach (Roit et al., 2020), which includes a recruitment phase, two training phases accompanied by extensive guidelines, and ongoing monitoring during the annotation of the production task. Workers were allowed to participate in primary tasks 7The complete guidelines file used for training will be published upon publication. only if they had completed the entire process. Only workers who performed well on the recruitment phase were accepted to the next training phases. The training phases were created manually, including subtle annotation cases. After each annotation, workers were shown gold target highlights and sentence unions8for comparison with their own output. ## D Cleaning Annotations Disjoint sentences Following the skip guidelines (see App. A), we automatically identified examples which their sentences are mutually exclusive and their sentence union is a concatenation of the source sentences. 
We find these instances by comparing content words only, since connecting the two sentences sometimes involves non-semantic lexical changes (e.g., adding a semicolon or a comma). Due to the fact that there is no consolidation of information in such examples, we see them unfit for a union, as mentioned in §4.1, and they were not included in the dataset. We leave the automatic categorization of sentences into whether or not they are suitable for sentence unions to future work. Quotes Following the attribution discussion in App. B, we manually reviewed examples where the union contained a quote that was not in any of the source sentences, as well as any example that had a sentence which used a first-person perspective (e.g., "I", "we", "mine", "ours", ...). ## E In-Context Learning For the in-context learning approach, we used a temperature value of 0.4 and the following prompt: In this task, you will be presented with two sentences that overlap in information, and you are tasked to merge the information of the two into a single unifying sentence without redundancies. Important: Do not omit information. Important: Do not repeat information. Here is an example of a correct union and a wrong union: Sentence 1: The February assassination of former Lebanon Prime Minister Hariri put Syria under renewed pressure from the international community to abide by U.N. Security Council Resolution 1559 and withdraw its troops from Lebanon. Sentence 2: Foreign ministers from all 8Some of the authors of the paper annotated a small set of reference gold target highlights and sentence unions. ![14_image_1.png](14_image_1.png) ![14_image_0.png](14_image_0.png) ![14_image_3.png](14_image_3.png) ![14_image_2.png](14_image_2.png) European Union (EU) member states, who gathered here for a meeting, on Wednesday urged Syria to withdraw its troops completely from Lebanon. Correct union: The February assassination of former Lebanon Prime Minister Hariri put Syria under renewed pressure from foreign ministers from all European Union (EU) member states gathered for a meeting, on Wednesday to abide by U.N. Security Council Resolution 1559 and withdraw its troops from Lebanon. Wrong union: The international community, including the European Union (EU), has put renewed pressure on Syria to abide by U.N. Security Council Resolution 1559 and withdraw its troops from Lebanon following the February assassination of former Lebanon Prime Minister Hariri. The union is wrong, because it does not mention that foreign ministers gathered for a meeting on Wednesday. Please generate a correct union to the following sentences : Sentence 1: <sentence 1 goes here> Sentence 2: <sentence 2 goes here> Correct union: ## F Training Details We fine-tuned T5targe and Primera models for 20 epochs on a Tesla V100-SXM2-32GB GPU. We used a hyperparameter random search strategy. The learning rate was tuned within the range [1e - 8, 5e - 5], while the batch size varied between [8, 16, 32]. We also explored the weight decay range of [0,0.5] and warump step range of [0, 300]. The best model was selected based on ![15_image_0.png](15_image_0.png) the *ROUGE*1 metric.9 The best T5 model was obtained with a learning rate of 4.3e−6, no weight decay, no warmup steps, batch size of 32, after 18 epochs. For the best-performing PRIMERA model, we used a learning rate of 3.5e − 6, weight decay of 0.5, warmup steps of 80, batch size of 16 and selected the best checkpoint after 9 epochs. 
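As a concrete illustration of how these best-found hyperparameters map onto the HuggingFace stack mentioned in footnote 9, the sketch below configures the T5-large run; the exact script layout is our assumption rather than the authors' released code, and dataset preparation is omitted.

```python
# Sketch of the best T5-large configuration reported in Appendix F, expressed with the
# HuggingFace Trainer API; argument layout is an assumption, not the released script.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, Seq2SeqTrainingArguments

tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")

args = Seq2SeqTrainingArguments(
    output_dir="union-t5-large",
    learning_rate=4.3e-6,            # best value found by the random search
    per_device_train_batch_size=32,
    weight_decay=0.0,                # no weight decay for the best T5 run
    warmup_steps=0,
    num_train_epochs=20,             # best checkpoint was reached at epoch 18
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="rouge1",  # model selection criterion used in the paper
    predict_with_generate=True,
)

# These arguments, a tokenized (sentence pair -> union) dataset, and a ROUGE-based
# compute_metrics function would then be handed to transformers.Seq2SeqTrainer.
```

The PRIMERA run would differ only in the tuned values reported above (learning rate 3.5e-6, weight decay 0.5, 80 warmup steps, batch size 16, best checkpoint at epoch 9).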
The training time for T5*large* and PRIMERA models were approximately 1 hour and 10 minutes each. Input structure When concatenating the two source sentences to insert as input for the model, we add special separator tokens to make the model aware of the sentence boundaries. For T5*large*, we separated between the source sentences in the input using a newly created special token, while for PRIMERA, we used the *<doc-sep>* token, which was used in the pre-training phase to separate between input source documents. ## G Learning Curves To assess the adequacy of our dataset size, we evaluated the baseline models on different subsets of our training data ([25%, 50%, 75%, 100%]) and various model sizes (T5*base* and T5*large*). Based on our findings (Figure 7), it appears that enhancing the model size from T5base to T5*large* results in performance improvement. However, the marginal benefit of increasing training dataset size may be limited, and further gains may not be significant. ## H Evaluation Process As explained in Section 7, the evaluation process involves a comparative approach, whereby all the unions of system-generated sentences are evaluated simultaneously, as shown in Figure 8. The evaluation is conducted separately for four criteria. To assess the content differences between the reference union and the system union, including coverage and faithfulness, a single sentence is designated as the base sentence, and the worker is asked to evaluate the other sentence based on the amount of missing content. The reference union serves as the base sentence for evaluating coverage, while the system union is used as the base sentence for evaluating faithfulness since any information present in the system union but absent in the reference union is deemed unfaithful. In evaluating redundancy and fluency, the evaluator is only presented with the system union without the reference union. To assess the coverage and faithfulness criteria, the workers are required to compare the generated union with the reference union, aided by red strikethroughs on words that are not included in the generated union and green highlights on words that are not included in the reference union, as illustrated in Figures 8a and 8b. For redundancy and fluency criteria, the reference union is not needed, as demonstrated in Figures 8c and 8d. ## I Example Sentence Unions See Table 7 for examples of sentence unions, including the sentence unions from each predicted system. ## J Error Analysis In order to perform an error analysis, we analyzed 20 examples that were rated less than perfect for all metrics based on the human evaluation (see §8.1). The findings are presented in Table 8, with one representative example from each subcategory included in Table 9. Our key observation is that models make various coverage errors as they fail to identify the uni-directional entailment correctly in the dataset. Furthermore, models make multiple coverage and faithfulness errors by incorrectly combining information and attaching it to the wrong entity or predicate. 9We used the HuggingFace package (Wolf et al., 2020) for both fine-tuning the models and automatically evaluating them. 
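Relating back to the input structure described in Appendix F, the short sketch below illustrates one way the two source sentences could be joined with a model-specific separator. The T5 token string "<sent-sep>" is a hypothetical placeholder name for the newly created special token, whereas "<doc-sep>" is the separator PRIMERA already uses in pre-training.

```python
# Illustrative only: joining the two source sentences with a separator token, as in
# Appendix F. "<sent-sep>" is a hypothetical name for the newly added T5 special token.
def build_input(sent1: str, sent2: str, model_type: str) -> str:
    sep = "<doc-sep>" if model_type == "primera" else "<sent-sep>"
    return f"{sent1} {sep} {sent2}"

# For T5, the new token would also have to be registered and the embeddings resized:
#   tokenizer.add_special_tokens({"additional_special_tokens": ["<sent-sep>"]})
#   model.resize_token_embeddings(len(tokenizer))
```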
![16_image_1.png](16_image_1.png) ![16_image_0.png](16_image_0.png) ![16_image_2.png](16_image_2.png) e | Sentence 1 | French museum officials traveled to New York last month and confirmed the find is indeed the missing Picasso work, which the Centre Georges Pompidou realized was missing from its storerooms in 2001 following a loan request; it was then valued at more than $2.5 million. | | |--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------| | Sentence 2 | The canvas had been smuggled out of a storeroom of the Centre Georges Pompidou, the Paris museum and arts center, and its whereabouts had not been known. | | | Gold union | French museum officials traveled to New York last month and confirmed the find is indeed the missing Picasso canvas smuggled out of a storeroom of the Centre Georges Pompidou, the Paris museum and arts center, which realized it was missing in 2001 following a loan request; it was then valued at more than $2.5 million. | | | T5large | French museum officials traveled to New York last month and confirmed the find is indeed the missing Picasso work, which the Centre Georges Pompidou realized was missing from its storerooms in 2001 following a loan request; it was then valued at more than $2.5 million, and its whereabouts had not been known. | Coverage: 2.0 Faithfulness: 4.0 Repetition: 3.0 Fluency: 3.0 Consolidation: 3.0 | | PRIMERA | French museum officials traveled to New York last month and confirmed the find is indeed the missing Picasso work, which the Centre Georges Pompidou realized was missing from its storerooms in 2001 following a loan request; it was then valued at more than $2.5 million. | Coverage: 2.0 Faithfulness: 4.0 Repetition: 4.0 Fluency: 5.0 Consolidation: 3.3 | | GPT3 | French museum officials traveled to New York last month and confirmed that the canvas, which had been smuggled out of a storeroom of the Centre Georges Pompidou in Paris and its whereabouts had not been known since 2001 following a loan request, is indeed the missing Picasso work, valued at more than $2.5 million. | Coverage: 3.0 Faithfulness: 4.0 Repetition: 4.0 Fluency: 2.0 Consolidation: 3.7 | | Coverage | Faithfulness | Repetition | Subcategory Explanation | | |-----------------------------------------------------------------------------------------------------------------------------------|----------------|--------------|---------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------| | Subcategorization Uni-directional entailment | 17 | 2 | 5 | This includes cases where either the entailing part | | is missing and the entailed part is present in the sentence or both the entailing and entailed parts are present in the sentence. | | | | | | Wrong attachment | 13 | 13 | 1 | This includes cases where an argument is attributed to the wrong predicate or entity. | | Lexical similar but different information | 8 | 0 | 0 | This includes cases where information is omitted, and the omitted information had a phrase that was lexically similar to a phrase in the other sentence. 
| | Ignores prefix | 4 | 0 | 0 | This includes cases where the prefix to the sentence in the source is omitted from the union. | | Related new information | 2 | 0 | 0 | This includes cases where the source sentences contain related | | but different information, and one of them is not included in the union. | | | | | | Paraphrase | 1 | 1 | 5 | This includes cases where paraphrased information from the source is repeated in the union. | | External hallucination | 0 | 3 | 0 | This includes cases where there is information | | in the union that does not originate from the source sentences. | | | | | Table 8: Error analysis based on a sample of 20 erroneous examples, each example analyzed for the 3 system outputs. For each metric, we report the frequency of a subcategory that we suspect is the cause for the error. One representative example from each subcategory is included in Table 9. | Prediction | Explanation | | |------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Subcategorization External hallucination | Peter Capaldi was revealed as the 12th Doctor of the Doctor Who series during a special live broadcast, with the announcement being made that he had been cast as the 12th Time Lord. | The mention of a live broadcast is not part of the source sentences. Interestingly, this is true, which indicates that the model knows this story. | | Lexical similar but | Sgt. Tim Shields and Attorney-General Wally Oppal announced Wednesday | | | different | informa | | | tion | that the RCMP arrested two Bountiful residents, Winston K. Blackmore, 52, and James Oler, 44, on charges of polygamy. | Source sentence mentioned "and leaders of a polygamist group". This was possibly skipped due to the model incorrectly recognizing "polygamy" later as a paraphrase. | | Uni-directional entailment | A strong 6.1-magnitude earthquake which hit the Indonesian northwestern province of Aceh on Tuesday killed a child, injured dozens and destroyed buildings, sparking panic in a region devastated by the quake-triggered tsunami of 2004. | Sentence 2 mentions "injuring at least 50 people" which entails "dozens injured" in sentence 1, but it is not mentioned in the union. | | Ignores prefix | The 55-year-old Scottish actor Peter Capaldi is officially set to replace exiting star Matt Smith, who announced in June that he was leaving the sci-fi show later this year, as the TARDIS leader, as producer Steven Moffat announced on the live BBC special Doctor Who Live: The Next Doctor Sunday. | Ignores the information about it being the 12th doctor, which was mentioned in a sentence prefix: "Doctor Who has finally selected its 12th doctor: Peter Capaldi is officially set to ...". 
| | Related new information | Industry analysts contacted by eWEEK generally say they believe that HewlettPackard's $13.9 billion acquisition of Electronic Data Systems, which was officially announced on May 13 and is currently being negotiated, is a good move for both companies, although there will be the usual integration snafus such as vendor neutrality issues, business lines, culture shock and layoffs. | "good move for both companies" and "a deal that could help the world's largest personal computer maker snap up more data management and consulting contracts" are different, and both should be mentioned in the union. | | Paraphrase | In France, art dealers are obliged by law to register all purchased art, except | Sentence 1 mentions "art dealers ... purchases", and sentence 2 mentions "dealers ... purchased art". Since these are | | those bought at public auction. | paraphrases, the union which repeates both "art dealers" and "purchased art" is repetitive. | | | Wrong attachment | The flight recorder was recovered on November 9 and revealed that the autopilot was disconnected, the descent appeared "controlled," the cockpit turned off both engines, and the elevators were out of unison, something experienced pilots would not do. | "something experienced pilots would not do" refers to turning out both engines, not elevators out of unison. This is usually caused by an incorrect merge of the sentences. | | Table 9: Examples for the subcategories we devised during the model error analysis, which we suspect are are the | | | Table 9: Examples for the subcategories we devised during the model error analysis, which we suspect are are the cause for the error. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? "Limitations" section ✗ A2. Did you discuss any potential risks of your work? Our work aims to improve discourse understanding for multi-text generation tasks, thus we don't see any potential risks in our work ✓ A3. Do the abstract and introduction summarize the paper's main claims? "Abstract" and "1 Introduction" sections ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 4 Dataset ✓ B1. Did you cite the creators of artifacts you used? 4.1 Dataset sources ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? "Ethics statement" section and github repo license ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? "Ethics statement" section ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? "Ethics statement" section ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? "Ethics statement" section ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? 
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 5. Dataset Quality Evaluation ## C ✓ **Did You Run Computational Experiments?** 6. Baseline Models ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix F - Training Details and Appendix G - Learning curves The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix F - Training Details ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 8. Results ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5 Dataset Analysis and Assessment, F Training Details ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 4.2 Annotating Unions , 8.1 Human Evaluation Of The Models ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix C - Annotation Process, Appendix G - Evaluation Process ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? "Ethics statement" section ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? "Ethics statement" section D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? "Ethics statement" section
shridhar-etal-2023-distilling
Distilling Reasoning Capabilities into Smaller Language Models
https://aclanthology.org/2023.findings-acl.441
Step-by-step reasoning approaches like chain of thought (CoT) have proved to be very effective in inducing reasoning capabilities in large language models. However, the success of the CoT approach is fundamentally tied to model size, and billion-parameter-scale models are often needed to get CoT to work. In this paper, we propose a knowledge distillation approach that leverages the step-by-step CoT reasoning capabilities of larger models and distills these abilities into smaller models. Specifically, we propose an alternative reasoning scheme, Socratic CoT, which learns a decomposition of the original problem into a sequence of subproblems and uses it to guide the intermediate reasoning steps. We use Socratic CoT to train a combination of two small distilled models: a problem decomposer and a subproblem solver. In practice, given a new problem, the two distilled models work in sync to decompose and solve complex problems. On multiple reasoning datasets (GSM8K, StrategyQA, and SVAMP), our proposed distillation strategies boost the performance of smaller models by over 70% compared to the baselines. Finally, we investigate when Socratic CoT is an effective alternative to CoT, demonstrating cases where a much smaller model (GPT-2 large) can outperform a 10X larger model (GPT-3 6B). Our code is available: https://github.com/kumar-shridhar/Distiiling-LM.
# Distilling Reasoning Capabilities Into Smaller Language Models Kumar Shridhar∗ Alessandro Stolfo∗ **Mrinmaya Sachan** Department of Computer Science, ETH Zurich ¨ {shkumar, stolfoa}@ethz.ch ## Abstract Step-by-step reasoning approaches like chain of thought (CoT) have proved to be very effective in inducing reasoning capabilities in large language models. However, the success of the CoT approach is fundamentally tied to the model size, and billion parameter-scale models are often needed to get CoT to work. In this paper, we propose a knowledge distillation approach that leverages the step-by-step CoT reasoning capabilities of larger models and distills these abilities into smaller models. In this work, we propose an alternative reasoning scheme, SOCRATIC COT that learns a decomposition of the original problem into a sequence of subproblems and uses it to guide the intermediate reasoning steps. We use SO-CRATIC COT to train a combination of two small distilled models: a *problem decomposer* and a *subproblem solver*. In practice, given a new problem, the two distilled models work in sync to decompose and solve complex problems. On multiple reasoning datasets (GSM8K, StrategyQA, and SVAMP), our proposed distillation strategies boost the performance of smaller models over 70% compared to the baselines. Finally, we investigate when SOCRATIC COT is an effective alternative to CoT, demonstrating cases where a much smaller model (GPT-2 large) can outperform a 10X larger model (GPT-3 6B). Our code is available here. ## 1 Introduction Large language models (LLMs) have demonstrated strong performance on a variety of reasoning tasks (Brown et al., 2020; Hoffmann et al., 2022; Chowdhery et al., 2022, *inter alia*). One particularly interesting strategy for prompting these models is chainof-thought (CoT), which has been shown to elicit reasoning abilities in LLMs by asking the model to incorporate intermediate reasoning steps while solving a problem (Nye et al., 2021; Wei et al., ∗ Equal contribution. ![0_image_0.png](0_image_0.png) Annotation **CoT:** How many bolts of white fiber does it take? It takes 2/2=<<2/2=1>>1 bolt of white fiber. How many bolts in total does it take? So the total amount of fabric is 2+1=<<2+1=3>>3 bolts of fabric. **Socratic CoT:** 2022b; Wang et al., 2022). However, CoT has been shown to work primarily on models with hundreds of billions of parameters (Wei et al., 2022b,a) or those tuned on a wide range of tasks (Chung et al., 2022; Iyer et al., 2022). Due to the significant computational resources or expensive API calls required to access CoT-capable LLMs, we ask whether it is possible to elicit such reasoning capabilities in smaller models.1 1Following Li et al. (2022), we argue that *small* and *large* models are relative terms and context-dependent. We consider models with billions of parameters to be large, and models with millions of parameters to be small. Small-sized, non-fine-tuned language models are known to be poor reasoners (Stolfo et al., 2023). Therefore, a possible approach to induce CoT-like reasoning abilities in smaller models would be finetuning them on step-by-step examples. In our work, we propose a framework for leveraging the reasoning capabilities of LLMs to supervise the training of smaller models. This approach can be thought of as a form of *knowledge distillation* (Hinton et al., 2015), where a larger teacher model transfers knowledge to a smaller student model. 
However, unlike standard knowledge distillation, our method transfers the reasoning abilities of the teacher model only using its generated solutions as a proxy, i.e., we do not assume access to the teacher model parameters. Our approach consists of prompting an LLM to produce step-by-step annotations leading to the answer for a set of problems. This annotation is then used as supervision to finetune the student model. A high-level illustration of the process is provided in Figure 1. Within this framework, we study three different types of *annotation structure* for supervising our distillation approach: (i) We consider fine-tuning on the *gold* step-by-step solution procedure for datasets where the step-by-step solutions are available. (ii) We study whether procedural supervision, coming from the chain of thought (CoT) of the teacher model can improve upon the baseline. (iii) We propose a third type of supervision structure, which we call SOCRATIC COT. This approach relies on learning a semantic decomposition of the original problem into a sequence of subproblemsolution pairs using two models - a) a question generator that learns to decompose the problem into a sequence of subproblems, and b) a questionanswering model that solves the various generated subproblems (more details are in section 3.2). This approach can be thought of as an extension of the typical chain of thought reasoning where, unlike CoT, the intermediate steps are now decomposed into subquestion-solution pairs; the subquestions guide the generation of intermediate steps that lead to the final answer to the problem. We train distilled student models with the various annotation structures mentioned above. Depending on the annotation available for the given data, we use the teacher model to generate either a CoT-like solution to a problem or, if the step-bystep annotation is available, a set of subquestions leading to the solution of the problem, or both (examples of different annotations are shown in Figure 2). We perform our analyses on three multi-step reasoning datasets: GSM8K (Cobbe et al., 2021), StrategyQA (Geva et al., 2021), and SVAMP (Patel et al., 2021). We consider data with various types of annotation to cover a range of realistic data scenarios. Our results show that supervision by CoT-decomposed examples helps smaller models perform better, and subquestioning introduced by SOCRATIC COT can provide further improvement. We observe performance gains of up to 40% with LLM-generated step-by-step annotations - this validates the effectiveness of our distillation framework (detailed analysis in Section 5). ## 2 Related Work Decomposing Multi-Step Reasoning Tasks Solving multi-step reasoning tasks like MWPs has been a popular area of research for the last couple of years (Kushman et al., 2014; Hosseini et al., 2014; Roy et al., 2015; Amini et al., 2019; Zhang et al., 2020; Shridhar et al., 2022; Opedal et al., 2023). However, the majority of the modern approaches for these problems are shifting towards using large language models, often relying on approaches involving prompting or in-context learning (Cobbe et al., 2021; Kojima et al., 2022; Wei et al., 2022b; Chowdhery et al., 2022; Lewkowycz et al., 2022; Srivastava et al., 2022). One such prompting approach is the chain of thought prompting (Wei et al., 2022b), which prompts the language model to generate a series of intermediate steps that improve the reasoning capabilities in LLMs. Wang et al. 
(2022) took another step forward and sampled multiple reasoning paths and selected the most relevant output using majority voting. Huang et al. (2022) used the most voted outputs to further finetune the model for better performance. Kojima et al. (2022) further improved the reasoning of LLM in a zero-shot manner by appending "Let's think step by step" to the prompt. In contrast, our work does not propose prompting solutions; instead, we explicitly guide the student model reasoning using sub-questions at each step. Most similar to our work is the work by Zhou et al. (2022) which decomposes questions into sub-questions and asks the language model to solve each sub-question sequentially. However, this work is also restricted to prompting and only works with LLMs with billions of parameters. Knowledge Distillation Our approach is reminiscent of knowledge distillation (Ba and Caruana, 2014; Hinton et al., 2015) in that we use a student network to mimic the large teacher language model. Snell et al. (2022) demonstrated the usefulness of providing instruction that can help models achieve better reasoning skills. Similar to our hypothesis, Eisenstein et al. (2022) argued that question-answering systems should focus not only on the final answer, but also on the rationale that justifies their reasoning, to help them reason better. We go beyond this; in our work, in addition to the question-answering system, we also focus on what questions need to be asked at each step that can help to learn that reasoning step better. Finally, similar to our hypothesis of injecting reasoning capabilities into smaller models, Li et al. (2022) used CoT-like reasoning from LLMs to train smaller models on a joint task of generating the solution and explaining the generated solution. We, on the other hand, use the LLM to generate subquestions and solution pairs and use them together to inject reasoning capabilities into smaller models. Subquestioning as supervision The idea of inquiring or asking information-seeking questions for discovery learning has been studied well in the past (Bruner, 1961). Rao and Daume III ´ generated clarification questions based on Stack Exchange questions as supervision, Klein and Nabi (2019) used a joint question answering model to ask questions from a given span of text and later answer them, and (Rajani et al., 2019; Shwartz et al., 2020) asked questions to improve common sense QA models. In contrast, our work focuses on multistep reasoning tasks where intermediate clarifying questions and reasoning steps may not always be available and may need to be extracted from a teacher model. ## 3 Methodology The setting we consider consists of a data set D, where each problem Piis accompanied by a final answer aithat can be reached by several steps of reasoning. The task of solving the problem using a model ψ is to predict an answer aˆ = ψ(P) such that aˆ = a. We consider different data scenarios where intermediate annotations of the solution may be available in different forms (e.g., step-by-step, as a semantic decomposition by subquestions) or may not be present. Depending on the availability of annotations, we propose different approaches to augment the training of a small model on D by Reasoning Problem ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png) ## Using Llms. 3.1 Distilling Step-By-Step Reasoning Via Cot A data set may present an annotation that contains intermediate reasoning steps that lead to the answer ai (i.e., a chain-of-thought annotation). 
This intermediate annotation can be used directly to fine-tune a small model. However, in cases where such stepby-step information is not available, we use a LLM to generate the reasoning steps that might improve the performance of the small model. To achieve this, we consider a small subset of the dataset D and decompose each problem Piinto ni intermediate reasoning steps. We construct these intermediate reasoning steps manually, since we only need a few examples as prompts (examples are provided in Appendix Table 6). For each remaining problem P ∈ D, we then prompt a large language model M to generate the intermediate reasoning steps. We make sure that the chain of reasoning steps is meaningful by checking whether the last solution matches the ground truth answer, i.e. whether a (ni) i = ai, where a (ni) i represents the answer corresponding to the last reasoning step. If this is not the case, we discard the problem and sample a new chain by prompting the model again (for a maximum of 3 times). In this way, we obtain an augmented dataset D∗in which a subset of problems is paired with a sequence of reasoning steps leading to the correct result. Finally, we can distill the reasoning capabilities into smaller models by fine-tuning them with the generated intermediate steps. ## 3.2 Distilling Step-By-Step Reasoning Through S**Ocratic** Cot In this section, we describe how CoT can be enhanced through subquestioning. An illustration of our approach is shown in Figure 3. ## 3.2.1 Extracting The Reasoning Capability From The Teacher In Section 3.1, we detailed how an LLM can be used to generate the intermediate annotation of a problem Pi as a chain of steps leading to the answer ai. We now extend this procedure to include a subquestion at each step of the solution. Following a similar procedure as described in Section 3.1, we prompt the LLM with few exemplars of problems decomposed as a set of intermediate subquestionsolution pairs (the prompts are reported in Appendix Table 6). This way, we obtain an intermediate annotation that includes subquestioning. In particular, each of the ni steps constituting the overall solution is a subquestion-solution pair, denoted q (j) i, s (j) i, j ∈ {1*, . . . , n*i} (an example is shown in Figure 2). We refer to the ordered list of subquestion-solution pairs for problem Pi as (q (1) i, s (1) i)*, . . . ,*(q (ni) i, s (ni) i). ## 3.2.2 Transferring The Reasoning Capability Into The Student We present two strategies to distill the reasoning annotation provided by the LLM into smaller models. In the first strategy, a single *unified* student is trained to generate the subquestion-solution pairs simultaneously, while in the second strategy, the question generation and question-answering tasks are assigned to two separate models. We call this second strategy *iterative* because the questionanswering model is trained to solve each subquestion iteratively. Unified. Using the problems in D that contain the chain of intermediate questions and solutions, we train a *unified* student model Muni that learns to generate the sequence of subquestion-solution pairs {(q (1), s(1)),(q (2), s(2))*, . . .* } that lead to the solution of a given problem. We use a pre-trained transformer-based model (Vaswani et al., 2017) and train it on the chain of subquestion-solution pairs for each problem P. Given a step j of problem P (i.e., the concatenation of q (j)and s (j)) consisting of a sequence of mj tokens {x (1) j*, . . . 
, x* (mj ) j}, we use a typical auto-regressive language modeling loss, L: Lj (P) = − Xmj k=1 log Puni (x (k) j|x :(k−1) j, P) (1) where Puni(x|c) is the probability assigned by Muni to token x given context c, and x :(y)indicates the sequence {x (1)*, . . . , x*(y)}. The loss Lj is computed for each problem Pi and for each pair (q (j), s(j)) leading to the final answer ai. Iterative. The *iterative* version of the student separates the tasks of generating the subquestions and providing an intermediate answer to each subquestion into two distinct models: a question generation (QG) model and a question answering (QA) model. Both the QG and QA models are implemented using a Transformer-based language model (Vaswani et al., 2017). In particular, the QA model Mqa is iteratively trained to answer the teacher-generated sub-questions. The learning objective is computed at the token level for each intermediate solution: $${\mathcal{L}}(P,s^{(j)})=-\sum_{k=1}^{l_{j}}\log\mathbb{P}_{\mathcal{Q},4}\left(y_{j}^{(k)}|y_{j}^{:(k-1)},q^{:(j)},s^{:(j-1)},P\right)$$ where lj and the yj 's represent, respectively, the length and the tokens of the intermediate solution s (j). s :(j−1) consists of the previous solution generated by the QA model iteratively in the past iterations. Similarly, the QG model is trained to acquire the ability of the teacher model to decompose the problem's main question into a series of sub-steps, each of which corresponds to a subquestion. The loss for this model is analogous to Equation 1, with the only difference being that the intermediate solutions are not considered for the QG model. During training, the previous intermediate solutions generated by the QA model are replaced with the teacher-generated solutions using teacher forcing (Cho et al., 2014). However, the intermediate solutions generated by the model are used at inference time. ## 3.3 Inference-Time Predictions Given an unseen problem P, the unified student model can directly predict a solution as a sequence ![4_image_0.png](4_image_0.png) of subquestions and answers. In the iterative approach, we first generate the subquestions conditioning the generation of the QG model on P. After these questions are generated, they are provided to the QA model one by one, decoding the intermediate solution sˆ (j)at step j token by token according to the model's probability distribution over its vocabulary: $$\mathbb{P}_{\mathcal{Q}\mathcal{A}}\;(y_{j}^{(k)}|y_{j}^{:(k-1)},{\hat{q}}^{:(j)},{\hat{s}}^{:(j-1)},P),\quad\quad(2)$$ where y (k) jis the k-th token being decoded in greedy fashion. After the last solution sˆ (n) has been generated, the numerical prediction aˆ (n)is parsed from the text using simple heuristics. ## 4 Empirical Analysis 4.1 Datasets We study how smaller models can learn to reason better on three multi-step reasoning datasets: GSM8K (Cobbe et al., 2021), StrategyQA (Geva et al., 2021), and SVAMP (Patel et al., 2021). GSM8K consists of 8.5K grade school math word problems, each requiring 2 to 8 steps of reasoning to solve. The solutions primarily involve a sequence of elementary calculations using basic arithmetic operations (+, −, ×, ÷). The dataset is divided into 7.5K training problems and 1K test problems. To evaluate the model on SVAMP, we train the model on 761 multi-step math word problems taken from the ASDiv (Miao et al., 2020) training set and evaluate it on 237 multi-step SVAMP problems. 
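Evaluation on these datasets ultimately checks only the final answer produced by the student, which in the iterative setup comes out of the inference loop of Section 3.3. A minimal sketch of that loop is given below; `qg_generate` and `qa_generate` are placeholders for the fine-tuned QG and QA students, and the answer-parsing regex is an assumption standing in for the simple heuristics mentioned above.

```python
import re
from typing import Callable, List

def iterative_inference(problem: str,
                        qg_generate: Callable[[str], List[str]],
                        qa_generate: Callable[[str], str]) -> str:
    """Sketch of iterative inference (Section 3.3): the QG model proposes
    sub-questions for the problem, then the QA model answers them one at a
    time, conditioning on the problem, the questions, and its own previous
    answers."""
    subquestions = qg_generate(problem)            # q^(1), ..., q^(n)
    context = problem
    last_solution = ""
    for q in subquestions:
        context = context + " " + q               # append q^(j)
        last_solution = qa_generate(context)       # greedily decode s^(j)
        context = context + " " + last_solution    # condition later steps on s^(:j)
    # Parse the numeric prediction a^(n) from the last solution
    # (this regex is an illustrative stand-in for the paper's heuristics).
    numbers = re.findall(r"-?\d+\.?\d*", last_solution)
    return numbers[-1] if numbers else ""
```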
For StrategyQA, the test set with facts is not available, so we split the data into 80% training, 10% as validation data, and the last 10% as test data. We do not shuffle the data to maintain reproducibility. ## 4.2 Experimental Setup We use three kinds of annotation, corresponding to the three datasets that we consider. Step-by-step solution. The GSM8K dataset falls into this category and includes a Socratic version where intermediate subquestion-solution pairs are provided for each MWP. While the intermediate step-by-step solutions were manually annotated, the authors report that the subquestions were generated by prompting GPT-3. We reproduced a subset of these subquestions using a GPT-3 model with prompts, and we observed a high similarity between the questions provided and the ones gen- | Unified | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------| | Input: | Output: | | A robe takes 2 bolts of blue fiber and half that much white | How many bolts of white fiber does it take? It takes 2/2 | | fiber. How many bolts in total does it take? | = <<2/2=1>> 1 bolt of white fiber. How many bolts in total does it take? So the total amount of fabric is 2+1 = <<2+1=3>> 3 bolts of fabric. The answer is 3. | | Iterative Iteration 1 | | | Input: | Output: | | A robe takes 2 bolts of blue fiber and half that much white | QG: How many bolts of white fiber does it take? | | fiber. How many bolts in total does it take? | QA: It takes 2/2 = <<2/2=1>> 1 bolt of white fiber. | | Iteration 2 | | | Input: | Output: | | A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take? How many bolts of white fiber does it take? It takes 2/2 = <<2/2=1>> 1 bolt of white fiber. | QG: How many bolts in total does it take? QA: So the total amount of fabric is 2+1 = <<2+1=3>> 3 bolts of fabric. The answer is 3. | Table 1: Example demonstraing the input-output format for unified vs iterative setup. QG represents the question generation model and QA is the question answerer model. Note that the QA model uses the QG output to answer it as shown in Figure 3. erated by us (BERT F1 score of 95%). For SO-CRATIC COT, we thus use the subquestioning annotation already provided. Supporting facts. We study the StrategyQA dataset, which falls in this category. Strategy QA consists of a factual question with binary True/False as the final answer. Additional supporting facts and decomposed questions are provided. However, the set of facts and the decomposed questions provided with a given question are not always aligned (i.e., a fact is not necessarily the answer to one subquestion). Therefore, having a setup similar to the one for GSM8K is not possible. We thus consider two versions of the data. One in which the supporting facts are used as CoT and the corresponding questions are generated by prompting a GPT-3 model, and a second in which we take the provided questions and generate the facts (this time aligned with the questions) using GPT-3. Final answers only. AsDiv/SVAMP falls in this category and for training, we use GPT-3 to generate both intermediate subquestions and solutions. Intermediate solutions are used as CoT and the generated subquestion-solution pairs for SOCRATIC COT. 
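The LLM-generated annotations described above can be filtered with the answer-consistency check of Section 3.1, which keeps a sampled chain only if its last step yields the ground-truth answer and otherwise resamples up to three times. A minimal sketch follows, with `prompt_llm` and `extract_answer` as placeholders for the GPT-3 call and the answer-parsing heuristic.

```python
from typing import Callable, List, Optional, Tuple

def generate_filtered_chain(problem: str,
                            gold_answer: str,
                            prompt_llm: Callable[[str], List[Tuple[str, str]]],
                            extract_answer: Callable[[str], str],
                            max_tries: int = 3) -> Optional[List[Tuple[str, str]]]:
    """Sample (sub-question, solution) chains from the teacher LLM and keep
    the first one whose final answer matches the ground truth; discard the
    problem if no chain passes within `max_tries` attempts."""
    for _ in range(max_tries):
        chain = prompt_llm(problem)          # [(q^(1), s^(1)), ..., (q^(n), s^(n))]
        if not chain:
            continue
        last_solution = chain[-1][1]
        if extract_answer(last_solution) == gold_answer:   # a_i^(n_i) == a_i
            return chain
    return None                               # problem is discarded
```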
## 4.3 Implementation Details We use GPT-2 variants (Radford et al., 2019) as student models. GPT-3 175B (Brown et al., 2020) served as the teacher model for decomposing complex problems into a series of simpler substeps (we report the prompts used in Appendix Table 6). All models were trained using the Huggingface library (Wolf et al., 2020) on an NVIDIA Tesla A100 GPU with 40 GB of memory. Each experiment was run for the same number of iterations to ensure fairness with periodic evaluation over the validation set. Teacher forcing was used during training to replace the generated responses with ground truth answers from the training dataset. Evaluation Metric. To evaluate the questionanswering performance on the GSM8K, SVAMP, and StrategyQA datasets, we compute the accuracy based on the final answer provided by the student model. ## 5 Results And Discussion Can our framework improve the reasoning capabilities of smaller models? Table 2 demonstrates that leveraging LLMs reasoning capabilities using our framework can improve the reasoning results for all dataset types. Step-by-Step Solution. When human-annotated step-by-step solutions are available, training smaller models with LLM-generated CoT is not advantageous, as shown on GSM8K. This is to be expected since the annotation generated by an LLM is likely to be noisier and of lower quality than human-annotated data. However, the groundtruth step-by-step annotation can be leveraged to prompt an LLM to generate subquestions for the SOCRATIC COT approach, giving a performance | Iterative | Unified | | | | | | | | |---------------|---------------|-------------|----------|----------|-------|---------------|---------------|--------| | Dataset | Model | Answer Only | GT Steps | GT Facts | CoT | SocCoT | SocGT | SocCoT | | Small (124M) | 1.45 | 5.05 | - | 4.70 | 5.98 | 6.44 (↑ 20%) | 5.10 | | | GSM8K | Medium (355M) | 2.90 | 7.88 | - | 7.10 | 11.57 | 12.74 (↑ 38%) | 7.90 | | Large (774M) | 4.62 | 14.10 | - | 12.85 | 17.89 | 21.08 (↑ 33%) | 13.25 | | | GPT-3 (6B) | - | 21.00 | - | - | - | - | - | | | Medium (355M) | 54.10 | - | 52.02 | 55.01 | 52.05 | 60.31 (↑ 13%) | 52.05 | | | StrategyQA | Large (774M) | 61.10 | - | 62.80 | 55.90 | 61.32 | 66.40 (↑ 5%) | 59. 45 | | XL (1.5B) | 60.51 | - | 66.30 | 58.07 | 62.30 | 63.56 (↓ 4%) | 62.05 | | | Small (124M) | 2.15 | - | - | 5.35 | 6.79 | - | 5.82 | | | SVAMP | Medium (355M) | 4.80 | - | - | 17.30 | 18.99 | - | 17.62 | | Large (774M) | 7.40 | - | - | 23.60 | 18.14 | - | 17.45 | | boost of up to 38% when the LLM-generated subquestions are used at inference time. When the subquestions are learned by the QG model (Iterative SocCoT ), the accuracy of the student model decreases slightly but still improves over the stepby-step annotation without subquestions (17.89 vs. 14.10). Figure 5 shows a comparison of predictions generated by SocCoT models and a model trained on the GT step-by-step annotation. Unified SO-CRATIC COT performs similarly to training with the step-wise ground-truth annotation. We additionally include the score produced by GTP-3 6B to show that training with SOCRATIC COT can help a small model (GPT-2 large with 774M parameters) perform as well as a nearly 10x larger model fine-tuned with human annotated data. Supporting facts. On StrategyQA, we observe that the inclusion of ground-truth supporting facts in the fine-tuning procedure improves the performance of the small models. 
However, surprisingly, when the supporting facts are generated by GPT-3, their inclusion actually hurts performance (58.07 vs 60.51 for GPT-2 Large). We hypothesize that this is likely due to the imperfect factual knowledge provided by the LLM, which mars the quality of the supervision. We have observed that the GT supporting facts provided often do not represent a logical sequence of propositions leading to the final answer. This is likely the reason why decomposing ![6_image_0.png](6_image_0.png) the problem through subquestions based on such facts actually harms accuracy (see SocCoT column in Table 2). Instead, using the provided subquestions and using an LLM to generate the answers (representing coherent facts leading to the final answer) proves to be an effective strategy (60.31 vs. 52.02 for GPT-2 Medium). A more detailed comparison between our proposed approaches is presented in Figure 4. However, GPT-2 XL mod- ![7_image_0.png](7_image_0.png) | Models | Methodology | Accuracy | |----------------|---------------|--------------| | GPT-3 (1-shot) | CoT | 27.5 | | (175B) | Sub-ques | 47.1 (↑ 41%) | els perform well when trained on facts as unlike smaller models, larger models can encode more facts at once in their parameters, which assists in answering a factual question. Answers only. On the SVAMP dataset, which includes only final answers and no intermediate annotation, LLMs can be used to generate both the intermediate steps and the subquestions. Both the consideration of intermediate solutions without subquestions (CoT) and the consideration of intermediate solutions with subquestions (SocCoT ) lead to an improvement in performance. The trend here is similar to what was observed for StrategyQA, with SOCRATIC COT being more effective for the two smaller models but falling back to CoT for the larger model. Can SOCRATIC COT **be used as a prompting strategy?** We experimented with SOCRATIC COT as a prompting strategy. First, we prompted GPT-3 (175B) to decompose the main problem into simpler steps by formulating subquestions. Then, GPT-3 is used again to solve the sequence of subproblems in a single-shot setting with a problem decomposed into intermediate subquestions and solutions included in the prompt. The introduction of subquestioning boosts accuracy by over 40% compared to standard CoT prompting (Table 3). Other work (e.g., Wei et al. 2022b) has used a larger number of exemplars in the few-shot prompt, achieving higher overall accuracy. We limited our experiments to single-shot prompts due to budget constraints. ## 6 Ablation Studies In this Section, we describe additional analyses regarding specific components of the framework we propose, as well as negative results that we obtained with alternative strategies. $$\cdot\,{\bar{47.1}}\;{\overline{{(}}}\uparrow{\overline{{41}}}{\overline{{\%)}}}$$ ## How Good Are The Sub-Questioning Capabilities of a smaller model? We investigate in more detail the ability of a small model to decompose a problem by generating meaningful subquestions. We fine-tuned GPT-2 Large on the GPT-3 generated subquestions provided in the GSM8K dataset. We then evaluated the quality of the generated questions in terms of BLEU score (Post, 2018), BERT F1 score (Zhang et al., 2019), and by measuring for how many problems the number of questions generated by GPT-2 (\#Q) matches the number of GPT-3 annotated questions for a given problem. 
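These three metrics might be computed as sketched below, assuming sacrebleu (Post, 2018) and bert-score (Zhang et al., 2019) as the underlying packages and a per-problem pairing of generated and annotated questions; the aggregation details and the toy data are assumptions, not the authors' exact evaluation script.

```python
# Sketch of the question-generation metrics in Table 4 (toy data only).
import sacrebleu
from bert_score import score as bert_score

# One entry per problem: generated sub-questions vs. the annotated ones.
generated = [["How many bolts of white fiber does it take?",
              "How many bolts in total does it take?"],
             ["How many flowers did Katie pick in total?"]]
reference = [["How many bolts of white fiber does it take?",
              "How many bolts in total does it take?"],
             ["How many flowers did Katie pick in total?",
              "How many extra flowers did Katie pick?"]]

hyps = [" ".join(qs) for qs in generated]
refs = [" ".join(qs) for qs in reference]

bleu = sacrebleu.corpus_bleu(hyps, [refs]).score           # BLEU
_, _, f1 = bert_score(hyps, refs, lang="en")               # BERT F1
match = sum(len(g) == len(r) for g, r in zip(generated, reference)) / len(generated)  # "# Q"

print(f"BLEU={bleu:.1f}  BERT F1={f1.mean().item():.2f}  #Q match={match:.2f}")
```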
We found that the fine-tuned GPT-2 predicted an incorrect number of subquestions for the majority of problems (see Table 4, first row). Thus, following previous work on subquestion generation (Shridhar et al., 2022), we introduced a *guidance* mechanism that conditions the generation of subquestions for a problem P on the equations describing the intermediate solutions of P. This strategy improved the quality of the generated questions for all three metrics considered (Table 4, second row). To avoid the dependence on the step-by-step annotation of the equations for each problem P at inference time, we train an additional sequenceto-sequence model to predict, given P, the set of equations that lead to the solution of the problem. At inference time, the predictions for the guidance model are used to condition the generation by the QG model. Although the predicted equations often do not lead to the correct solution of the problem, they help the QG model to generate more meaning- | Methodology | BLEU | BERT F1 | # Q | |---------------|--------|-----------|-------| | No-guidance | 51.5 | 0.78 | 0.42 | | Guidance | 58.8 | 0.81 | 0.80 | ![8_image_1.png](8_image_1.png) Table 4: BLEU, BERT F1 and the number of questions (\# Q) comparison between the question generator model and the Socratic subquestions present in the GSM8K dataset using GPT2-large model. ![8_image_3.png](8_image_3.png) ful sub-questions. Figure 6 shows the overall accuracy of the GPT-2 student models (QA + QG) finetuned with SOCRATIC COT on the GSM8K data with and without equation conditioning provided by the guide model. We have extended this guidance mechanism to StrategyQA and SVAMP, where the generation of subquestions is conditioned on the number of facts (StrategyQA) or steps (SVAMP) needed to answer the problem. Eliminating the need for a subquestion module. We have experimented with an alternative training solution that does not involve a question-generation model. This strategy aims to improve the supervision for fine-tuning a small model through subquestioning, but without relying on the presence of subquestions at test time. The procedure consists of training the student model to generate the entire chain of steps leading to an intermediate answer. That is, when the sub-question q (1) is asked, the model is trained to generate the answer s (1), but when q (j)is asked, the model is trained to generate the chain of thought reasoning {s (1), s(2)*, . . . , s*(j)} (instead of just s (j)). This eliminates the need for the intermediate subquestions at inference time, as the model is trained to *implicitly* decompose the main problem into smaller reasoning steps. However, this method ![8_image_0.png](8_image_0.png) ![8_image_2.png](8_image_2.png) leads to significant performance degradation (results are reported in Table 5), highlighting the need for subquestions at inference time. Example outputs In Figures 5 and 7, we report example outputs predicted by GPT-2 models for a set of GSM8K and SVAMP problems. ## 7 Conclusion The chain-of-thought style of step-by-step reasoning has proven to be very effective for reasoning in LLMs. In this work, we propose ways to distill these reasoning capabilities into smaller models and suggest ways to further improve them by explicitly asking stepwise questions. We demonstrate the effectiveness of our proposed methodology on three popular multi-step reasoning datasets, and discuss cases where one method should be preferred over the other for different datasets. 
## Limitations In our work, we use only one solution from the LLM to distill information into the student model, and according to Wang et al. (2022), multiple subquestion-solution pairs can be sampled, and using majority voting, all pairs leading to the most frequent answer can be used to distill knowledge into the student models. Also, due to computational budget, we used a single prompt to compare the CoT and SOCRATIC COT and using more prompts (up to 8) might lead to a fairer comparison and better results (Wei et al., 2022b). We leave these experiments for the future. ## Ethical Considerations Although this work improves the reasoning capabilities of smaller models, the models are still not powerful enough to be used in sensitive settings such as education. We plan to release our code and model checkpoints, but the models must be used carefully by users, as many generative models, including ours, are prone to hallucination. ## Acknowledgements Alessandro Stolfo is supported by Armasuisse Science and Technology through a CYD Doctoral Fellowship. ## References Aida Amini, Saadia Gabriel, Peter Lin, Rik KoncelKedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math word problem solving with operation-based formalisms. arXiv preprint arXiv:1905.13319. Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? *Advances in neural information* processing systems, 27. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Jerome S Bruner. 1961. The act of discovery. *Harvard* educational review, 31:21–32. Kyunghyun Cho, Bart Van Merrienboer, Caglar Gul- ¨ cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. *arXiv preprint* arXiv:1406.1078. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Jacob Eisenstein, Daniel Andor, Bernd Bohnet, Michael Collins, and David Mimno. 2022. Honest students from untrusted teachers: Learning an interpretable question-answering pipeline from a pretrained language model. *arXiv preprint arXiv:2210.02498*. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies. *Transactions of the Association for Computational Linguistics (TACL)*. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015. Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531, 2(7). 
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 523–533. Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022. Large language models can self-improve. arXiv preprint arXiv:2210.11610. Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shus- ´ ter, Tianlu Wang, Qing Liu, Punit Singh Koura, et al. 2022. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. arXiv preprint arXiv:2212.12017. Tassilo Klein and Moin Nabi. 2019. Learning to answer by learning to ask: Getting the best of gpt-2 and bert worlds. *arXiv preprint arXiv:1911.02365*. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. *arXiv preprint* arXiv:2205.11916. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 271–281, Baltimore, Maryland. Association for Computational Linguistics. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. *arXiv* preprint arXiv:2206.14858. Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian, Baolin Peng, Yi Mao, et al. 2022. Explanations from large language models make small reasoners better. arXiv preprint arXiv:2210.06726. Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing English math word problem solvers. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 975–984, Online. Association for Computational Linguistics. Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language models. *arXiv preprint arXiv:2112.00114*. Andreas Opedal, Niklas Stoehr, Abulhair Saparov, and Mrinmaya Sachan. 2023. World models for math story problems. In *Findings of the Association* for Computational Linguistics: ACL 2023, Toronto, Canada. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are nlp models really able to solve simple math word problems? arXiv preprint arXiv:2103.07191. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! 
Leveraging Language Models for Commonsense Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics. Sudha Rao and Hal Daume III. ´ Answer-based Adversarial Training for Generating Clarification Questions. In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. *Transactions of the Association for Computational Linguistics*, 3:1–13. Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, and Mrinmaya Sachan. 2022. Automatic generation of socratic subquestions for teaching math word problems. *arXiv preprint* arXiv:2211.12835. Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Unsupervised commonsense question answering with self-talk. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4615–4629, Online. Association for Computational Linguistics. Charlie Snell, Dan Klein, and Ruiqi Zhong. 2022. Learning by distilling context. *arXiv preprint* arXiv:2209.15189. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adria Garriga-Alonso, et al. 2022. Beyond the ` imitation game: Quantifying and extrapolating the capabilities of language models. *arXiv preprint* arXiv:2206.04615. Alessandro Stolfo, Zhijing Jin, Kumar Shridhar, Bernhard Scholkopf, and Mrinmaya Sachan. 2023. ¨ A causal framework to quantify the robustness of mathematical reasoning with language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. *ArXiv*, abs/2203.11171. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan Wang, Jie Shao, and Ee-Peng Lim. 2020. Graph-totree learning for solving math word problems. 
Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. *arXiv preprint* arXiv:1904.09675. Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022. Leastto-most prompting enables complex reasoning in large language models. *ArXiv*, abs/2205.10625. Let's generate sub-questions for these problems. Use exactly one operation per step. — Q: Zoe was unboxing some of her old winter clothes . She found number0 boxes of clothing and inside each box there were number1 scarves and number2 mittens . How many pieces of winter clothing did Zoe have total ? SQ1: How many pieces of winter clothing did Zoe have in each box? A1: Zoe had <<+ number1 number2>> pieces of winter clothing in each box. SQ2: How many pieces of winter clothing did Zoe have total ? A2: Zoe had <<* number0 + number1 number2>> pieces of winter clothing in total. — Q: Katie picked number0 tulips and number1 roses to make flower bouquets . If she only used number2 of the flowers though , how many extra flowers did Katie pick ? SQ1: How many flowers did Katie pick in total? A1: Katie picked <<+ number0 number1>> flowers in total. SQ2: How many extra flowers did Katie pick ? A2: Katie picked <<- + number0 number1 number2>> extra flowers. — Q: Conner has number0 dollars in his bank account . Every month he spends number1 dollars . He does not add money to the account . How much money will Conner have in his account after number2 months ?, SQ1: How much money does Conner spend in total? A1: Conner spends <<* number1 number2>> dollars. SQ2: How much money will Conner have in his account after 8.0 months ? A2: After 8.0 months, Conner will have ¡¡- number0 * number1 number2>> dollars. For each of the following topics, generate intermediate answers to the subquestions leading to the final answer. — Topic: Albany, Georgia (City in Georgia, United States) Will the Albany in Georgia reach a hundred thousand occupants before the one in New York? Albany, GA has around 75,000 people. Albany, NY has almost 100,000 people. The difference is 100,000-75,000=25,000 The difference is 100,000-100,000=0 No, 25,000 is not smaller than 0. The final answer is NO. — Topic: The Police (English rock band) Could the members of The Police perform lawful arrests? Only law enforcement officers can perform lawful arrests. No, the members of The Police (rock band) are not law enforcement officers. The final answer is NO. — Topic: Wonder Woman (2017 film) (American superhero film directed by Patty Jenkins) Is a Boeing 737 cost covered by Wonder Woman (2017 film) box office receipts? The average cost of a US Boeing 737 plane is 1.6 million dollars. Wonder Woman (2017 film) grossed over 800 million dollars at the box office. Yes, 800 is larger than 1.6. The final answer is YES. Table 6: Exemplars included in the few-shot prompt for the decomposition of the problems from the ASDiv (upper row) and StrategyQA (lower row) datasets. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitation ✓ A2. Did you discuss any potential risks of your work? Ethical considerations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. 
## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3, Methodology ✓ B1. Did you cite the creators of artifacts you used? Section 4.2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Conclusion ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Our models are free to be used by anyone. We mention the limitations of our approach ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We used standard open source datasets ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.2 ## C ✓ **Did You Run Computational Experiments?** Section 4.3 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.3, Table 1 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. 
Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
li-etal-2023-alignsts
{A}lign{STS}: Speech-to-Singing Conversion via Cross-Modal Alignment
https://aclanthology.org/2023.findings-acl.442
The speech-to-singing (STS) voice conversion task aims to generate singing samples corresponding to speech recordings while facing a major challenge: the alignment between the target (singing) pitch contour and the source (speech) content is difficult to learn in a text-free situation. This paper proposes AlignSTS, an STS model based on explicit cross-modal alignment, which views speech variance such as pitch and content as different modalities. Inspired by the mechanism of how humans sing lyrics to a melody, AlignSTS: 1) adopts a novel rhythm adaptor to predict the target rhythm representation to bridge the modality gap between content and pitch, where the rhythm representation is computed in a simple yet effective way and is quantized into a discrete space; and 2) uses the predicted rhythm representation to re-align the content based on cross-attention and conducts a cross-modal fusion for re-synthesis. Extensive experiments show that AlignSTS achieves superior performance in terms of both objective and subjective metrics. Audio samples are available at \url{https://alignsts.github.io}.
# Alignsts: Speech-To-Singing Conversion Via Cross-Modal Alignment Ruiqi Li1, Rongjie Huang1, Lichao Zhang1, Jinglin Liu2**, Zhou Zhao**1∗ 1Zhejiang University {ruiqili,rongjiehuang,zju_zlc,zhaozhou}@zju.edu.cn 2ByteDance liu.jinglin@bytedance.com ## Abstract The speech-to-singing (STS) voice conversion task aims to generate singing samples corresponding to speech recordings while facing a major challenge: the alignment between the target (singing) pitch contour and the source (speech) content is diffcult to learn in a textfree situation. This paper proposes AlignSTS, an STS model based on explicit cross-modal alignment, we 1) adopt a novel rhythm adaptor to predict the target rhythm representation to bridge the modality gap between content and pitch, where the rhythm representation is disentangled in a simple yet effective way and is quantized into a discrete space; and 2) leverage the cross-modal aligner to re-align the content features explicitly according to the predicted rhythm and conduct a cross-modal fusion for re-synthesis. Experimental results show that AlignSTS achieves superior performance in terms of both objective and subjective metrics. Audio samples are available at https://alignsts.github.io. ## 1 Introduction Speech-to-singing (STS) voice conversion (Saitou et al., 2004; Cen et al., 2012; Parekh et al., 2020) aims to transfer speech samples into the corresponding singing samples, with the timbre identity and phoneme information unaltered. An STS system takes a speech sample and a target melody as conditions and then generates a high-quality singing sample following the target musical melody. Speechto-singing research is important for human voice study and useful for practical applications such as computer-aided music production or musical entertainment. Researchers have developed three major STS approaches: 1) Model-based approaches(Saitou et al., 2004, 2007) use phone-score synchronization information to manually align the phonemes and the target musical notes with artifcial control ∗* Corresponding author models. 2) Template-based STS (Cen et al., 2012; Vijayan et al., 2017, 2018) requires an available high-quality reference vocal input, which will be aligned with the input speech for subsequent musical feature extractions. The alignment is the key part and is mostly based on dynamic time warping (DTW). 3) Style transfer approach (Parekh et al., 2020) views STS as a style-transfer problem. This class of methods considers the specifc properties that are transformed during conversion as "style". Parekh et al. (2020) stretch the input speech to the same length as the target F0 contour and concatenate the latent features to fuse the asynchronous representations. Wu and Yang (2020) serve as a continuation and extension of their prior work by leveraging boundary-equilibrium GAN (BEGAN) (Berthelot et al., 2017). Despite their recent success, however, the style information is complex and is composed of multiple entangled features in the time domain (the duration information and the temporal length) and the frequency domain (the pitch information). Simply stretching the representations temporally or applying implicit self-attention can cause alignment problems. In a larger sense, human voice information (such as speech (Huang et al., 2023b) and singing (Huang et al., 2021, 2022a)) is composed of several variance information, each controlling a specifc sensory modality. 
Plenty of arts focus on the decomposition and resynthesis of speech signal (Qian et al., 2020; Chan et al., 2022; Choi et al., 2021, 2022; Huang et al., 2022b; Huang et al.). They managed to roughly decompose speech signals into components like linguistic content, pitch, rhythm, timbre identity, etc. By manipulating any of these components, one can resynthesize a customized speech waveform. In the context of STS, the components that will be manipulated are 1) the frequency modality, namely the pitch information; and 2) the duration modality, or rhythm. The challenge remains, however, since the rhythm information can be indeterminate given only the content and the target pitch information. In this paper, we present a novel approach AlignSTS based on modality disentanglement and crossmodal alignment. To tackle the alignment problem, we introduce a novel rhythm representation to bridge the modality gap between content and pitch. A rhythm adaptor is designed to predict the target rhythm representation, which is used to guide the content features to perform the alignment and the cross-modal fusion. Further, we explore speech-to-sing conversion in zero-shot scenarios, where we train the model with singing data in a self-supervised manner and test it on unseen speech data. We categorize AlignSTS as one of the style transfer approaches since we only need a source speech and a target pitch contour for conversion. Experimental results demonstrate signifcant performance improvements over baseline models. The main contributions of this work are summarized as follows: - We leverage the temporal modality, rhythm, to bridge the modality gap between the speech content and the target pitch. The rhythm representation is carefully designed and quantized into discrete space to model temporal dynamic states. - We propose AlignSTS, an STS model based on cross-modal alignment, which predicts the target rhythm representation and uses it to conduct an explicit cross-modal alignment to re-align the content information. - Experimental results demonstrate that AlignSTS achieves state-of-the-art results in both objective and subjective evaluations. Over baseline models, AlignSTS reaches an absolute improvement of 0.39 in MOS for overall quality and 0.36 for prosody naturalness. ## 2 Related Works 2.1 Voice Conversion Voice conversion (VC) focuses on changing the timbre identity of an utterance to a target speaker while keeping the content information (phoneme sequence intact). Inspired by image style transfer, CycleGAN-VC (Kaneko and Kameoka, 2018) combines CycleGAN (Zhu et al., 2017) with gated CNN and identity-mapping loss to capture sequential and hierarchical structures. Similarly, StarGAN-VC (Kameoka et al., 2018) adapts StarGAN (Choi et al., 2018) and manages to train without parallel utterances and pay more attention to real-time processing. Apart from GANs, conditional variational autoencoders (CVAE) are also an important class of approaches. VAE-VC (Hsu et al., 2016) uses the encoder of a VAE to learn the speaker-independent phoneme representations, thus disentangling the timbre identity. ACVAEVC (Kameoka et al., 2019) notices that VAEs easily ignore the attribute class label input, i.e. the speaker identity, which therefore utilizes an auxiliary speaker classifer. AutoVC (Qian et al., 2019) carefully designs a bottleneck mechanism within a simple autoencoder to achieve zero-shot many-tomany voice conversion with non-parallel data. 
## 2.2 Speech Representation Disentanglement Human speech (Huang et al., 2023a, 2022d) is a severely complicated and comprehensive information stream, where a number of latent units entangled with each other such as content, pitch, timbre, etc. The disentanglement of the speech signal is an attempt to learn factorized and even interpretable generative factors (Bengio et al., 2013) for further application, like style transfer or domain adaptation. NANSY (Choi et al., 2021) manipulates disentangled factors such as content, pitch, and speed of speech signal, where content and pitch are decomposed by wav2vec 2.0 (Baevski et al., 2020) and Yingram, respectively. Information bottleneck (Qian et al., 2019) is also a popular way to disentanglement. Following AutoVC, SpeechSplit (Qian et al., 2020) introduces three carefully designed information bottlenecks to improve decomposition. VoiceMixer (Lee et al., 2021) leverages a similarity-based information bottleneck and adversarial feedback to disentangle content and voice style. SpeechSplit 2.0 (Chan et al., 2022) alleviates the bottleneck tuning in SpeechSplit by applying effcient signal processing techniques on encoder inputs. ## 3 Alignsts In this section, we frst defne and formulate the problem of speech-to-singing (STS) voice conversion. We then present the information perturbation methods, the rhythm modality modeling, and the cross-modal fusion mechanism. Finally, we introduce the overall architecture of AlignSTS and the training/inference procedure. ![2_image_0.png](2_image_0.png) ## 3.1 Problem Formulation Let Ssp and Ssg denote the spectrograms extracted from speech and singing signals. We assume that vocal signals are a comprehensive fusion of several variance information, i.e., content, pitch, rhythm, and timbre. Therefore, this process can be formally defned in Equation 1, where csp, isp, fsp, and rsp denote the representations of content, timbre identity, pitch, and rhythm information of speech data. g(·) denotes the multi-modality fusion. $$\begin{array}{l}{{S_{\mathrm{sp}}=g(c_{\mathrm{sp}},i_{\mathrm{sp}},f_{\mathrm{sp}},r_{\mathrm{sp}})}}\\ {{\widehat{S_{\mathrm{sg}}}=f_{\theta}(c_{\mathrm{sp}},i_{\mathrm{sp}},f_{\mathrm{sg}},r_{\mathrm{sg}}(c_{\mathrm{sp}},f_{\mathrm{sg}}))}}\end{array}\tag{1}$$ The problem of STS is to learn a neural network fθ, such that if we switch the pitch component fsp to target pitch feature fsg, the corresponding singing spectrogram Scsg will be generated while preserving the content and timbre intact, as shown in Equation 2. Note that the rhythm modality implies temporal information, which will also be infuenced and should be adapted according to the pitch and content. rsg is the adapted rhythm representation and is generated conditioned on fsg and csp. ## 3.2 Method Overview AlignSTS treats speech and singing signals as a comprehensive fusion of several variance information, which can be further regarded as different sensory modalities. Pitch and rhythm features are the main modalities to convert in STS, which can be disentangled in parallel. However, the synthesis logic of variance information needs to be carefully designed during conversion. A phoneme sequence and a pitch contour seem to be uncorrelated with each other at frst glance, yet it is highly possible for a human to create a suitable alignment and produce a singing melody. 
The mechanism behind the human behavior is: (1) fnd an appropriate sequence of onset and offset timings of phonemes and notes, or as known as rhythm, according to the lyrics and the melody; and (2) place the time-stretched phonemes in order according to the rhythm and combine them with the melody to produce the singing result. Inspired by this mechanism, AlignSTS: 1. Decompose the input speech signal into several disentangled variance information. 2. According to the altered speech component, i.e., the pitch contour, adapt the speech component that controls temporal duration information, namely the rhythm representation. 3. Perform a cross-modal alignment to re-align the content according to the adapted rhythm representations and carry out a modality fusion to combine the variance information. ## 3.3 Information Perturbation Speech and singing samples are fully complex and can be decomposed into variance information such as content, pitch, rhythm, and timbre. Each feature needs to be extracted and disentangled from the other, which can be achieved by certain information perturbations. - **Linguistic Content.** We use a wav2vec 2.0 (Baevski et al., 2020) model pre-trained and fnetuned on 960 hours of Librispeech on 16kHz sampled speech audio * to extract the linguistic content. It is shown that extracted features from speech SSP models such as wav2vec 2.0 can be applied to downstream tasks like ASR, indicating that the extracted features should provide rich isolated linguistic information. Prior works like SpeechSplit (Qian et al., 2020) and AutoPST (Qian et al., 2021) utilize random sampling to perturb the rhythm information within the content, given their input and output audios are the same speech. Using the paired speech and singing data, we do not need extra random sampling but leverage the natural discrepancy between speech and singing to perturb the rhythm within the content information. - **Pitch.** We extract the fundamental frequency contour F0 of singing data as pitch information. The fundamental frequency contour is then quantized to 256 possible values uniformly. The F0 contour contains minimum rhythm information, given a common singing situation where one phoneme corresponds to several musical notes (or vice versa), making the rhythm theoretically indeterminate. - **Rhythm.** Rhythm is a crucial speech component that controls the overall speed, the duration of each phoneme, and the pattern of the onset and offset of syllables. Therefore, the rhythm modality provides the duration information for both content and pitch. A good rhythm representation creates a "fll in the blank" mechanism (Qian et al., 2020) for content information to realign. Meanwhile, the patterns of the rhythms *https://huggingface.co/facebook/wav2vec2-base-960h of speech and singing voices differ signifcantly, the intensity of singing voices is generally more fuctuating and distinct than speech. Therefore, we directly utilize the time-domain energy contour et of singing samples, which is computed by taking the L2-norm of all the frequencies for each time step. To eliminate the relative fuctuation and leave only the rhythm information, we further normalize the energy contour et using the Sigmoid function σ(·): $$\mathbf{r}_{t}=\sigma\left(\beta\times{\frac{\mathbf{e}_{t}-\operatorname*{mean}(\mathbf{e})}{\operatorname*{std}(\mathbf{e})+\epsilon}}\right)\qquad{\mathrm{(3)}}$$ where ϵ is used to avoid division-by-zero error and β is a hyperparameter used to control the normalizing effect. 
rtis the resulting rhythm representation. We further stabilize the representation by applying Gaussian fltering with a standard deviation σ of 1.0. This scalar representation disentangles the rhythm information the most thoroughly while preserving the duration information. A visualization is shown in Figure 2. ![3_image_0.png](3_image_0.png) - **Timbre.** We leverage an open-source speaker identity encoding API Resemblyzer †to extract the timbre representations. Resemblyzer generates a 256-dimensional vector that summarizes the characteristics of the given voice spoken. ## 3.4 Rhythm Modality Modeling The rhythm modality of the singing signal can be considered as a series of discrete temporal dynamic states, such as attack, sustain, transition, silence, etc. In automatic music transcription (AMT), the recognition of states refers to onset/offset detection.(Chang and Lee, 2014; Fu and Su, 2019) Similarly, traditional ASR methods like Kaldi (Povey †https://github.com/resemble-ai/Resemblyzer et al., 2011) model the intra- and inter-states of phonemes to improve distinguishability. From this perspective, the rhythm modality is a "softened" version of duration with more intermediate states. The onset and offset states (intermediate states) of singing may last longer than that of speech (consider the fade-out effect). Inspired by this, we adopt a **Vector Quantization** (VQ) module to quantize the continuous rhythm features to model these temporal states and form an information bottleneck (Van Den Oord et al., 2017). The designed discrete latent embedding space can be denoted as e ∈ R K×D where K is the number of clustered categories and D is the dimension of each latent embedding vector ek ∈ R D, k ∈ 1, 2*, ..., K*. A commitment loss (Van Den Oord et al., 2017) is used to constrain the input representation to commit to a discrete embedding: $${\mathcal{L}}\mathbf{c}=\|z_{e}(x)-\mathbf{s}\mathbf{g}[e]\|_{2}^{2}$$ 2(4) where ze(·) denotes the VQ module and sg denotes the stop-gradient operator. To model the target rhythm representations conditioned on both input content information and target pitch contour, we design a **Cross-Attention** module by adopting the Scaled Dot-Product Attention (Vaswani et al., 2017). The encoded pitch representation XP is used as the query, and the encoded content representation XC is used as both the key and the value. A positional encoding embedding is added to both representations before the attention module. The attention mechanism can be formulated as: $${\mathrm{Attention}}(Q,K,V)$$ $$\begin{array}{l}{{\mathrm{{\bf~r(\varphi,\Pi,\nu)}~}}}\\ {{\mathrm{{\bf~=Attention(}X_{P},X_{C},X_{C})~}}}\\ {{\mathrm{{\bf~=Softmax\left(\frac{X_{P}X_{C}^{T}}{\sqrt{d}}\right)X_{C}~}}}}\end{array}\quad\mathbf{(5)}$$ where XP and XC are frst projected to the query, the key, and the value representations in practice. Since the target rhythm features are not available during inference, we only activate the VQ module and use the discrete embeddings generated from it for downstream modules during the training stage. The cross-attention module and a stack of convolution layers are combined to serve as the Rhythm Predictor, which takes the generated discrete rhythm embeddings from the VQ module as the training target. 
A cross-entropy (CE) loss is applied to train the rhythm predictor so it can generate the desired rhythm embeddings during inference: $${\mathcal{L}}_{\mathrm{R}}=-{\frac{1}{T}}\sum_{t=1}^{T}\sum_{c=1}^{K}y_{t,c}\log({\hat{y}}_{t,c})\qquad{\mathrm{(6)}}$$ where T denotes the number of time frames. yt,c denotes the rhythm embedding indices generated from the VQ module, where yt,c = 1 if c = kt and ktis the rhythm index at time step t. yˆt,c denotes the predicted rhythm indices. ## 3.5 Cross-Modal Alignment $$(4)$$ With the target rhythm sequence generated, we design a cross-modal aligner to place the linguistic features along the time axis according to the target rhythm. This aligner uses rhythm information to bridge the gap between content and pitch modalities. We simply use a stack of two cross-attention layers mentioned before to complete this task. The rhythm embedding XR generated from the rhythm adaptor is used as the query, and the encoded content representation XC is again used as both the key and value. The alignment using the cross-attention mechanism can be considered a soft-selection operation over the linguistic content representations XC along the time axis at each target time step. Ideally, the resulting attention weight matrix should show a monotonic pattern and the alignment path should be concentrated and nearly diagonal. This requires extra constraints and regulations to make sure the model does not bypass the attention mechanism and simply interpolate the input content to the same length as the required rhythm representation. We mainly apply two techniques: windowing (Chorowski et al., 2015) and guided attention loss (Tachibana et al., 2018), which is described in detail in Appendix B. With each variance information re-aligned, the content, the rhythm, and the pitch representations should have the same temporal length. We apply the inter-modality fusion across these features by element-wise vector addition. Furthermore, we involve the timbre information by adding the timbre embeddings extracted earlier. ## 3.6 Architecture AlignSTS mainly consists of four modules: the encoders, the rhythm adaptor, the cross-modal aligner, and the mel decoder. The overall architecture of AlignSTS is presented in Figure 1. The detailed description and the overall hyperparameter setting are listed in Appendix A. To accelerate the re-synthesis process and improve the audio quality, we adopt the teacher model of ProDiff (Huang et al., 2022c), a 4-step generatorbased diffusion model, to be the mel decoder. A generator-based diffusion model parameterizes the denoising model by directly predicting the clean data, instead of estimating the gradient of data density. Therefore, the generator-based method bypasses the diffculty of predicting sample xt using a single network at different diffusion steps t, which allows us to train the decoder with a reconstruction loss. We use two objectives to be the reconstruction loss: - **Mean Absolute Error (MAE).** We apply MAE at each random term of the diffusion step t: $${\mathcal{L}}_{\mathrm{MAE}}=\left\|{\mathbf{x}}_{\theta}\left(\alpha_{t}{\mathbf{x}}_{0}+{\sqrt{1-\alpha_{t}^{2}}}{\boldsymbol{\epsilon}}\right)-{\mathbf{x}}_{0}\right\|\tag{7}$$ where αt denotes a derived constant that αt = i=1 √1 − βi, in which βtis the predefned fxed Q t noise schedule at diffusion step t. ϵ is randomly sampled and ϵ ∈ N (0, I). x0 denotes the clean data and xθ denotes the denoised data sample predicted by the denoising neural networks θ. 
- **Structural Similarity Index (SSIM) Loss.** We adopt SSIM (Wang et al., 2004), one of the stateof-the-art perceptual metrics to measure image quality, to tackle the problem of over-smoothness (Ren et al., 2022): $$\mathcal{L}_{\mathrm{SSIM}}=1-\tag{8}$$ $$\mathrm{SSIM}\left(\mathbf{x}_{\theta}\left(\alpha_{t}\mathbf{x}_{0}+\sqrt{1-\alpha_{t}^{2}}\mathbf{\epsilon}\right),\mathbf{x}_{0}\right)$$ where SSIM(·) is the SSIM function and is between 0 and 1. ## 3.7 Training And Inference The fnal loss of AlignSTS consists of the following loss terms: 1) the commitment loss of VQ module LC; 2) the CE loss of rhythm predictor LR; 3) the guided loss for cross-attention Lattn; 4) the MAE reconstruction loss LMAE; and 5) the SSIM loss LSSIM. During the training stage, the crossmodal aligner takes the rhythm embeddings generated from the VQ module as input directly. At the same time, the rhythm predictor is trained to predict the correct indices of quantized vectors for rhythm embeddings using CE loss. During the inference stage, the VQ module is deactivated and the predicted rhythm indices from the rhythm predictor are used to look up the embeddings for subsequent modules. ## 4 Experiments 4.1 Experimental Setup 4.1.1 Dataset We utilize a subset of the PopBuTFy database (Liu et al., 2022) as our dataset. PopBuTFy is originally used for the singing voice beautifying (SVB) task, which consists of paired amateur and professional singing recordings. Additionally, we collected and annotated the speech version of a subset of PopBuTFy to create a paired speech and singing dataset. In all, the dataset consists of 152 English pop songs (∼5.5 hours in total) and the respective speech recordings (∼3.7 hours in total) from 16 singers. More details are listed in Appendix C. ## 4.1.2 Implementation Details We use mel-spectrograms extracted from singing samples to be the training target. We transform the raw waveform with the sampling rate of 24000 Hz into mel-spectrograms with window size 512 and hop size 128. We extract the fundamental frequency contour F0 using Parselmouth (Jadoul et al., 2018; Boersma and Weenink, 2021) as pitch information. In addition, we remove all the silent frames to accelerate the training process. The output mel-spectrograms are transformed into audio waveforms using a HiFi-GAN vocoder (Kong et al., 2020) trained with singing data in advance. More details are listed in Appendix A. ## 4.1.3 Training And Evaluation We train the proposed model on a single NVIDIA GeForce RTX 3090 GPU with a batch size of 20 sentences for 200k steps. The performance evaluation consists of two parts, objective and subjective evaluations, respectively. - **Objective evaluation**. Following (Parekh et al., 2020), we use log-spectral distance (LSD) and F0 raw chroma accuracy (RCA) using mir_eval (Raffel et al., 2014) as the objective metrics. LSD is computed by taking the average of the Euclidean distance between the predicted log-spectrogram and the ground truth recordings. For RCA, we set the maximum tolerance deviation as 50 cents. In addition, we design a new evaluation metric, rhythm representation distance (RRD), to measure rhythm reconstruction performance by computing the Euclidean distance between the rhythm representations described in Equation 3. - **Subjective evaluation**. We conducted crowdsourced mean opinion score (MOS) listening tests for subjective evaluation. Specifcally, MOS-Q indicates the overall quality of the audio and MOS-P indicates the naturalness and coherence of prosody. 
The metrics are rated from 1 to 5 and reported with 95% confidence intervals.

## 4.1.4 Baseline Models

We compare the quality of the generated audio samples of AlignSTS with other approaches, including 1) *GT Mel*, in which we first convert the reference audio into mel-spectrograms and then convert them back to audio using HiFi-GAN; 2) (Parekh et al., 2020), an STS model based on an encoder-decoder framework; 3) *SpeechSplit 2.0 (w/o SE)* (Chan et al., 2022), where we train the model conditioned only on the target pitch contour, while the rhythm input is implemented by interpolating the source spectral envelope (from speech) to match the length of the pitch contour (in both SpeechSplit 2.0 baselines, we interpolate the content input to match the target length); 4) *AlignSTS (zero-shot)*, where we conduct zero-shot STS and test the model on unseen speech samples; and 5) *AlignSTS (GAN)*, where we change the diffusion mel decoder to the decoder of FastSpeech 2 (Ren et al., 2020), one of the SOTA approaches of non-autoregressive text-to-speech (NAR-TTS), combined with a multi-window discriminator (Wu and Luan, 2020). In addition, since SpeechSplit 2.0 is not originally designed for STS tasks, we conduct extra experiments on the baseline SpeechSplit 2.0 (w/ SE) in Appendix D, where we involve the target rhythm representations.

## 4.2 Main Results

The results are shown in Table 1. The quality of GT Mel is the upper limit of STS systems. In both objective and subjective evaluations, AlignSTS outperforms the baseline systems by a large margin. Moreover, the results of RCA and MOS-P are better than those of LSD and MOS-Q, which indicates that the target pitch contour condition possesses rich melody and prosody information, making melody transfer much easier than phoneme modeling. The main challenge remains in the coherence and recognizability of phonemes. The results of RRD show the reconstruction performance of the rhythm representation, demonstrating that AlignSTS with explicit rhythm modeling does the best job. In addition, the results indicate that RRD can be a valid metric for rhythm modeling.

A visualization of the output mel-spectrograms is shown in Figure 3. The effectiveness of rhythm modality modeling is clearly demonstrated: 1) (Parekh et al., 2020) barely expresses the dynamics of the melody, and the phonemes are scarcely distinguishable; 2) the voiced parts are clustered temporally in *SpeechSplit 2.0 (w/o SE)*, but the phonemes are still unrecognizable and the formants are disordered; and 3) AlignSTS successfully re-aligns the phonemes in order. However, all of them reconstruct the pitch information to various degrees. We set an additional baseline, *AlignSTS (GAN)*, by changing the diffusion decoder to a GAN-based decoder. The results indicate the superior performance of diffusion models in singing voice synthesis.

## 4.3 Ablation Study

As shown in Table 2, we conduct ablation studies to demonstrate the effectiveness of several designs in AlignSTS, including the rhythm adaptor, the cross-modal alignment, and the skip-connection of the pitch representation. We conduct CMOS-Q (comparative mean opinion score of quality) and CMOS-P (comparative mean opinion score of prosody) evaluations on each setting: 1) *w/o RA*: we stretch the speech rhythm representations defined in Equation 3 using linear interpolation and use them for the subsequent cross-modal alignment.
2) *w/o CM*: we drop the cross-modal alignment operation, stretch the content representation to the same length as the pitch contour and the adapted rhythm, and simply fuse them together using element-wise addition. 3) *w/o F0*: we cut off the skip-connection of the pitch representation in the fusion, i.e., we only feed the combination of the adapted rhythm and the aligned content to the mel decoder.

The results demonstrate a significant loss of performance when dropping any module. Specifically, they indicate that the rhythm adaptor plays an important role in modeling phonemes: replacing the rhythm adaptor ends up with a singing melody with unintelligible syllables. Removing the cross-modal alignment operation degrades both the audio quality and the prosody naturalness, but the presence of the adapted rhythm representation allows the mel decoder to implicitly re-align the content as thoroughly as possible, resulting in a slightly smaller loss compared to *w/o RA*. As expected, the removal of the F0 skip-connection drastically drops the prosody naturalness.

Figure 3: Visualization of the output mel-spectrogram of each system. (a) (Parekh et al., 2020); (b) *SpeechSplit 2.0 (w/o SE)*; (c) *AlignSTS (ours)*; (d) *GT Mel*.

| Method | LSD ↓ | RCA ↑ | RRD ↓ | MOS-Q ↑ | MOS-P ↑ |
|--------------------------|--------|--------|--------|-----------|-----------|
| GT Mel | 2.8974 | 0.9959 | 0.3693 | 4.04±0.10 | 4.18±0.17 |
| (Parekh et al., 2020) | 7.3613 | 0.9218 | 0.7865 | 2.86±0.12 | 2.91±0.11 |
| SpeechSplit 2.0 (w/o SE) | 5.7681 | 0.9870 | 0.8262 | 3.19±0.06 | 3.45±0.13 |
| AlignSTS (GAN) | 5.4926 | 0.9875 | 0.5709 | 3.41±0.11 | 3.76±0.17 |
| AlignSTS (ours) | 5.0129 | 0.9934 | 0.5366 | 3.58±0.19 | 3.81±0.09 |
| AlignSTS (zero-shot) | 5.6607 | 0.9871 | 0.5693 | 3.29±0.08 | 3.46±0.11 |

Table 1: The objective and subjective evaluation results of STS systems.

| Setting | CMOS-Q | CMOS-P |
|-----------|----------|----------|
| AlignSTS | 0.00 | 0.00 |
| w/o RA | -0.34 | -0.25 |
| w/o CM | -0.27 | -0.22 |
| w/o F0 | -0.37 | -0.76 |

Table 2: Ablation study results. RA denotes the rhythm adaptor, CM denotes the cross-modal alignment, and F0 denotes the final skip-connection of the pitch representation.
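For reference, the following minimal sketch illustrates how the objective metrics used above (LSD, RCA with a 50-cent tolerance via mir_eval, and the proposed RRD) could be computed. The array layouts, the frame-wise aggregation of RRD, and the exact mir_eval calls are our assumptions rather than the authors' released evaluation code.

```python
import numpy as np
import mir_eval


def log_spectral_distance(pred_log_mel, gt_log_mel):
    # LSD: average Euclidean distance between predicted and ground-truth
    # log-spectrogram frames; inputs assumed to be [frames, bins] arrays.
    n = min(len(pred_log_mel), len(gt_log_mel))
    return float(np.mean(np.linalg.norm(pred_log_mel[:n] - gt_log_mel[:n], axis=-1)))


def raw_chroma_accuracy(ref_f0, est_f0, hop_time, cent_tolerance=50):
    # RCA via mir_eval with the 50-cent tolerance used in the paper.
    # ref_f0 / est_f0 are F0 contours in Hz, with 0 marking unvoiced frames.
    ref_t = np.arange(len(ref_f0)) * hop_time
    est_t = np.arange(len(est_f0)) * hop_time
    ref_v, ref_c, est_v, est_c = mir_eval.melody.to_cent_voicing(
        ref_t, ref_f0, est_t, est_f0)
    return mir_eval.melody.raw_chroma_accuracy(
        ref_v, ref_c, est_v, est_c, cent_tolerance=cent_tolerance)


def rhythm_representation_distance(pred_rhythm, gt_rhythm):
    # RRD: Euclidean distance between the rhythm representations of
    # Equation 3; averaging over frames is our assumption on the aggregation.
    n = min(len(pred_rhythm), len(gt_rhythm))
    return float(np.mean(np.linalg.norm(pred_rhythm[:n] - gt_rhythm[:n], axis=-1)))
```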
## 4.4 Zero-Shot Speech-to-Singing Conversion

We conduct extended experiments on zero-shot STS given only the singing samples and test the model on unseen speech samples. Specifically, we use the identical model architecture and training pipeline, but carry out a singing-to-singing task and train the model to reconstruct singing samples from the proposed dataset. During the inference stage, we input the unseen speech signal and the target pitch contour to generate the corresponding singing samples. The results are also shown in Table 1, where the task is denoted as *AlignSTS (zero-shot)*. AlignSTS demonstrates great potential in zero-shot STS.

## 5 Conclusion

In this work, we presented AlignSTS, a speech-to-singing model based on modality disentanglement and cross-modal alignment. To achieve better voice quality and improve interpretability, we decomposed the input speech waveform into four kinds of variance information and proposed a novel cross-modal fusion mechanism. Specifically, we designed a rhythm adaptor to adjust the rhythm representation to deal with the altered pitch modality, and a cross-modal aligner to re-align the content representation. Finally, we conducted a cross-modal fusion to combine the different components. Experimental results demonstrated that AlignSTS achieved superior quality and naturalness compared with several baselines. Ablation studies demonstrated that each design in AlignSTS was effective, and extensive experiments showed the great potential of AlignSTS in zero-shot STS. We envisage that our work could serve as a basis for future STS studies.

## 6 Limitations

Research on speech-to-singing conversion is important for the study of the human voice and useful for practical applications such as computer-based music production or entertainment. However, current STS approaches require a fine-grained target F0 contour as an input condition, which is not always available. In addition, the F0 contour of a singing utterance often possesses rich speaker-related information, which still needs further disentanglement, and fine-tuning F0 contours in real applications brings significant extra work. One of our future directions is to simplify the input conditions, e.g., by using musical scores. Furthermore, our preliminary attempt at the zero-shot STS task may open up a promising direction.

Beyond positive applications, STS systems may face ethical concerns. With the development of speech/singing voice synthesis technology, the cost of faking an utterance of a specific individual is gradually declining. Researchers need to give further consideration to the regulation and recognition of speech/singing voice synthesis.

## 7 Acknowledgement

This work was supported in part by the National Key R&D Program of China under Grant No.2022ZD0162000, and the National Natural Science Foundation of China under Grant No.62222211, Grant No.61836002 and Grant No.62072397.

## References

Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449–12460.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828.

David Berthelot, Thomas Schumm, and Luke Metz. 2017. BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717.

Paul Boersma and David Weenink. 2021. Praat: doing phonetics by computer [Computer program]. Version 6.1.38, retrieved 2 January 2021 from http://www.praat.org/.

Ling Cen, Minghui Dong, and Paul Chan. 2012. Template-based personalized singing voice synthesis. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4509–4512. IEEE.

Chak Ho Chan, Kaizhi Qian, Yang Zhang, and Mark Hasegawa-Johnson. 2022. SpeechSplit2.0: Unsupervised speech disentanglement for voice conversion without tuning autoencoder bottlenecks. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6332–6336. IEEE.

Sungkyun Chang and Kyogu Lee. 2014. A pairwise approach to simultaneous onset/offset detection for singing voice using correntropy. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 629–633. IEEE.

Hyeong-Seok Choi, Juheon Lee, Wansoo Kim, Jie Lee, Hoon Heo, and Kyogu Lee. 2021. Neural analysis and synthesis: Reconstructing speech from self-supervised representations. Advances in Neural Information Processing Systems, 34:16251–16265.
Hyeong-Seok Choi, Jinhyeok Yang, Juheon Lee, and Hyeongju Kim. 2022. Nansy++: Unifed voice synthesis with neural analysis and synthesis. arXiv preprint arXiv:2211.09407. Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. 2018. Stargan: Unifed generative adversarial networks for multidomain image-to-image translation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8789–8797. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. *Advances in neural information processing systems*, 28. Zih-Sing Fu and Li Su. 2019. Hierarchical classifcation networks for singing voice segmentation and transcription. In *Proceedings of the 20th International* Society for Music Information Retrieval Conference (ISMIR 2019), pages 900–907. Chin-Cheng Hsu, Hsin-Te Hwang, Yi-Chiao Wu, Yu Tsao, and Hsin-Min Wang. 2016. Voice conversion from non-parallel corpora using variational auto-encoder. In *2016 Asia-Pacifc Signal and Information Processing Association Annual Summit and* Conference (APSIPA), pages 1–6. IEEE. Rongjie Huang, Feiyang Chen, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. 2021. Multi-singer: Fast multi-singer singing voice vocoder with a largescale corpus. In *Proceedings of the 29th ACM International Conference on Multimedia*, pages 3945– 3954. Rongjie Huang, Chenye Cui, Feiyang Chen, Yi Ren, Jinglin Liu, Zhou Zhao, Baoxing Huai, and Zhefeng Wang. 2022a. Singgan: Generative adversarial network for high-fdelity singing voice generation. In Proceedings of the 30th ACM International Conference on Multimedia, pages 2525–2535. Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. 2023a. Make-an-audio: Text-to-audio generation with prompt-enhanced diffusion models. *arXiv preprint arXiv:2301.12661*. Rongjie Huang, Max WY Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, and Zhou Zhao. 2022b. Fastdiff: A fast conditional diffusion model for high-quality speech synthesis. *arXiv preprint arXiv:2204.09934*. Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, et al. 2023b. Audiogpt: Understanding and generating speech, music, sound, and talking head. *arXiv* preprint arXiv:2304.12995. Rongjie Huang, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. Generspeech: Towards style transfer for generalizable out-of-domain text-to-speech. In Advances in Neural Information Processing Systems. Rongjie Huang, Zhou Zhao, Huadai Liu, Jinglin Liu, Chenye Cui, and Yi Ren. 2022c. Prodiff: Progressive fast diffusion model for high-quality text-to-speech. In *Proceedings of the 30th ACM International Conference on Multimedia*, pages 2595–2605. Rongjie Huang, Zhou Zhao, Jinglin Liu, Huadai Liu, Yi Ren, Lichao Zhang, and Jinzheng He. 2022d. Transpeech: Speech-to-speech translation with bilateral perturbation. *arXiv preprint arXiv:2205.12523*. Yannick Jadoul, Bill Thompson, and Bart de Boer. 2018. Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics, 71:1–15. Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, and Nobukatsu Hojo. 2018. Stargan-vc: Non-parallel many-to-many voice conversion using star generative adversarial networks. In *2018 IEEE Spoken Language Technology Workshop (SLT)*, pages 266–273. IEEE. Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, and Nobukatsu Hojo. 2019. 
Acvae-vc: Non-parallel voice conversion with auxiliary classifer variational autoencoder. *IEEE/ACM Transactions on Audio,* Speech, and Language Processing, 27(9):1432–1443. Takuhiro Kaneko and Hirokazu Kameoka. 2018. Cyclegan-vc: Non-parallel voice conversion using cycle-consistent adversarial networks. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2100–2104. IEEE. Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. Hif-gan: Generative adversarial networks for effcient and high fdelity speech synthesis. *Advances in* Neural Information Processing Systems, 33:17022– 17033. Sang-Hoon Lee, Ji-Hoon Kim, Hyunseung Chung, and Seong-Whan Lee. 2021. Voicemixer: Adversarial voice style mixup. Advances in Neural Information Processing Systems, 34:294–308. Jinglin Liu, Chengxi Li, Yi Ren, Zhiying Zhu, and Zhou Zhao. 2022. Learning the beauty in songs: Neural singing voice beautifer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7970– 7983, Dublin, Ireland. Association for Computational Linguistics. Jayneel Parekh, Preeti Rao, and Yi-Hsuan Yang. 2020. Speech-to-singing conversion in an encoder-decoder framework. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 261–265. IEEE. Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The kaldi speech recognition toolkit. In *IEEE 2011 Workshop on Automatic Speech Recognition and Understanding*. IEEE Signal Processing Society. IEEE Catalog No.: CFP11SRW-USB. Kaizhi Qian, Yang Zhang, Shiyu Chang, Mark Hasegawa-Johnson, and David Cox. 2020. Unsupervised speech decomposition via triple information bottleneck. In International Conference on Machine Learning, pages 7836–7846. PMLR. Kaizhi Qian, Yang Zhang, Shiyu Chang, Jinjun Xiong, Chuang Gan, David Cox, and Mark HasegawaJohnson. 2021. Global prosody style transfer without text transcriptions. In *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pages 8650–8660. PMLR. Kaizhi Qian, Yang Zhang, Shiyu Chang, Xuesong Yang, and Mark Hasegawa-Johnson. 2019. Autovc: Zeroshot voice style transfer with only autoencoder loss. In *International Conference on Machine Learning*, pages 5210–5219. PMLR. Colin Raffel, Brian McFee, Eric J Humphrey, Justin Salamon, Oriol Nieto, Dawen Liang, Daniel PW Ellis, and C Colin Raffel. 2014. mir_eval: A transparent implementation of common mir metrics. In In Proceedings of the 15th International Society for Music Information Retrieval Conference, ISMIR. Citeseer. Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2020. Fastspeech 2: Fast and high-quality end-to-end text to speech. arXiv preprint arXiv:2006.04558. Yi Ren, Xu Tan, Tao Qin, Zhou Zhao, and Tie-Yan Liu. 2022. Revisiting over-smoothness in text to speech. arXiv preprint arXiv:2202.13066. Takeshi Saitou, Masataka Goto, Masashi Unoki, and Masato Akagi. 2007. Speech-to-singing synthesis: Converting speaking voices to singing voices by controlling acoustic features unique to singing voices. In *2007 IEEE Workshop on Applications of Signal* Processing to Audio and Acoustics, pages 215–218. IEEE. Takeshi Saitou, Naoya Tsuji, Masashi Unoki, and Masato Akagi. 2004. 
Analysis of acoustic features affecting" singing-ness" and its application to singingvoice synthesis from speaking-voice. In *Eighth International Conference on Spoken Language Processing*. Hideyuki Tachibana, Katsuya Uenoyama, and Shunsuke Aihara. 2018. Effciently trainable text-to-speech system based on deep convolutional networks with guided attention. In *2018 IEEE International Conference on Acoustics, Speech and Signal Processing* (ICASSP), pages 4784–4788. IEEE. Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. *Advances in neural* information processing systems, 30. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Karthika Vijayan, Minghui Dong, and Haizhou Li. 2017. A dual alignment scheme for improved speech-tosinging voice conversion. In 2017 Asia-Pacifc Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pages 1547– 1555. IEEE. Karthika Vijayan, Xiaoxue Gao, and Haizhou Li. 2018. Analysis of speech and singing signals for temporal alignment. In *2018 Asia-Pacifc Signal and Information Processing Association Annual Summit and* Conference (APSIPA ASC), pages 1893–1898. IEEE. Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. 2004. Image quality assessment: from error visibility to structural similarity. *IEEE transactions on image processing*, 13(4):600–612. Da-Yi Wu and Yi-Hsuan Yang. 2020. Speech-tosinging conversion based on boundary equilibrium gan. *arXiv preprint arXiv:2005.13835*. Jie Wu and Jian Luan. 2020. Adversarially trained multi-singer sequence-to-sequence singing synthesizer. *arXiv preprint arXiv:2006.10317*. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223–2232. ## A Architecture The architecture and hyperparameters are listed in Table 4. We use three encoders EP, ER, and EC to encode the target F0, the target rhythm, and the source content features: $$\mathbf{x_{P}}=\operatorname{E_{P}}(\mathbf{f}_{0}),\ \mathbf{x_{R}}=\operatorname{E_{R}}(\mathbf{r}),\ \mathbf{x_{C}}=\operatorname{E_{C}}(\mathbf{x})\quad(9)$$ where all encoders are stacks of several convolution layers. These encoded features are fed into the rhythm adaptor, to both generate the discrete rhythm embeddings and train the rhythm predictor. The rhythm predictor consists of the cross-attention module and several convolutional layers, predicting the target discrete rhythm embeddings conditioned on the content and pitch features. The target rhythm embeddings are then used to re-align the source content representations. Finally, we carry out an inter-modality fusion across four different representations for re-synthesis. Each encoder (ER, EP, and EC) consists of two 1-D convolutional layers, where the kernel sizes are 7, 5, and 3, respectively. The linguistic hidden features extracted from wav2vec 2.0 are 32dimensional, which are widely used for downstream tasks. The hidden size of all the model components is 256. The size of the codebook in the VQ module is set to 6. The cross-attention layer is implemented with one multi-head layer and a feed-forward layer, where the latter consists of a 1-D convolution layer and a fully-connected layer. 
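As a concrete illustration of the components described above, the following minimal PyTorch sketch implements a 1-D convolutional encoder in the spirit of Equation 9 and a single cross-attention layer with the windowing and guided-attention regulation described in Appendix B below. The diagonal window center, the relative window width, and the simplified module granularity (no feed-forward sub-layer) are our own assumptions for illustration, not the released AlignSTS implementation.

```python
import torch
import torch.nn as nn


class ConvEncoder(nn.Module):
    """Stack of 1-D convolutional layers (cf. Equation 9); kernel size and
    depth follow the hyperparameters listed in Table 4."""

    def __init__(self, in_dim, hidden=256, kernel=5, n_layers=2):
        super().__init__()
        layers = []
        for i in range(n_layers):
            layers += [nn.Conv1d(in_dim if i == 0 else hidden, hidden,
                                 kernel, padding=kernel // 2),
                       nn.ReLU()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):                                    # x: [B, T, in_dim]
        return self.net(x.transpose(1, 2)).transpose(1, 2)   # [B, T, hidden]


class WindowedCrossAttention(nn.Module):
    """One cross-attention layer (query = rhythm, key/value = content) with
    attention windowing (B.1) and a guided-attention loss (B.2). We assume the
    window is centered on the diagonal position p_t = t * N / T and expressed
    as a fraction of the key length."""

    def __init__(self, hidden=256, heads=2, window=0.4, g=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.window, self.g = window, g

    def forward(self, query, key_value):          # [B, T, H], [B, N, H]
        T, N = query.size(1), key_value.size(1)
        t = torch.arange(T, device=query.device).float().unsqueeze(1) / T
        n = torch.arange(N, device=query.device).float().unsqueeze(0) / N
        # Windowing: a large negative bias outside the window, mirroring the
        # "-10^8 before Softmax" trick described in Appendix B.1.
        inside = (n - t).abs() <= self.window / 2
        mask = torch.where(inside,
                           torch.zeros(T, N, device=query.device),
                           torch.full((T, N), -1e8, device=query.device))
        out, weights = self.attn(query, key_value, key_value,
                                 attn_mask=mask, need_weights=True)
        # Guided-attention loss (Equations 10-11), averaged over the batch.
        w = 1.0 - torch.exp(-((n - t) ** 2) / (2 * self.g ** 2))
        attn_loss = (weights * w.unsqueeze(0)).mean()
        return out, attn_loss
```

In the full model, two such cross-attention layers would be stacked and the guided-attention terms summed into Lattn, but the masking and penalty logic is the same as sketched here.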
## B Attention And Alignment Regulation

## B.1 Attention Windowing

Attention windowing is a widely used technique that controls the "field of view" at each time step. Only a subsequence of the key, $\hat{x} = [x_{p_t-w}, \ldots, x_{p_t+w}]$, is considered at each query time step t, where w is the window width and $p_t$ is the middle position of the window along the key time axis. Specifically, we replace all the values outside the window with $-10^8$ before Softmax(·) so that the contribution outside the window is reduced significantly.

## B.2 Guided Attention Loss

To make sure that the attention weight matrix is nearly diagonal and monotonic, we adopt a guided attention loss. Let αt,n denote the attention weight at query time step t that attends to key time step n; the guided attention loss is defined as:

$${\mathcal{L}}_{\mathrm{attn}}={\frac{1}{T N}}\sum_{t=1}^{T}\sum_{n=1}^{N}\alpha_{t,n}w_{t,n},\quad\text{where}\tag{10}$$

$$w_{t,n}=1-\exp\left(-{\frac{\left({\frac{n}{N}}-{\frac{t}{T}}\right)^{2}}{2g^{2}}}\right)\tag{11}$$

where T and N are the lengths of the query and the key, respectively. wt,n is the weight distribution of the constraint, and g is a hyperparameter used to control the concentration degree, which is set to g = 0.1 in practice. If αt,n is far from the diagonal, the corresponding weight wt,n approaches 1, so the loss penalizes off-diagonal attention over the key representations (such as the content features) and encourages a nearly diagonal alignment path.

| Method | LSD ↓ | RCA ↑ | MOS-Q ↑ | MOS-P ↑ |
|--------------------------|------------|------------|-----------|---------------|
| GT Mel | 2.8974 | 0.9959 | 4.04±0.10 | 4.18±0.17 |
| SpeechSplit 2.0 (w/o SE) | 5.7681 | 0.9870 | 3.19±0.06 | 3.45±0.13 |
| SpeechSplit 2.0 (w/ SE) | **4.4871** | 0.9848 | 3.65±0.14 | **3.88±0.05** |
| AlignSTS (ours) | 5.0129 | **0.9934** | 3.58±0.19 | 3.81±0.09 |

Table 3: Experimental results involving *SpeechSplit 2.0 (w/ SE)*.

## B.3 Visualization

A visualization of the attention weights is shown in Figure 4. All the attention weights are extracted from the last layer of the cross-modal aligner and averaged across attention heads. Comparing subfigure (a), the attention weights without alignment regulation, with subfigure (b), those with regulation, the importance of attention path regulation is clearly demonstrated. Without alignment regulation, skips and non-monotonic situations occur, causing disordered or even indistinguishable phonemes. As for the zero-shot scenario, we compare the attention weights during the training stage and during the inference stage, which are shown in subfigures (c) and (d), respectively. Since the training is conducted in a self-supervised manner, the attention pattern demonstrates a perfectly linear pattern, as expected. The cross-modal aligner learns how to uniformly interpolate and stretch the input wav2vec 2.0 features to the same length as the target representations. However, in inference, the aligner is still capable of predicting the specific duration information of each linguistic unit of unseen speech data to a certain degree. As shown in subfigure (d), AlignSTS demonstrates its generalizability to unseen speech data and the ability to explore modality interaction in self-supervised pre-training.

## C Dataset

We collected and annotated the speech version of a subset of PopBuTFy to create a paired speech and singing dataset. During the collection, the private information of the speakers was protected. The qualified speakers are requested to read the lyrics of songs that have been sung by themselves, and the personal vocal timbre is kept unchanged during the recording process. We carefully select a subset of the collected recordings to create a high-quality dataset.
In all, the dataset consists of 152 English pop songs (∼5.5 hours in total) and the respective speech recordings (∼3.7 hours in total) from 16 singers. All the audio fles are recorded in a professional recording studio by professional singers, male and female. The recordings are sampled at 22050 Hz with 16-bit quantization. We randomly pick 111 pieces for validation and testing. ## D Extensional Experiments SpeechSplit 2.0 is originally designed for aspectspecifc voice conversion, not STS tasks. Only manipulating the pitch component of SpeechSplit 2.0 input for STS may cause severe alignment problems, since the latent rhythm information is infuenced. To lower the training diffculty and explore the importance of rhythm information, we add a new baseline *SpeechSplit 2.0 (w/ SE)* that involves the target ground truth rhythm information, i.e., the perturbed spectral envelope, in the training and inference procedure. The performances are listed in Table 3. The results show a great improvement brought by this "information leak", in that the target rhythm information should not be available in a real situation. Also, the perturbed spectral envelope may still carry residual linguistic information for phoneme reconstruction. | Hyperparameter | AlignSTS | | |----------------------------------------------|------------------|-----| | Encoder Kernel | 5 | | | Pitch | Encoder Layers | 3 | | Encoder | Encoder Hidden | 256 | | Encoder Kernel | 3 | | | Content | Encoder Layers | 2 | | Encoder | Encoder Hidden | 256 | | Encoder Kernel | 7 | | | Encoder Layers | 2 | | | Encoder Hidden | 256 | | | Rhythm Encoder | Attention Hidden | 256 | | Attention Heads | 1 | | | Attention Layers | 1 | | | Attention Window Size | 0.5 | | | Attention Guided g | 0.1 | | | Attention FFN Kernel | 9 | | | Conv1D Kernel | 3 | | | Conv1D Layers | 2 | | | Conv1D Hidden | 256 | | | Conv1D Dropout | 0.8 | | | VQ Embeddings | 6 | | | VQ Hidden | 256 | | | Rhythm Adaptor | Attention Hidden | 256 | | Attention Heads | 2 | | | Attention Layers | 2 | | | Attention Dropout | 0.1 | | | Attention Window Size | 0.4 | | | Attention Guided g | 0.1 | | | Attention FFN Kernel | 9 | | | CrossModal Aligner | Denoiser Layers | 20 | | Denoiser Hidden | 256 | | | Time Steps | 4 | | | Noise Schedule Type | VPSDE | | | Diffusion Decoder Total Number of Parameters | 26M | | | Table 4: Hyperparameters of AlignSTSmodules. | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? section 6, after the conclusion ✓ A2. Did you discuss any potential risks of your work? section 6 ✓ A3. Do the abstract and introduction summarize the paper's main claims? The introduction is in section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And 4 ✓ B1. Did you cite the creators of artifacts you used? section 3 and 4 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The license of a GPU can be easily found online and is known to all. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4. ✓ B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? section 4. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? The documentation of a GPU can be easily found online and is known to all. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? section 3 and 4 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** section 4 ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? It's not relevant ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? It's not relevant and this information is redundant. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? There is no space for that, but yes, our instructions explain how the data would be used and the participants agreed with the terms. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? We hire a company to collect the data. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? section 4
ran-etal-2023-new
A New Task and Dataset on Detecting Attacks on Human Rights Defenders
https://aclanthology.org/2023.findings-acl.443
The ability to conduct retrospective analyses of attacks on human rights defenders over time and by location is important for humanitarian organizations to better understand historical or ongoing human rights violations and thus better manage the global impact of such events. We hypothesize that NLP can support such efforts by quickly processing large collections of news articles to detect and summarize the characteristics of attacks on human rights defenders. To that end, we propose a new dataset for detecting Attacks on Human Rights Defenders (HRDsAttack) consisting of crowdsourced annotations on 500 online news articles. The annotations include fine-grained information about the type and location of the attacks, as well as information about the victim(s). We demonstrate the usefulness of the dataset by using it to train and evaluate baseline models on several sub-tasks to predict the annotated characteristics.
# A New Task And Dataset On Detecting Attacks On Human Rights Defenders Shihao Ran Di Lu Joel Tetreault Aoife Cahill Alejandro Jaimes Dataminr Inc. {sran,dlu,jtetreault, acahill,ajaimes}@dataminr.com ## Abstract The ability to conduct retrospective analyses of attacks on human rights defenders over time and by location is important for humanitarian organizations to better understand historical or ongoing human rights violations and thus better manage the global impact of such events. We hypothesize that NLP can support such efforts by quickly processing large collections of news articles to detect and summarize the characteristics of attacks on human rights defenders. To that end, we propose a new dataset for detecting **Attack**s on Human Rights Defenders (HRDsAttack) consisting of crowdsourced annotations on 500 online news articles. The annotations include fine-grained information about the type and location of the attacks, as well as information about the victim(s). We demonstrate the usefulness of the dataset by using it to train and evaluate baseline models on several sub-tasks to predict the annotated characteristics. ## 1 Introduction It is essential for human rights organizations to track, analyze and summarize attacks on human rights defenders over time and across locations for better personnel protection and situational analysis. To do so, multiple event attributes denoting different aspects of the attacking event need to be extracted from textual sources. However, this would be a time-consuming process if done manually. Figure 1 gives an example of the kinds of information that such organizations need to extract. In order to train and evaluate an NLP model to extract this information automatically, a relevant dataset is necessary. The ideal dataset requires accurate annotations for both the breadth (the number of extracted event attributes) and depth (the levels of granularity for each event attribute) of the events. However, all existing Event Extraction (EE) datasets (e.g. ACE05 (Doddington et al., 2004), ERE (Song et al., 2015), ACE05-E (Wadden et al., ![0_image_0.png](0_image_0.png) 2019), ACE05-E+ (Lin et al., 2020)) do not contain annotations at a sufficiently fine-grained level. Although some existing ontologies and datasets do include annotations related to attacking events, e.g. the ATTACK event type in the ACE05 dataset along with the associated AGENT attribute, they are incomplete with respect to many of the details of interest to human rights organizations and do not contain annotations relevant to victim characteristics or the time/location of the attacking event. As a result, existing open-source EE models trained on these datasets (Honnibal et al., 2020; Wadden et al., 2019; He et al., 2019) are unable to predict the complete set of relevant information. To mitigate the gap in existing resources, we present HRDsAttack, a new dataset containing 7089 crowdsourced annotations on 500 online news articles (including article title, article body text, and publication time). Each news article is annotated with 13 different event attributes to capture critical information about attacks on human rights defenders, including the type and location of the attacks, as well as information about the victim(s) and the perpetrator. With HRDsAttack, we hope to support more research opportunities for including NLP in applications related to human rights, as well as for broader AI for Social Good (AI4SG) efforts. To summarize, our contributions are threefold: 1. 
We present a new dataset (HRDsAttack) that includes annotations for fine-grained event details on attacks on human rights defenders. By focusing on expanding the breadth and depth of the attacking event relative to existing EE ontologies, we aim to address the limited scope of existing NLP resources. The complete ontology for our dataset is shown in Table 1; 2. We propose a new NLP task to extract finegrained event details on attacks on human rights defenders. 3. We demonstrate the usefulness of HRDsAttack with a strong baseline model based on Question Answering (QA) using the T5 model (Raffel et al., 2020) as the backbone in a multi-task setting. The HRDsAttack dataset along with the code for model training and evaluation is available at https://github.com/dataminr-ai/ HRDsAttack. ## 2 Related Work 2.1 Event Extraction Event Extraction (EE) is an NLP task that aims to extract key information such as *who, what, where,* and when from a text. The most commonly used dataset for EE is the ACE05 English corpus (Doddington et al., 2004) which consists of 33 event types and 22 event argument roles across 599 documents from newswires, web blogs, and broadcast conversations. While the ACE ontology covers a large range of event types, only two of them are related to attacking events: the LIFE.INJURE event and the CONFLICT.ATTACK event. Some of the other datasets that focus on extracting event triggers or event arguments are based on the ACE05 ontology (Wadden et al., 2019; Lin et al., 2020), and only cover limited aspects of the information that HRDsAttack covers, e.g. the ATTACKER and TARGET attributes in the LIFE.INJURE and CON-FLICT.ATTACK events. The Armed Conflict Location and Event Data (ACLED) dataset (Raleigh et al., 2010) covers political violence and protest events with annotations for event type, actors and targets, but it does not cover victim-dependent attributes. In comparison, HRDsAttack focuses on attacking events on human rights defenders and provides more event attributes for the attacks, along with more granular information regarding each event attribute. In terms of modeling approaches, early work on EE formulated the task as a token-based classification problem which leveraged different types of features (Ahn, 2006; Liao and Grishman, 2010a,b; Li et al., 2013). More recent approaches focus on applying neural models to EE tasks, such as CNNs (Chen et al., 2015), RNNs (Liu et al., 2019), and other advanced model structures (Nguyen and Nguyen, 2019; Zhang et al., 2019). ## 2.2 Nlp Research For Human Rights Existing NLP research resources around event detection and extraction related to Human Rights are extremely limited. Previous work has focused on identifying potential human rights abuse incidents from social media posts (Alhelbawy et al., 2020; Pilankar et al., 2022), alongside more general applications such as detecting abusive language (Golbeck et al., 2017; Djuric et al., 2015; Aroyo et al., 2019), or procedure-focused applications (e.g. data modeling processes for human rights data (Miller et al., 2013; Fariss et al., 2015)), or predicting judicial decisions of the European Court of Human Rights using NLP (O'Sullivan and Beel, 2019). To our knowledge, there are no event extraction datasets which target human rights issues, which makes HRDsAttack a first in this research area. ## 3 Dataset In this section, we describe the construction of the HRDsAttack dataset, which contains 500 annotated news articles, including article title, article body text, and publication time. 
We select news articles as the data source rather than other data sources (such as social media posts) since online news articles generally have higher accessibility, better trustworthiness of the source, and longer content | Category | Event Attribute | Labels | Label Definitions | |-------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|--------------------------------------------------------------------| | Yes | There is one or more explicit mention of the perpetrator in the news article. | | | | Perpetrator | No | There is no explicit mention of the perpetrator in the news article. | | | Mention | State Security Forces | Anyone employed by or representing a state institution. | | | Other State Actors | Other actors that are a part of the state or other non-military authorities of a state. | | | | Other non-state | Other actors /Private actors that are not a part of the state and act without the state's permission, support, or acquiescence. | | | | actors Other actors with | Armed actors that are not a part of the state but act with the state's permission, support | | | | permissions | or acquiescence. | | | | Other actors without permissions | Other actors that are not a part of the state. | | | | Regional Organizations | Person or group working for a regional or international organization. | | | | Insufficient Information | There is insufficient information available to determine one of the categories described above. | | | | None | Not applicable, when Perpetrator mention is No. | | | | Perpetrator | Perpetrator Type | Arbitrary Detention | Arrest or detention not in accordance with national laws. | | Enforced Disappearance | Unlawful deprivation of liberty enforced or authorized by the state, that is not acknowledged by the state or the location of the victim is kept secret. | | | | Killing | Unlawful death inflicted upon a person with the intent to cause death or serious injury. | | | | Kidnapping | Deprivation of liberty that is not enforced or authorized by the state. | | | | Torture | The action or practice of inflicting severe pain or suffering on someone as a punishment or in order to force them to do or say something. | | | | Other | Sexual violence or other acts causing or intending to cause harm, such as coercion or discrimination. | | | | Unknown | No harmful acts were conducted or there is insufficient information to determine the harmful acts. | | | | Violation | Violation Type Victim Name | - | Name of the victim. | | Human Rights Defender | A person exercising their right, to promote and strive for the protection and realization of human rights and fundamental freedoms. | | | | Victim Type | Trade Unionist | A person exercising their right to form and join trade unions to protect their interests. | | | Journalist | A person observing events, statements, policies, etc. that can affect society, with the purpose of systematizing such information to inform society. | | | | Insufficient Information | There is insufficient information available to make select one of the categories described above. | | | | Victim Population | Individual | A named individual victim. | | | Type | Multiple | Multiple unnamed individuals. | | | Victim | Adult | Age >= 18. | | | Child | Age <17. 
| | | | Victim Age Group | Other | A mixture of age groups, when Victim Population Type is Multiple. | | | Unknown | There is insufficient information available to determine the age group. | | | | Man | Male. | | | | Victim Sex Group | Woman | Female. | | | Other | Other gender types. | | | | Unknown | There is insufficient information available to determine the sex group. | | | | Country | - | Country in which the attack occurred | | | Location | Region | - | Region in which the attack occurred, such as a state or a province | | City | - | City in which the attack occurred | | | Year | - | Year the attacking event occurred | | | Month | January, ..., December | Month the attacking event occurred | | | Time | Day | 1, 2, 3, ..., 31 | Day (of the month) the attacking event occurred | | Table 1: Labeling ontology of HRDsAttack. | | | | ## Length. In our work, we sample online news articles from the GDELT database1, which we discuss in more detail in Section 3.2. ## 3.1 Annotation Labels To ensure the comprehensiveness of the annotations regarding capturing event details, we first identify the event attributes or labels required for annotation. As shown in Table 1, according to the UN Human Rights SDG 16.10.1 Guidance Note2, we identify 1https://www.gdeltproject.org/ 2https://www.ohchr.org/Documents/ Issues/HRIndicators/SDG_Indicator_16_ 10_1_Guidance_Note.pdf the following 5 categories of attributes: PERPE-TRATOR, VIOLATION, VICTIM, LOCATION, and TIME. Each category has one or more associated event attributes, all denoting key information about the primary event described in the original article3. If there are multiple events mentioned in the article, only the primary event (i.e. the event that happened closest to the publication time) is annotated. We also specify that the VICTIM category could have multiple entries per article, while other categories can only have one entry per article (i.e. only one entry for the primary attack event). The ontology 3All label values for each event attribute are prescribed by the SDG 16.10.1 Guidance Note. for the annotation labels is shown in Table 1. ## 3.2 Data Sampling To build HRDsAttack, we first scrape 80,112 online news articles in the time range of 2019/09/01 to 2022/05/01 from the GDELT database following the CAMEO codebook (Schrodt, 2012), a standard framework for coding event data (Yuan, 2016). These scraped news articles are identified as relevant to human rights defenders by an existing human rights monitoring workflow. During our pilot studies, we identified a data imbalance issue from the annotations under random sampling. Specifically, we observed significantly skewed label distributions in event attributes VI-OLATION TYPE and VICTIM TYPE, the minority classes being TORTURE and KIDNAPPING for VIO-LATION TYPE, and HUMAN RIGHTS DEFENDERS and TRADE UNIONISTS for VICTIM TYPE. To address this issue, we apply keyword filtering and targeted sampling to ensure HRDsAttack is wellbalanced across classes in each event attribute. To include more samples with a higher probability of containing events associated with these minority attributes, we first reduce the original 80,112 samples into four smaller, targeted sample sets. Each targeted sample set corresponds to the articles that contain the keyword for each of the minority classes. We then randomly sample 25 articles from each targeted sample set to form a batch of 100 samples for each round of full annotation. Table 2 shows the keywords used for minority class targeted sampling. 
| Minority Event Attribute | Keyword | |----------------------------|-------------| | Torture | torture | | Kidnapping | kidnapping | | Human Rights Defenders | human right | | Trade Unionists | trade union | Table 2: Keyword for each minority class used in keyword filtering and targeted sampling. ## 3.3 Annotation Process The annotation is done by qualified workers (Turkers) on Amazon Mechanical Turk (AMT). We design and implement a separate qualification task to recruit top-performing Turkers, and we only release the full annotation tasks to the Turkers that surpass a predefined performance bar based on the qualification tasks. ## 3.3.1 Qualification Tasks For the qualification task, all US-based Turkers that have a HIT (Human Intelligence Task 4) approval rate greater than 90% and a total number of HITs approved greater than 500 are able to participate. In the qualification task, we sample three different news articles and ask all participant Turkers to annotate every event attribute for each news article through three questionnaires (each HIT contains three questionnaires, one for each news article). We then evaluate their performance on this annotation task. All three news articles are also annotated by domain experts, and we use their annotations as the ground truth answers for calculating the Turker accuracy. We only recruit Turkers who have 75% or higher average accuracy across all three news articles. We launched three rounds of qualification tasks with 50 assignments in total, and ten Turkers passed the qualification tasks. The instructions and the task interface for the qualification tasks are shown in Figures 4 to 11 in Appendix A. ## 3.3.2 Full Tasks In the full task, each HIT only contains a single news article. The instructions and the annotation interface are identical to the qualification task. We launched all 500 samples in 5 batches, each batch containing 100 HITs. During our pilot studies, we did not observe a significant quality improvement with replication factor 3 due to relatively high agreement scores between the Turkers (Table 8 in Appendix C). We hypothesize that this is because the annotation task itself is highly objective. Therefore, we did not apply replication factors during the full task. We compensate each Turker with $7.50 per assignment in the qualification task (three news articles per assignment) and $2.00 per assignment in the full task (one news article per assignment). We also provide an additional bonus to all participant Turkers of $0.5 per assignment. The final pay rate is $15.00 per hour, which is over the US national minimal wage of $7.505. The annotation instructions and the task interface for the full tasks are shown in Figures 12 to 15 in Appendix A. 4A HIT represents a single, self-contained, virtual task that a Turker can work on, submit an answer, and collect a reward for completing. 5https://www.dol.gov/general/topic/ wages/minimumwage ## 3.4 Data Statistics To create a benchmark dataset from HRDsAttack, we randomly split the 500 annotated samples into train, dev, and test set with a 3:1:1 ratio. Table 3 shows the statistics of the splits. A breakdown of the label-level statistics for each event attribute can be found in Table 7 in Appendix B. | Train | Dev | Test | Total | | |----------------------|---------|--------|----------|----------| | No. of Articles | 300 | 100 | 100 | 500 | | Total No. of Tokens | 287,911 | 97,038 | 124,658 | 509,607 | | Avg. No. of Tokens | 959.70 | 970.38 | 1,246.58 | 1,019.21 | | Total No. 
of Victims | 687 | 272 | 204 | 1,163 | | Avg. No. of Victims | 2.29 | 2.72 | 2.04 | 2.33 | Table 3: Textual statistics of HRDsAttack splits. The average number of tokens and victims is averaged per news article. ## 4 Our Model With the construction of HRDsAttack, we now turn to developing a model for the task. We noted earlier that existing state-of-the-art EE models are not suitable as baselines, as they rely on extensive human annotations based on token-level annotations, hence cannot easily be re-trained and evaluated on this dataset. For instance, AMR-IE (Zhang and Ji, 2021) and GraphIE (Qian et al., 2018) are trained on the ACE05 dataset and ERE dataset. Some recent research casts the EE task as QA tasks or Seq2seq tasks, such as RCEE_ER (Liu et al., 2020) and Text2Event (Lu et al., 2021). In this section, we propose a new model for extracting fine-grained details regarding attacks on human rights defenders. ## 4.1 Overall Framework Given the limited amount of training data and the range and variety of event attributes, we propose using a single Seq2Seq Question Answering (QA) model. Training a unified model has the advantageous property that it shares the training data across all the sub-tasks thus potentially leading to better performance for each sub-task. Figure 2 shows the overall framework of our proposed baseline model. We formulate all of the subtasks as a generation task following T5 (Raffel et al., 2020), which proposes reframing all NLP tasks into a unified text-to-text format. The input to the T5 model is a natural language sentence composed of (1) a task prefix (e.g. *'extract victims'*), (2) an attributeoriented question (e.g. *'Who is the victim of the* violation?'), and (3) a context which is the original article. The output is a text string which explicitly refers to the value of the concerned event attribute (e.g. *'Abdelhakim Setouane'*). ## 4.2 Input-Output Design We group the event attributes into three categories: general article-dependent attributes, victim-dependent attributes, and publication timedependent attributes, and we design input and output formats for them respectively. For all of the three categories, the output is a text string that explicitly refers to the value of the relevant event attribute, e.g. *'Yes'* for PERPETRATOR MENTION, or *'state security forces'* for PERPETRATOR TYPE. The input formats for the three categories have minor differences 6: ## - **General Article-Dependent Attributes:** Most of the event attributes depend on the general information contained within the article (i.e. do not rely on additional input other than article's body text). These include PERPETRATOR MENTION, PERPETRATOR TYPE, and VIOLATION TYPE. For these attributes, the input is the concatenation of a task prefix, an attribute-oriented question, and the original article (e.g. the top three examples in Figure 2). ## - **Victim-Dependent Attributes:** Some Event attributes, such as VICTIM SEX TYPE, depend on the information related to a specific victim. Thus we incorporate the victim name into the input question, as exemplified in the fourth and fifth examples in Figure 2. ## - **Publication Time-Dependent Attributes:** In some cases, the YEAR, MONTH, and DAY attributes related to the attack event are not explicitly present in the article, and we need to infer them based on a combination of the article publication time and the relevant time mentioned in the article (e.g. last month, two weeks ago, yesterday). 
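To illustrate the unified text-to-text formulation described above, the minimal sketch below shows how an input could be assembled from a task prefix, an attribute-oriented question, and the article, and how an answer string could be decoded with the Hugging Face T5 checkpoint. The exact delimiters between the three parts and the decoding settings are our assumptions for illustration; the model would need to be fine-tuned on HRDsAttack before the decoded strings are meaningful, and the authors' released code is the authoritative implementation.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")


def predict_attribute(task_prefix, question, context, max_new_tokens=32):
    # Concatenate the task prefix, the attribute-oriented question, and the
    # article text into a single text-to-text input, then decode the answer.
    # The "question:"/"context:" delimiters are an assumption for illustration.
    source = f"{task_prefix} question: {question} context: {context}"
    inputs = tokenizer(source, return_tensors="pt",
                       max_length=512, truncation=True)
    output_ids = model.generate(**inputs, num_beams=4,
                                max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


article = "..."  # title and body text of a news article (placeholder)
answer = predict_attribute("extract victims",
                           "Who is the victim of the violation?", article)
```

Because every attribute is predicted through the same generate-and-decode interface, the sub-tasks can share a single set of model parameters, which is the motivation for the unified model described above.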
The article publication time is available as metadata in the GDELT dataset (e.g. *2021-03-29 00:00:00*). For these attributes, we add publication time information into the input, as shown in the last example of Figure 2. 6The complete lists of input and output formats are provided in Table 9 in Appendix D. ![5_image_0.png](5_image_0.png) Task Prefix. Following the multi-task setting in the original T5 work, we add a task prefix at the beginning of the input text. The task prefix is used to instruct the T5 model to perform a particular task. It could be any arbitrary text. In our work, we use a brief task description as the task prefix for each event attribute, e.g. *'detect perpetrator'* for PER-PETRATOR MENTION or *'extract violation type'* for VIOLATION TYPE (Figure 2). The complete list of all the task prefixes is shown in Table 10 in Appendix D. ## 4.3 Long Document Resolution The maximum input length allowed by the T5 model is 512 tokens, but around 75% of the articles from the GDELT dataset exceed that length limit. We explore two options to deal with articles with more than 512 tokens: **Truncation** and Knowledge Fusion. Additional methods for handling long documents are discussed in Appendix E. fi Truncation. We only use the first 512 tokens of the input text. The articles from GDELT are news articles, and the first several sentences from a news article usually contain the most important information. Thus a simple solution is to truncate the article and ignore the cut content. Knowledge Fusion. To mitigate the information loss in the Truncation method, we adopt a split-fuse approach (Figure 3) by (1) splitting the documents into short paragraphs using the spaCy (Honnibal et al., 2020) tokenizer7; (2) applying the model to each of the paragraphs; and then (3) merging the results from each paragraph to obtain the final results for the 7We use the en_core_web_sm spaCy pipeline. fi ![5_image_1.png](5_image_1.png) original article. For event attributes that allow more than one value (e.g. VICTIM NAMES), we keep all of the unique results, and for other attributes, we only keep the one with the highest confidence score (beam search score). ## 5 Experiments 5.1 Evaluation Metrics We consider the following metrics for evaluating different event attributes: - **Precision, Recall, and F1 Score**: we use Precision, Recall, and F1 score to evaluate the model performance on PERPETRATOR MEN-TION and VIOLATION TYPE. - **Accuracy**: we use accuracy (i.e. percentage correct) to evaluate the model performance on PERPETRATOR TYPE, VICTIM TYPE, VICTIM SEX TYPE, VICTIM AGE GROUP, COUNTRY, REGION, CITY, YEAR, MONTH, and DATE. - **Fuzzy Match Precision, Recall, and F1** Score: For the VICTIM NAME attribute, we use precision, recall, and F1 score based on exact matching and fuzzy matching, respectively. For exact matching, one predicted victim name is counted as correct only if it exactly matches with a victim name in the ground truth. For fuzzy matching, one predicted victim name is counted as correct if it has overlapping tokens with a victim name in the ground truth. For example, a predicted victim name *Jordan* is counted as correct when it matches with a ground truth name Michael Jordan. ## 5.2 Baseline Models We consider the following models in our evaluation: - **DyGIE++** (Wadden et al., 2019): a joint Information Extraction (IE) model and we use the checkpoint trained on the ACE05 dataset. It requires mapping from the ACE event ontology8to HRDsAttack. 
As a result, it only covers two attributes: PERPETRATOR MEN-TION and VICTIM NAME as there is no available mapping for the other event attributes in HRDsAttack. - **T5 w/ Truncation**: our proposed T5-based model with truncation. - **T5 w/ Knowledge Fusion**: our proposed T5based model with knowledge fusion. - **Hybrid (final model)**: a hybrid model based on T5 w/ Truncation and T5 w/ Knowledge Fusion. The model only applies knowledge fusion to PERPETRATOR MENTION, VICTIM NAME, and VICTIM AGE GROUP attributes. This hybrid strategy is decided based on the evaluation results on the dev set. 8The ACE ontology covers event types such as ATTACK and INJURE. We recognize that it would be ideal to have more baseline models for comparison, such as a retrained version of DyGIE++ on HRDsAttack. However, many existing EE models are trained on token-level annotations and are not designed for the additional event attributes that HRDsAttack covers (e.g. VIC-TIM TYPES). Therefore, we had to design a specialized model for this task. We plan to benchmark more Sequence-to-Sequence based models on HRDsAttack in future work. ## 5.3 Training Implementation We use the T5-large checkpoint 9 provided by Huggingface (Romero, 2021) to initialize the model and all experiments are run on a single AWS g5.xlarge instance. The AWS g5.xlarge instance is equipped with a single NVIDIA A10G GPU with 24 GB of GPU memory. Table 4 shows the hyperparameters we use to train the model. | Hyperparameter | Value | |-----------------------------|---------| | Learning rate | 1e-4 | | Learning rate decay | 1e-5 | | Epoch | 20 | | Batch size | 4 | | Gradient accumulation steps | 16 | Table 4: Hyperparameter settings for model training. ## 5.4 Overall Performance Table 5 shows the performance of the four models on the test set: the DyGIE++ baseline, T5 w/ Truncation, T5 w/ Knowledge Fusion, and the Hybrid model. Both T5-based models significantly outperform the DyGIE++ baseline, except for the precision of PERPETRATOR MENTION. In addition, we get further improvement from the Knowledge Fusion method for the PERPETRATOR MENTION, VICTIM NAME, and YEAR attributes. For other attributes, we get results that are slightly worse than those without Knowledge Fusion. This aligns with our assumption that violation events may be elaborated in the later parts of the news articles with specific victim names and violation types. So by applying the Knowledge Fusion method, we can significantly improve the recall of some event attributes. But for other information such as violation time and location, they usually appear in the first several sentences of the news article. 
The 9https://huggingface.co/t5-large | Event Attribute | Metric | DyGIE++ | T5 w/ Truncation | T5 w/ Knowledge Fusion | Hybrid | |-----------------------|-----------|-----------|--------------------|--------------------------|----------| | Precision | 100.00 | 93.68 | 93.81 | 93.81 | | | Perpetrator Mention | Recall | 36.54 | 97.80 | 100.00 | 100.00 | | F1 | 53.52 | 95.70 | 96.81 | 96.81 | | | Perpetrator Type | Accuracy | - | 62.00 | 60.00 | 62.00 | | Exact Match Precision | 9.41 | 75.61 | 59.30 | 59.30 | | | Exact Match Recall | 9.19 | 24.03 | 39.53 | 39.53 | | | Exact Match F1 | 9.30 | 36.47 | 47.44 | 47.44 | | | Fuzzy Match Precision | 17.65 | 85.37 | 63.95 | 63.95 | | | Fuzzy Match Recall | 17.24 | 27.13 | 42.64 | 42.64 | | | Fuzzy Match F1 | 17.44 | 41.18 | 51.16 | 51.16 | | | Victim Type | Accuracy | - | 72.41 | 71.67 | 72.41 | | Victim Sex Type | Accuracy | - | 89.66 | 86.67 | 89.66 | | Victim Age Group | Accuracy | - | 93.10 | 92.50 | 92.50 | | Victim Name | Precision | - | 67.91 | 61.24 | 67.91 | | Violation Type | Recall | - | 75.26 | 81.44 | 75.26 | | F1 | - | 71.39 | 69.91 | 71.39 | | | Country | Accuracy | - | 66.00 | 65.00 | 66.00 | | Region | Accuracy | - | 3.00 | 2.00 | 3.00 | | City | Accuracy | - | 23.00 | 12.00 | 23.00 | | Year | Accuracy | - | 46.00 | 50.00 | 46.00 | | Month | Accuracy | - | 33.00 | 29.00 | 33.00 | | Day | Accuracy | - | 14.00 | 8.00 | 14.00 | Table 5: Overall performance of the baseline models on HRDsAttack test set (%). All experiments are based on a single run with a preset random seed. time and location information appearing in the later parts may not be related to the primary attacking event. So based on the evaluation results on the dev set (Table 11 in Appendix F), we propose a hybrid model as our final baseline model. The hybrid model only applies Knowledge Fusion to PERPETRATOR MENTION, VICTIM NAME, and VICTIM AGE GROUP attributes. We notice that the hybrid model designed based on the dev set does not achieve the best performance for VICTIM AGE GROUP and YEAR attributes on the test set. It might be the fact that the hybrid strategy is overfitted on the dev set. And we leave the optimization of the hybrid model as future work. While the hybrid model outperforms the DyGIE++ baseline in almost all of the event attributes and unlocks the extraction of new attributes, we do see a relatively lower model performance in attributes such as REGION and DAY. We hypothesize that the ambiguity in REGION labels and the large number of classes in DAY labels introduce additional challenges to the model, especially with a limited amount of training data. For instance, some annotators mistakenly put *London* under RE-GION instead of CITY. We acknowledge that the annotation instructions could be further improved to address this issue. | Event Attribute | Metric | Hybrid | |-------------------|----------|----------| | Victim Type | F1 | 22.89 | | Victim Sex Type | F1 | 33.33 | | Victim Age Group | F1 | 46.01 | We also evaluate the end-to-end performance on the victim-dependent attributes with the modelpredicted victim names (Table 6). And we use F1 scores as the evaluation metric. One victimdependent attribute is counted as correct only when both the predicted victim name and the predicted attribute value match with the ground truth. ## 6 Conclusion In this paper, we present a new dataset that supports extracting detailed information about attacks on human rights defenders under a new task setting. 
Compared with existing event extraction resources, we focus on the human rights domain and expand to more event attributes for capturing event details more comprehensively. Our new dataset (HRDsAttack) contains 500 human-annotated news articles with 13 different event attributes regarding the victim(s), the type of perpetrator and violation(s), as well as the time and location of the attacks. We demonstrate the usefulness of the dataset by developing a Sequence-to-Sequence-based Question Answering model tailored for this task. While it achieves decent performance on some event attributes, there are many where there is much room for improvement. We view this model as a strong baseline for future work. We believe models trained with HRDsAttack could be generalized to detect attacking events in other domains or targeting a different population. And we hope that this work encourages additional research on the development of new AI4SG NLP resources in the future. ## Acknowledgements We would like to thank Jessie End at Dataminr for her support during this project. We also want to thank all the reviewers for their valuable and constructive feedback during the review phase. ## Limitations While HRDsAttack is, to the best of our knowledge, the first dataset on extracting attacks on human rights defenders, there are some limitations. For one, while being the first corpus of its kind, our dataset is English-only. Second, the number of documents is limited. While the sample size of HRDsAttack (500) is on par with some of the other EE datasets, such as ACE05 (599), we do see more samples being beneficial to subsequent model training and supporting other future studies. In addition, despite the effort to balance the class labels in the event attributes, some of the labels still remain imbalanced, such as PERPETRATOR TYPE. ## Ethics Statement The construction of HRDsAttack involves human annotations on AMT. The Turkers are provided with clear annotation instructions and are informed of the conditions where they would be qualified or disqualified. We compensate the Turkers with a final paid rate of $15.00 per hour which is over the US national minimal wage of $7.50. ## References David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events. Ayman Alhelbawy, Mark Lattimer, Udo Kruschwitz, Chris Fox, and Massimo Poesio. 2020. An nlppowered human rights monitoring platform. *Expert* Systems with Applications, 153:113365. Lora Aroyo, Lucas Dixon, Nithum Thain, Olivia Redfield, and Rachel Rosen. 2019. Crowdsourcing subjective tasks: the case study of understanding toxicity in online discussions. In *Companion proceedings of* the 2019 world wide web conference, pages 1100– 1105. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. *arXiv* preprint arXiv:2004.05150. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In *Proceedings of the 53rd Annual Meeting of the Association* for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Nemanja Djuric, Jing Zhou, Robin Morris, Mihajlo Grbovic, Vladan Radosavljevic, and Narayan Bhamidipati. 2015. Hate speech detection with comment embeddings. In Proceedings of the 24th international conference on world wide web, pages 29–30. 
George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie M Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program-tasks, data, and evaluation. In *Lrec*, volume 2, pages 837–840. Lisbon. Christopher J Fariss, Fridolin J Linder, Zachary M Jones, Charles D Crabtree, Megan A Biek, AnaSophia M Ross, Taranamol Kaur, and Michael Tsai. 2015. Human rights texts: converting human rights primary source documents into data. *PloS one*, 10(9):e0138935. Jennifer Golbeck, Zahra Ashktorab, Rashad O Banjo, Alexandra Berlinger, Siddharth Bhagwan, Cody Buntain, Paul Cheakalos, Alicia A Geller, Rajesh Kumar Gnanasekaran, Raja Rajan Gunasekaran, et al. 2017. A large labeled corpus for online harassment research. In *Proceedings of the 2017 ACM on web science conference*, pages 229–233. Quentin Grail, Julien Perez, and Eric Gaussier. 2021. Globalizing bert-based transformer architectures for long document summarization. In Proceedings of the 16th conference of the European chapter of the Association for Computational Linguistics: Main volume, pages 1792–1810. Xinyu He, Lishuang Li, Xingchen Song, Degen Huang, and Fuji Ren. 2019. Multi-level attention based blstm neural network for biomedical event extraction. *IEICE TRANSACTIONS on Information and Systems*, 102(9):1842–1850. Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spacy: Industrialstrength natural language processing in python. *none*. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In *Proceedings of the 51st Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers). Shasha Liao and Ralph Grishman. 2010a. Filtered ranking for bootstrapping in event extraction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Shasha Liao and Ralph Grishman. 2010b. Using document level cross-event inference to improve event extraction. In *Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics*. Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999–8009. Jian Liu, Yubo Chen, and Kang Liu. 2019. Exploiting the ground-truth: An adversarial imitation based knowledge distillation approach for event detection. In *Proceedings of the AAAI Conference on Artificial* Intelligence. Jian Liu, Yubo Chen, Kang Liu, Wei Bi, and Xiaojiang Liu. 2020. Event extraction as machine reading comprehension. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP). Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, and Shaoyi Chen. 2021. Text2event: Controllable sequence-tostructure generation for end-to-end event extraction. arXiv preprint arXiv:2106.09232. Ben Miller, Ayush Shrestha, Jason Derby, Jennifer Olive, Karthikeyan Umapathy, Fuxin Li, and Yanjun Zhao. 2013. Digging into human rights violations: Data modelling and collective memory. In *2013* IEEE international conference on big data, pages 37–45. IEEE. Trung Minh Nguyen and Thien Huu Nguyen. 2019. One for all: Neural joint modeling of entities and events. In *Proceedings of the AAAI Conference on Artificial* Intelligence. Conor O'Sullivan and Joeran Beel. 2019. 
Predicting the outcome of judicial decisions made by the european court of human rights. arXiv preprint arXiv:1912.10819. Yash Pilankar, Rejwanul Haque, Mohammed Hasanuzzaman, Paul Stynes, and Pramod Pathak. 2022. Detecting violation of human rights via social media. In Proceedings of the First Computing Social Responsibility Workshop within the 13th Language Resources and Evaluation Conference, pages 40–45. Yujie Qian, Enrico Santus, Zhijing Jin, Jiang Guo, and Regina Barzilay. 2018. Graphie: A graph-based framework for information extraction. arXiv preprint arXiv:1810.13083. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Clionadh Raleigh, Andrew Linke, Håvard Hegre, and Joakim Karlsen. 2010. Introducing acled: An armed conflict location and event dataset. *Journal of Peace* Research, 47(5):651–660. Manuel Romero. 2021. T5 (base) fine-tuned on squad for qg via ap. https://huggingface.co/mrm8488/t5base-finetuned-question-generation-ap. P Schrodt. 2012. Conflict and mediation event observations event and actor codebook v. 1.1 b3. Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ERE: Annotation of entities, relations, and events. In *Proceedings of the The 3rd Workshop on* EVENTS: Definition, Detection, Coreference, and Representation. David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. arXiv preprint arXiv:1909.03546. Yihong Yuan. 2016. Modeling inter-country connection from geotagged news reports: A time-series analysis. arXiv preprint arXiv:1604.03647. Junchi Zhang, Yanxia Qin, Yue Zhang, Mengchi Liu, and Donghong Ji. 2019. Extracting entities and events as a single task using a transition-based neural model. In *IJCAI*. Zixuan Zhang and Heng Ji. 2021. Abstract meaning representation guided graph encoding and decoding for joint information extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 39–49. ## A Annotation Interface We list the screenshots of our annotation interface from Figure 4 through Figure 15, including the annotation guideline (instructions) page for both the qualification tasks and the full tasks, the example page, as well as the task pages. ## Annotation Instructions This is a qualification task for another larger task. In this task, you will read 3 news articles. Each news article is about one or more attack events targeting human rights defenders or a similar population. You will then fill out a questionnaire for each news article using the information from the article. Some important definitions: - Perpetrator: A person who carries out a harmful, illegal, or immoral act. If one or more explicit mentions of perpetrators exist in the news article, you need to select one option from the following five types of Perpetrators for the primary perpetrator: 1. State Actor: Actor that is part of the government, or public servants working for the government. 2. Other actors acting with the State's permission, support or acquiescence: Actors that are not a part of the state but act with the state's permission, support or acquiescence. 3. 
Private Actors/ Actors not acting with the State's permission, support or acquiescence: Other actors / Private actors that are not a part of the state and act without the state's permission, support or acquiescence. 4. Other actors (State's permission, support or acquiescence status unknown): Actors that are not a part of the state, there is not sufficient information about whether they act with or without the state's permission, support or acquiescence. 5. Insufficient information: There is not sufficient information available to select one of the categories described above. - Violation: The harmful, illegal, or immoral act carried by the perpetrator. You need to select all options that apply from the following six types of Violations: 1. Killing: Unlawful death inflicted upon a person with the intent to cause death or serious injury. 2. Enforced disappearance: Unlawful deprivation of liberty enforced or authorized by the state, that is not acknowledged by the state or the location of the victim is kept secret. 3. Torture: The action or practice of inflicting severe pain or suffering on someone as a punishment or in order to force them to do or say something. 4. Arbitrary detention: Arrest or detention not in accordance with national laws. 5. Kidnapping: Deprivation of liberty that is not enforced or authorized by the state. 6. Other harmful acts: Sexual violence or other acts causing or intending to cause harm, such as coercion or discrimination. Figure 4: Screenshot of the Qualification Task Instructions (1/3). Victim: A person or a group of people harmed, injured, or killed as a result of a crime, accident, or other event or action. You then need to select all options that apply from the following four types of Victims: 1. Journalist: A person observing events, statements, policies, etc. that can affect society, with the purpose of systematizing such information to inform society, including support staff, as well as community media workers and "citizen journalists" when they momentarily play that role. 2. Trade Unionist: A person exercising their right to form and to join trade unions for the protection of their interests. A trade union is an association of workers organized to protect and promote their common interests. 3. Human Rights Defender: A person exercising their right, to promote and to strive for the protection and realization of human rights and fundamental freedoms, including some journalists and trade unionists. 4. Insufficient information: There is not sufficient information available to make select one of the categories described above. You can check all these definitions later on in the questionnaire interface by hovering your mouse cursor on the info icon next to each item. About the questionnaire: . You must answer every question in the questionnaire. If you cannot find the answer to a specific question, either select the answer Insufficient Information, Unknown, Other for selection questions, or type in N/A in the text box. You will not be able to submit the HIT unless you answered all the questions. - For Year, Month, Day, Location related questions, if the event happened across multiple days and/or happened at multiple locations, please fill in with the starting time and/or the starting location. - When answering about victims, you need to first identify if the victim mentioned in the article refers to named individuals (someone with a name) or a group of unnamed individuals (without explicit names, such as "a group of students"). 
In the latter case, if the group has mixed genders and age groups, select "Other" for Victim Sex Type and Victim Age Group questions. For each victim that has a name mentioned in the article, please select " Named Individual " in the . questionnaire then fill in their name individually. If multiple names are mentioned, click the "Add Another Victim" button to add all victims with their corresponding names. Only select "Multiple (i.e. Group of unnamed individuals)" when multiple victims are mentioned in the article without their specific names. Figure 5: Screenshot of the Qualification Task Instructions (2/3). . 4. Insufficient information: There is not sufficient information available to make select one of the categories described above. You can check all these definitions later on in the questionnaire interface by hovering your mouse cursor on the info icon ❏ next to each item. About the questionnaire: You must answer every question in the questionnaire. If you cannot find the answer to a specific question, either select the answer Insufficient Information, Unknown, Other for selection questions, or type in N/A in the text box. You will not be able to submit the HIT unless you answered all the questions. - For Year, Month, Day, Location related questions, if the event happened across multiple days and/or happened at multiple locations, please fill in with the starting time and/or the starting location. When answering about victims, you need to first identify if the victim mentioned in the article refers . to named individuals (someone with a name) or a group of unnamed individuals (without explicit names, such as "a group of students"). In the latter case, if the group has mixed genders and age groups, select "Other" for Victim Sex Type and Victim Age Group questions. For each victim that has a name mentioned in the article, please select " Named Individual " in the . questionnaire then fill in their name individually. If multiple names are mentioned, click the "Add Another Victim" button to add all victims with their corresponding names. Only select "Multiple (i.e. Group of unnamed individuals)" when multiple victims are mentioned in the article without their specific names. ## Read An Example: On the next page, you will see one example article and one questionnaire with correct answers already filled in. Please read through the example carefully. You will be asked to copy a randomly generated code into one text box in the middle of the example questionnaire, you only need to do this once on the example page. NOTE: Your submission will be REJECTED if the code you pasted in the box does not match the one shown to you. Figure 6: Screenshot of the Qualification Task Instructions (3/3). Note: This is an example. questions on the right hand side of your s ![12_image_1.png](12_image_1.png) 02-May-2019 United Nations hum and to inform CNN repo ![12_image_0.png](12_image_0.png) ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) ![14_image_0.png](14_image_0.png) Figure 10: Screenshot of the Qualification Task Articles (2/3). By hovering over the information icon next to Victim Type , Turkers can check the definitions of all victim types on this page. Article 3 / 3 Instructions: Read the news article below then answer the questions. ![14_image_1.png](14_image_1.png) ## Annotation Instructions In this task, you will read 1 news article. The news article is about one or more attack events targeting human rights defenders or a similar population. 
You will then fill out a questionnaire using the information from the article. Some important definitions: Perpetrator: A person who carries out a harmful, illegal, or immoral act. If one or more explicit mentions of perpetrators exist in the news article, you need to select one option from the following seven types of Perpetrators for the primary perpetrator: 1. State Security Forces: Anyone employed by or representing a state institution, including government, courts, parliament, police, military. 2. Other State Actors: Other actors that are a part of the state, e.g. civilian authorities such as Ministry of Interior, Ministry of Defence, or other non-military authority of a state, such as a President or Prime Minister, other ministers, regional governors, and associated staff / civilian administrators. 3. Armed actor on behalf of the State: Armed actors that are not a part of the state but act with the state's permission, support or acquiescence, e.g. mercenaries where there is an agreement in place that they are working for the State. 4. Non-State Armed Actor: Other actors / Private actors that are not a part of the state and act without the state's permission, support or acquiescence. E.g. armed groups who may have control over parts of a State's territory, "terrorist" groups, militias, etc. 5. Other Non-State Actor: Other actors that are not a part of the state, e.g. private security companies, private enterprises. 6. Regional Or International Organizations: Person or group working for a regional or international organisation, e.g. African Union, UN peacekeeping. 7. Insufficient information: There is not sufficient information available to select one of the categories described above. Violation: The harmful, illegal, or immoral act carried by the perpetrator. You need to select all options that apply from the following seven types of Violations: 1. Killing: Unlawful death inflicted upon a person with the intent to cause death or serious injury. 2. Enforced disappearance: Unlawful deprivation of liberty enforced or authorized by the state, that is not acknowledged by the state or the location of the victim is kept secret. 3. Torture: The action or practice of inflicting severe pain or suffering on someone as a punishment or in order to force them to do or say something. Figure 12: Screenshot of the Full Task Instructions (1/2). 3. Human Rights Defender: A person exercising their right, to promote and to strive for the protection and realization of human rights and fundamental freedoms, including some journalists and trade unionists. 4. Insufficient information: There is not sufficient information available to make select one of the categories described above. You can check all these definitions later on in the questionnaire interface by hovering your mouse cursor on the info icon - next to each item. About the questionnaire: - You must answer every question in the questionnaire. If you cannot find the answer to a specific question, either select the answer Insufficient Information, Unknown, Other for selection questions, or type in N/A in the text box. You will not be able to submit the HIT unless you answered all the questions. - For Year, Month, Day, Location related questions, if the event happened across multiple days and/or happened at multiple locations, please fill in with the starting time and/or the starting location. 
- When answering about victims, you need to first identify if the victim mentioned in the article refers to named individuals (someone with a name) or a group of unnamed individuals (without explicit names, such as "a group of students"). In the latter case, if the group has mixed genders and age groups, select "Other" for Victim Sex Type and Victim Age Group questions. For each victim that has a name mentioned in the article, please select "Named Individual" in the questionnaire then fill in their name individually. If multiple names are mentioned, click the "Add Another Victim" button to add all victims with their corresponding names. Only select "Multiple (i.e. Group of unnamed individuals)" when multiple victims are mentioned in the article without their specific names. If the news article is not about a event targeting human rights defenders, please select the last option for selection questions and put N/A in the text boxes. For example, select "No/Unknown Harmful Acts" for "Violation type" question. ## Read An Example: On the next page, you will see one example article and one questionnaire with correct answers already filled in. Please read through the example carefully. You will be asked to copy a randomly generated code into one text box in the middle of the example questionnaire, you only need to do this once on the example page. NOTE: Your submission will be REJECTED if the code you pasted in the box does not match the one shown to you. Figure 13: Screenshot of the Full Task Instructions (2/2). ![16_image_0.png](16_image_0.png) ![17_image_0.png](17_image_0.png) ![17_image_1.png](17_image_1.png) ## B Label Statistics In Table 7, we list the statistics of all the labels in HRDsAttack, as well as their distributions in the train, dev, and test set. ## C Cohen-Kappa Scores We calculate the average pair-wise Cohen-Kappa scores for each qualified Turker during our pilot study using 100 Hits with replication factor 3. While we did our due diligence to make our annotation instructions as comprehensive as possible, some of the concepts regarding Human Rights were sometimes challenging to distinguish for the Turkers. The relatively lower weighted average of Cohen-Kappa scores for some event attributes (PERPETRATOR MENTION: 0.40, PERPETRATOR TYPE: 0.41) are also potentially due to the imbalanced distributions of these attributes. The weighted averages of Cohen-Kappa scores for other attributes are all higher than 0.61 for violation and victim-related classes (VIOLATION TYPE: 0.67, VICTIM POPULATION TYPE: 0.64, VICTIM TYPE: 0.62), which are generally considered as substantially agree. ## D Input-Output Design Table 9 shows the input questions and answers for all the event attributes covered in HRDsAttack. And Table 10 shows all the task prefixes that we add to the beginning of the input text in our multitask training regime. ## E Long Document Solutions Besides the two solutions we evaluate in the paper (Truncation and Knowledge Fusion), there are two other possible solutions that we describe here. First, some work proposes splitting a long document into shorter sequences, then using a transformer to generate sequence representations for each of them (Grail et al., 2021). Then those sequence representations are fed into another network to generate the final document representation. But in this case, a large number of training examples is required to learn the parameters of the network layers which generate the document representation. 
There is also the long transformer (Longformer (Beltagy et al., 2020)) approach proposed to handle long documents. But in contrast to the T5 model which is pretrained on many pretraining tasks, it is difficult to reframe all of the subtasks as a unified Sequence-to-Sequence task based on those long transformers. In comparison, the approaches we proposed are all post-processing steps that are less expensive than the aforementioned methods. ## F Model Performances On The Development Set Table 11 shows the performance of the models on the dev set of HRDsAttack. The best baseline model (T5 Hybrid) is chosen based on the model performance on the dev set. | Category | Event Attribute | Labels | Train | Dev | Test | Total | |------------------------------------------|--------------------------|------------------------|---------|-------|--------|---------| | Yes | 272 | 95 | 91 | 458 | | | | Perpetrator Mention | No | 28 | 5 | 9 | 42 | | | State Security Forces | 149 | 60 | 56 | 265 | | | | Other State Actors | 25 | 6 | 10 | 41 | | | | Other non-state actors | 34 | 11 | 9 | 54 | | | | Other actors with permissions | 10 | 5 | 3 | 18 | | | | Other actors without permissions | 41 | 10 | 10 | 61 | | | | Regional Organizations | 4 | 1 | 1 | 6 | | | | Insufficient Information | 9 | 1 | 1 | 11 | | | | Perpetrator | Perpetrator Type | None | 28 | 6 | 10 | 44 | | Arbitrary Detention | 138 | 53 | 55 | 246 | | | | Enforced Disappearance | 27 | 8 | 8 | 43 | | | | Killing | 109 | 33 | 34 | 176 | | | | Violation | Violation Type | Kidnapping | 76 | 21 | 16 | 113 | | Torture | 56 | 19 | 24 | 99 | | | | Other | 131 | 46 | 50 | 227 | | | | Unknown | 20 | 5 | 10 | 35 | | | | Victim Name | - | 463 | 198 | 130 | 791 | | | Human Rights Defender | 145 | 32 | 42 | 219 | | | | Trade Unionist | 59 | 25 | 9 | 93 | | | | Journalist | 195 | 104 | 66 | 365 | | | | Victim Type | Insufficient Information | 356 | 120 | 113 | 589 | | | Individual | 463 | 198 | 130 | 791 | | | | Victim Population Type | Multiple | 224 | 74 | 74 | 372 | | | Adult | 491 | 217 | 131 | 839 | | | | Child | 19 | 5 | 9 | 33 | | | | Other | 34 | 4 | 17 | 55 | | | | Victim Age Group | Unknown | 143 | 46 | 47 | 236 | | | Man | 274 | 115 | 90 | 479 | | | | Woman | 115 | 35 | 33 | 183 | | | | Other | 76 | 15 | 29 | 120 | | | | Victim | Victim Sex Group | Unknown | 222 | 107 | 52 | 381 | | Country | - | 279 | 89 | 89 | 457 | | | Location | Region | - | 69 | 26 | 12 | 107 | | City | - | 163 | 53 | 45 | 261 | | | Year | - | 268 | 83 | 80 | 431 | | | Time | Month | January, ..., December | 183 | 60 | 48 | 291 | | Day | 1, 2, 3, ..., 31 | 109 | 35 | 35 | 179 | | | Table 7: Label statistics of HRDsAttack. | | | | | | | | Worker | Average Pair-wise Cohen-Kappa Score | | | | | | | |------------------|---------------------------------------|---------------------|------------------|----------------|------------------------|-------------|------| | No. 
of HITs | | | | | | | | | Index | Finished | Perpetrator Mention | Perpetrator Type | Violation Type | Victim Population Type | Victim Type | | | 1 | 85 | 0.48 | 0.41 | 0.74 | 0.65 | 0.66 | 0.59 | | 2 | 51 | 0.22 | 0.43 | 0.63 | 0.53 | 0.61 | 0.48 | | 3 | 51 | 0.60 | 0.53 | 0.74 | 0.82 | 0.66 | 0.67 | | 4 | 47 | -0.04 | 0.31 | 0.72 | 0.74 | 0.57 | 0.46 | | 5 | 38 | 0.45 | 0.37 | 0.70 | 0.46 | 0.63 | 0.52 | | 6 | 15 | 0.71 | 0.49 | 0.45 | 0.71 | 0.38 | 0.55 | | 7 | 9 | 1.00 | 0.24 | 0.32 | 0.40 | 0.80 | 0.55 | | 8 | 2 | 0.50 | 0.50 | 0.00 | 1.00 | 0.00 | 0.40 | | 9 | 1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | | 10 | 1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | | Weighted Average | 0.40 | 0.41 | 0.67 | 0.64 | 0.62 | - | | Table 8: Turker agreement scores for some of the event attributes calculated during the pilot study with 100 HITs, replication factor 3. Table 9: Summary of the predefined questions and answers for event attributes. Gen stands for the category of general article-dependent attributes, Vic stands for the category of victim-dependent attributes, and Tim stands for the category of publication time-dependent attributes. | Category | Event Attribute | Input Question | Output Answer | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------| | Perpetrator Mention | Does it mention any perpetrator? | One of {Yes, No} | | | Perpetrator Type | What is the type of the perpetrator? | One of {state security forces, regional organizations, other actors with permissions, other actors without permissions, other state actors, other non-state actors, insufficient info} | | | Is there any arbitrary detention violation mentioned in the text? Is there any enforced disappearance violation mentioned in the text? Is there any kidnapping violation mentioned in the text? One of {Yes, No} Is there any killing violation mentioned in the text? Is there any torture violation mentioned in the text? Is there any other violation mentioned in the text? | | | | | Victim Name | Who is the victim of the violation? | {VICTIM_NAME1, VICTIM_NAME2, . . . } | | | Country | In which country did the violation happen? | {COUNTRY_NAME} | | | Region | In which region did the violation happen? | {REGION_NAME} | | | City | In which city did the violation happen? | {CITY_NAME} | | | Gen | Violation Type Victim Sex Type | What is the sex of {VICTIM_NAME}? | One of {woman, man, other, unknown} | | Victim Age Group | What is the age group of {VICTIM_NAME}? | One of {adult, child, other, unknown} | | | Victim Population Type | What is the population type of {VICTIM_NAME}? | One of {Individual, multiple} | | | Victim type | Is {VICTIM_NAME} a trade unionist? Is {VICTIM_NAME} a journalist? | One of {Yes, No} | | | Is {VICTIM_NAME} a human rights defender? | | | | | Vic | Year | In which year did the violation happen? | Year (YYYY) | | Month | In which month did the violation happen? 
| Month (month name) | | | Day | On which day did the violation happen? | Day (D with no leading zeros) | | | Victim type | Is {VICTIM_NAME} a trade unionist? Is {VICTIM_NAME} a journalist? | One of {Yes, No} | | | Is {VICTIM_NAME} a human rights defender? | | | | | Tim | | | | | Class | Task-prefix | |------------------------|--------------------------------| | Perpetrator Mention | detect perpetrator | | Perpetrator Type | extract perpetrator type | | Violation Type | extract violation type | | Victim Name | extract victims | | Victim Sex Type | extract victim sex | | Victim Age Group | extract victim age | | Victim Population Type | extract victim population type | | Victim Type | extract victim type | | Country | extract violation country | | Region | extract violation region | | City | extract violation city | | Year | extract violation year | | Month | extract violation month | | Day | extract violation day | Table 10: Task Prefix for each event attribute. | Event Attribute | Metric | DyGIE++ | T5 w/ Truncation | T5 w/ Knowledge Fusion | Hybrid | |-----------------------|-----------|-----------|--------------------|--------------------------|----------| | Precision | 97.30 | 95.88 | 95.96 | 95.96 | | | Perpetrator Mention | Recall | 37.89 | 97.89 | 100.00 | 100.00 | | F1 | 54.55 | 96.88 | 97.94 | 97.94 | | | Perpetrator Type | Accuracy | - | 68.00 | 68.00 | 68.00 | | Exact Match Precision | 10.37 | 85.07 | 65.08 | 65.08 | | | Exact Match Recall | 7.14 | 29.08 | 41.84 | 41.84 | | | Exact Match F1 | 8.46 | 43.35 | 50.93 | 50.93 | | | Fuzzy Match Precision | 19.26 | 85.07 | 73.02 | 73.02 | | | Fuzzy Match Recall | 13.27 | 29.08 | 46.94 | 46.94 | | | Fuzzy Match F1 | 15.71 | 43.35 | 57.14 | 57.14 | | | Victim Type | Accuracy | - | 81.25 | 66.12 | 81.25 | | Victim Sex Type | Accuracy | - | 85.42 | 80.87 | 85.42 | | Victim Age Group | Accuracy | - | 97.92 | 98.36 | 98.36 | | Victim Name | Precision | - | 64.14 | 57.09 | 64.14 | | Violation Type | Recall | - | 68.65 | 76.22 | 68.65 | | F1 | - | 66.32 | 65.28 | 66.32 | | | Country | Accuracy | - | 62.00 | 59.00 | 62.00 | | Region | Accuracy | - | 9.00 | 2.00 | 9.00 | | City | Accuracy | - | 20.00 | 16.00 | 20.00 | | Year | Accuracy | - | 52.00 | 46.00 | 52.00 | | Month | Accuracy | - | 32.00 | 32.00 | 32.00 | | Day | Accuracy | - | 18.00 | 10.00 | 18.00 | Table 11: Overall performance of the baseline models on HRDsAttack dev set (%). All experiments are based on a single run with a preset random seed. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations ✓ A2. Did you discuss any potential risks of your work? Section Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5 Experiments ✓ B1. Did you cite the creators of artifacts you used? Section 5 Experiments ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We plan on releasing our dataset under the MIT license as well pending legal approval. The LICENSE will be provided alongside the dataset as a text file on GitHub when the paper is published. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? The Term of Use for GDELT is: all datasets released by the GDELT Project are available for unlimited and unrestricted use for any academic, commercial, or governmental use of any kind without fee. We plan on releasing our dataset under the MIT license as well pending legal approval. The LICENSE will be provided alongside the dataset as a text file on GitHub when the paper is published. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We included the worker ID for each annotation in the dataset, which is anonymized. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3.3 Annotation Process ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3.5 Data Statistics The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Section 5 Experiments ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 5.3 Training Implementation ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5.3 Training Implementation ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5.4 Results and Table 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.4 Long Document Resolution ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 3.3 Annotation Process ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 3.3 Annotation Process ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 3.3 Annotation Process ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? 
The Term of Use for GDELT is: all datasets released by the GDELT Project are available for unlimited and unrestricted use for any academic, commercial, or governmental use of any kind without fee. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? We used a review process internal to our organization with HCI research scientists. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 3.3 Annotation Process
herold-etal-2023-improving
Improving Language Model Integration for Neural Machine Translation
https://aclanthology.org/2023.findings-acl.444
The integration of language models for neural machine translation has been extensively studied in the past. It has been shown that an external language model, trained on additional target-side monolingual data, can help improve translation quality. However, there has always been the assumption that the translation model also learns an implicit target-side language model during training, which interferes with the external language model at decoding time. Recently, some works on automatic speech recognition have demonstrated that, if the implicit language model is neutralized in decoding, further improvements can be gained when integrating an external language model. In this work, we transfer this concept to the task of machine translation and compare with the most prominent way of including additional monolingual data - namely back-translation. We find that accounting for the implicit language model significantly boosts the performance of language model fusion, although this approach is still outperformed by back-translation.
# Improving Language Model Integration For Neural Machine Translation Christian Herold Yingbo Gao Mohammad Zeineldeen Hermann Ney Human Language Technology and Pattern Recognition Group Computer Science Department RWTH Aachen University D-52056 Aachen, Germany {herold|ygao|zeineldeen|ney}@cs.rwth-aachen.de ## Abstract The integration of language models for neural machine translation has been extensively studied in the past. It has been shown that an external language model, trained on additional target-side monolingual data, can help improve translation quality. However, there has always been the assumption that the translation model also learns an implicit target-side language model during training, which interferes with the external language model at decoding time. Recently, some works on automatic speech recognition have demonstrated that, if the implicit language model is neutralized in decoding, further improvements can be gained when integrating an external language model. In this work, we transfer this concept to the task of machine translation and compare with the most prominent way of including additional monolingual data - namely backtranslation. We find that accounting for the implicit language model significantly boosts the performance of language model fusion, although this approach is still outperformed by back-translation. ## 1 Introduction Machine translation (MT) is the task of automatically translating text from one language to another. Nowadays, the dominant approach is neural machine translation (NMT), where a neural network is used to predict the probability of a sentence in the target language, given a sentence in the source language (Bahdanau et al., 2014; Vaswani et al., 2017). For this approach to be effective, a large number of bilingual training samples - consisting of sentences and their corresponding translations - is needed. This poses a challenge, especially when we want to build a system for a specific domain, where zero or only limited amounts of in-domain bilingual data are available. In these situations, people turn towards monolingual text data, which is simply text in the source or target language and of which plenty exists for most languages and domains. Before NMT became feasible, the preferred way of incorporating additional monolingual data in the MT system was the usage of an external target-side language model (LM), which is trained on monolingual data to predict the probability of a sentence (Brown et al., 1990; Della Pietra, 1994; Zens et al., 2002). However, with the rise of NMT, it was found that a technique called back-translation outperforms the LM incorporation by a large margin (Sennrich et al., 2016a). Back-translation is a two step process, where we first create synthetic parallel data by automatically translating target side monolingual data into the source language. Then, the final NMT system is trained on the combination of the real and synthetic parallel data. It was argued that the backtranslation approach better suits the NMT framework because the NMT system implicitly learns an internal language model (ILM) as part of the training, which might interfere with an additional external LM (Sennrich et al., 2016a). More recently, for automatic speech recognition (ASR), there have been works focusing on neutralizing this ILM before combination with an external LM and significant improvements were reported (McDermott et al., 2019; Variani et al., 2020; Meng et al., 2021; Zeyer et al., 2021; Zeineldeen et al., 2021). 
In this work, we adapt the methods for ILM compensation, developed for ASR, and test them for NMT. We compare against back-translation in different settings and find that ILM compensation significantly boosts the performance of LM fusion, although back-translation still outperforms this approach for NMT. Also, applying ILM compensation on top of back-translation does not result in significant performance improvements.

## 2 Related Work

Several approaches to combine an LM and an NMT model have been proposed in the past. Shallow fusion (SF) is the most straightforward way, using a weighted log-linear combination of the model output probabilities (Gulcehre et al., 2015, 2017). Deep fusion denotes the concatenation of the hidden states of NMT model and LM and requires joint fine-tuning of both models (Gulcehre et al., 2015, 2017). Simple fusion is similar to shallow fusion, but the NMT model is trained using information from a pre-trained LM (Stahlberg et al., 2018). For the task of ASR, researchers have recently started to remove the ILM that is implicitly learned. The biggest question there is how to best approximate the ILM. Approaches include: (1) training an additional LM on the target side of the parallel data (McDermott et al., 2019), (2) removing/averaging encoder information (Variani et al., 2020; Meng et al., 2021; Zeyer et al., 2021) and (3) training a small sub-network while freezing all other parameters (Zeineldeen et al., 2021).

As an alternative to LM fusion, back-translation (Schwenk, 2008; Bertoldi and Federico, 2009; Sennrich et al., 2016a) has become the standard method for incorporating additional monolingual data for NMT. Some work has been done to improve this approach, including sampling (Edunov et al., 2018; Graça et al., 2019), tagging (Caswell et al., 2019) and block-BT (Popel et al., 2020). For the sake of simplicity, we focus on the standard back-translation approach using beam search in this work.

Apart from using an external LM and back-translation, additional monolingual data can also be utilized by pre-training (Ramachandran et al., 2017; Zhu et al., 2019), multi-task learning (Zhang and Zong, 2016; Domhan and Hieber, 2017) or post-editing (Junczys-Dowmunt and Grundkiewicz, 2016; Freitag et al., 2019). In principle, all these approaches can also be combined with LM fusion, potentially further improving the performance of the resulting system.

## 3 Internal LM Estimation

During decoding, given a source sentence $f_1^J$ and a model $P(e_1^I|f_1^J)$, we want to find the translation $\hat{e}_1^{\hat{I}}$ that maximizes

$$\hat{e}_{1}^{\hat{I}}=\operatorname*{argmax}_{I,e_{1}^{I}}\left\{P(e_{1}^{I}|f_{1}^{J})\right\}.$$

In our framework, $P$ is the combination of three models:

$$P(e_{1}^{I}|f_{1}^{J})\propto P_{\text{MT}}(e_{1}^{I}|f_{1}^{J})\cdot P_{\text{LM}}^{\lambda_{1}}(e_{1}^{I})\cdot P_{\text{ILM}}^{-\lambda_{2}}(e_{1}^{I})$$

where $P_{\text{MT}}$, $P_{\text{LM}}$ and $P_{\text{ILM}}$ are the probabilities of the NMT model, the external LM (trained on additional monolingual data) and the ILM, respectively, and $\lambda_1, \lambda_2 \geq 0$. Note that the ILM gets a negative weight, because we want to neutralize its impact in this model combination. If $\lambda_2 = 0$, we fall back to standard shallow fusion.

In principle, the ILM can be exactly calculated from the NMT model by marginalizing over all source sentences $f_1^J$. However, this summation would be intractable. Instead, different ILM approximations have been proposed in the recent past for ASR, which we will briefly recall here.
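To make the use of the combination above concrete, the following is a minimal sketch of how the three scores can be fused in log space when ranking a single hypothesis during beam search. The per-token log-probabilities are assumed to be provided by the NMT model, the external LM, and the chosen ILM approximation; the function name is illustrative and not part of any released implementation.

```python
from typing import List

def fused_hypothesis_score(
    log_p_mt: List[float],   # per-token log P_MT(e_i | e_<i, f_1^J)
    log_p_lm: List[float],   # per-token log P_LM(e_i | e_<i)
    log_p_ilm: List[float],  # per-token log P_ILM(e_i | e_<i)
    lambda_1: float,
    lambda_2: float,
) -> float:
    """Log-linear combination log P_MT + lambda_1 * log P_LM - lambda_2 * log P_ILM,
    summed over the tokens of one hypothesis. lambda_2 = 0 recovers shallow fusion."""
    assert len(log_p_mt) == len(log_p_lm) == len(log_p_ilm)
    return sum(
        mt + lambda_1 * lm - lambda_2 * ilm
        for mt, lm, ilm in zip(log_p_mt, log_p_lm, log_p_ilm)
    )
```

In practice, this score would be accumulated token by token inside the beam search rather than computed once per finished hypothesis.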
For a more in-depth discussion of the different approximation methods we refer the reader to Zeineldeen et al. (2021). separate LM : The ILM is approximated by training a separate LM on the target side of the parallel training data. h = 0 : The ILM is approximated by taking the fully trained NMT model PMT(e I1|f J 1 ) and setting the encoder outputs h J 1 to 0. h = havg : Instead of setting all encoder outputs h J 1 to 0, we replace the vector hj for each position j with the average havgj , extracted over the whole parallel training data. c = cavg : Instead of h, we replace all context vectors c (the output of the encoder-decoder attention module) with the position-wise average over the whole parallel training data. mini-self-attn : We replace the encoder-decoder attention of the fully trained NMT model with an additional self-attention module (with causal masking), which is then trained on the target side of the parallel training data while the rest of the NMT network is frozen. This is different from the *separate LM* approach because most of the parameters are still shared between NMT model and ILM, which might result in a better overall ILM approximation.1 ## 4 Experiments We perform experiments on four machine translation tasks, representing different data conditions. 1In their work, Zeineldeen et al. (2021) used a miniLSTM network with the same dependencies as our mini-selfattention. ![2_image_0.png](2_image_0.png) The exact data conditions and statistics are provided in the Appendix A. For all tasks, the additional monolingual data, as well as the test sets, are in the news domain. The monolingual data comes from Newscrawl2 where we sample ca. 10M sentences for LM training and back-translation. For IWSLT En→De and **IWSLT En**→It, the parallel training data consists of around 200k sentence pairs and is in the scientific-talks-domain, coming from the IWSLT17 Multilingual Task (Cettolo et al., 2017). For this setting, we expect the biggest improvements from the additional monolingual data, since the parallel data is out-of-domain. For NEWS En→De, the parallel training data (around 300k sentence pairs) is in the news domain, coming from the NewsCommentaryV14 corpus3. Finally, WMT14 En→De is a standard NMT benchmark used by Vaswani et al. (2017) where the parallel training data consists of around 3.9M sentence pairs and is of mixed domain. We tokenize the data using byte-pair-encoding (Sennrich et al., 2016b; Kudo, 2018) with 15k joint merge operations (40k for WMT14). The models are implemented using the fairseq toolkit (Ott et al., 2019) following the transformer base architecture (Vaswani et al., 2017). The details of the training setups can be found in Appendix A. All systems are trained until the validation perplexity no longer improves and the best checkpoint is selected using validation perplexity as well. 
We use beam-search with beam-size 12 and utilize SacreBLEU (Post, 2018) to calculate BLEU (Papineni et al., 2002) 2https://data.statmt.org/news-crawl/ 3https://data.statmt.org/news-commentary/v14/ | Method | valid-PPL | |---------------------|-------------| | separate LM | 109.9 | | h = 0 | 251.3 | | h = havg | 240.9 | | c = cavg | 244.2 | | mini-self-attention | 108.4 | | ILM | λ1 | λ2 | BLEU | TER | |----------------|------|------|--------|-------| | - | 0 | 0 | 28.9 | 52.8 | | - | 0.15 | 0.0 | 30.0 | 52.3 | | separate LM | 0.5 | 0.3 | 31.2 | 50.9 | | h = 0 | 0.5 | 0.3 | 30.8 | 51.3 | | h = havg | 0.5 | 0.3 | 31.1 | 51.1 | | c = cavg | 0.5 | 0.3 | 30.6 | 51.5 | | mini-self-attn | 0.5 | 0.4 | 31.7 | 50.0 | and TER (Snover et al., 2006). We report BLEU and TER since we are most familiar with these metrics and to be comparable with previous works. However, we acknowledge that these metrics might have some biases and in future work it might be worth utilizing additional metrics like COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020). Additionally, in future work we should separate our test sets for original source and target text to better understand the effect of translationese in both training and test data, as this might very much influence the improvements we see, especially in the case of back-translation (Freitag et al., 2020). ## 4.1 Comparison Of Ilm Approximations We start by analyzing the ILM neutralization approaches on the IWSLT En→De task and then verify the results on the other tasks. We implement and re-train (if applicable) all the different ILM approximation methods discussed in Section 3. The resulting perplexities on the validation set are listed in Table 1. The variants *separate* LM and *mini-self-attention* have been trained directly using the language model objective, so it is no surprise that they exhibit a much lower perplexity than the other approaches. However, it can be argued that a lower perplexity of the ILM does not necessarily correspond to a better approximation | Method | IWSLT En-De | IWSLT En-It | NEWS En-De | WMT14 En-De | | | | | |---------------------------|---------------|---------------|--------------|---------------|-------|------|-------|------| | BLEU | TER | BLEU | TER | BLEU | TER | BLEU | TER | | | baseline external | - | - | - | - | †32.3 | - | ‡27.3 | - | | baseline ours | 28.9 | 52.8 | 24.1 | 58.9 | 32.8 | 49.0 | 27.7 | 56.5 | | +SF | 30.0 | 52.3 | 24.8 | 58.8 | 33.2 | 49.8 | 28.1 | 56.6 | | +ILM (separate LM) | 31.2 | 50.9 | 26.0 | 57.8 | 34.7 | 47.6 | 28.8 | 55.3 | | +ILM (mini-self-attn) | 31.7 | 50.0 | 26.1 | 57.0 | 35.1 | 47.5 | 29.1 | 54.8 | | back-translation | 34.1 | 47.4 | 27.2 | 56.9 | 35.7 | 45.8 | 29.5 | 54.7 | | +SF +ILM (mini-self-attn) | 34.1 | 47.6 | 27.3 | 56.7 | 35.7 | 46.0 | 29.8 | 54.3 | of the implicit language model. In order to effectively use the external LM and the ILM during decoding, we need to optimize the weights λ1 and λ2 (see Section 3). We do this via a grid search over the validation set by optimizing for the highest BLEU score. The resulting grid for the *mini-self-attention* ILM variant on the IWSLT En→De task is shown in Figure 1. The NMT system by itself has a BLEU [%]score of 21.2. By log-linear combination with just the external LM (λ2 = 0, vanilla shallow fusion) we can gain around 1% absolute improvement on the validation set with the best choice of λ1 = 0.15. By including the ILM with a negative weight, we can get further improvements, up to a final score of 23.8 BLEU [%]. 
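As a rough illustration of the weight tuning described above, the grid search can be sketched as follows; `decode_and_score_dev` is a placeholder for decoding the validation set with a given weight pair and scoring it with SacreBLEU, and the example grid values are assumptions rather than the exact grid used in the experiments.

```python
import itertools
from typing import Callable, Sequence, Tuple

def tune_fusion_weights(
    decode_and_score_dev: Callable[[float, float], float],
    grid: Sequence[float],
) -> Tuple[float, float, float]:
    """Exhaustive grid search over (lambda_1, lambda_2) on the validation set,
    keeping the pair that yields the highest BLEU."""
    best_lam1, best_lam2, best_bleu = 0.0, 0.0, float("-inf")
    for lam1, lam2 in itertools.product(grid, repeat=2):
        bleu = decode_and_score_dev(lam1, lam2)  # decode dev set, compute BLEU
        if bleu > best_bleu:
            best_lam1, best_lam2, best_bleu = lam1, lam2, bleu
    return best_lam1, best_lam2, best_bleu

# Example usage (grid values are illustrative):
# tune_fusion_weights(my_decode_fn, [0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
```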
4Interestingly, the best performance is reached when λ1 ≈ λ2 and with the ILM neutralization, the external LM can be assigned a much bigger weight compared to the case λ2 = 0. We find that for all ILM approximation variants, the optimal weights are similar, and that the TER scores on the validation set follow an almost identical pattern. The final performance of each variant on the test set is shown in Table 2. We want to point out, that the improvements we see on the validation set transfer nicely to the test set with the same tuned weights λ1 and λ2. This is because, in our experiments, the validation and test sets are of the exact same domain. In some additional experiments we found that the optimal values for these weights are indeed domain specific and have to be re-tuned if the system were to be optimized for a different domain. All ILM approximation variants lead to a significant performance improvement over simple shallow fusion. Out of all ILM approximations, the *mini-self-attention* approach performs best, which is the same observation that Zeineldeen et al. (2021) made for ASR. ## 4.2 Comparison To Back-Translation For the back-translation experiments, we train NMT systems on the same parallel training data in the reverse direction and then translate a total of 10M sentences from the monolingual target data (the same data used for training the external LM). Afterwards, the final systems are trained on the combination of real and synthetic data. The final results for all four MT tasks are shown in Table 3. We observe the same trend for all four MT tasks. In general, the improvements from the additional monolingual data are getting smaller, when the amount of parallel training data increases. In almost all cases, shallow fusion gives a small improvement over just using the NMT system. ILM neutralization again improves consistently over simple shallow fusion, with the *mini-self-attn* approximation variant always performing the best. Back-translation out-performs language model integration on all four tasks, although the gap is getting smaller the more parallel training data is available. We also combine back-translation with the best ILM approximation approach (*mini-self-attn*). This does not further increase translation quality, with the exception of the WMT14 task, where we see a small improvement. In general, the ILM approach performs the closest to back-translation on the WMT14 task, so it might be worthwhile to apply this concept to an even bigger MT task. 5 Conclusion We re-visit the method of language model integration for neural machine translation. We implement and experiment with a new approach of neutralizing the implicit language model, which has already shown promising result for the task of automatic speech recognition. We find that ILM neutralization significantly improves the translation quality compared to standard shallow fusion. However, back-translation as an alternative way to incorporate additional monolingual data, still outperforms the approaches using an external language model. Therefore, for future work we will focus on scenarios where back-translation can not be applied effectively, e.g. when the quality of the initial NMT system is too bad to create helpful synthetic data. 
## Acknowledgements This work was partially supported by the project HYKIST funded by the German Federal Ministry of Health on the basis of a decision of the German Federal Parliament (Bundestag) under funding ID ZMVI1-2520DAT04A, and by NeuroSys which, as part of the initiative "Clusters4Future", is funded by the Federal Ministry of Education and Research BMBF (03ZU1106DA). ## Limitations The approach of language model integration for neural machine translation is analyzed and compared to the de-facto standard method of backtranslation. Due to constrained resources, this work has several limitations. We focus on translation of text in a single domain, namely news-articles. Different domains might exhibit different behaviour. For the back-translation experiments, we use beam search to create the synthetic data, other methods like sampling were not considered. When combining the synthetic and real parallel data, there are additional methods like tagging and block-wise batching, which we did not utilize in this work. Finally, we compare against the most commonly used LM fusion approach, i.e. shallow fusion. There exist other LM fusion techniques which might exhibit different behaviour when used in combination with ILM neutralization. ## References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. *arXiv preprint* arXiv:1409.0473. Nicola Bertoldi and Marcello Federico. 2009. Domain adaptation for statistical machine translation with monolingual resources. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 182–189, Athens, Greece. Association for Computational Linguistics. Peter F Brown, John Cocke, Stephen A Della Pietra, Vincent J Della Pietra, Frederick Jelinek, John Lafferty, Robert L Mercer, and Paul S Roossin. 1990. A statistical approach to machine translation. *Computational linguistics*, 16(2):79–85. Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 53–63. Mauro Cettolo, Marcello Federico, Luisa Bentivogli, Jan Niehues, Sebastian Stüker, Katsuitho Sudoh, Koichiro Yoshino, and Christian Federmann. 2017. Overview of the iwslt 2017 evaluation campaign. In Proceedings of the 14th International Workshop on Spoken Language Translation, pages 2–14. Vincent J Della Pietra. 1994. The mathematics of statistical machine translation: Parameter estimation. Using Large Corpora, page 223. Tobias Domhan and Felix Hieber. 2017. Using targetside monolingual data for neural machine translation through multi-task learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1500–1505. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 489–500. Akhbardeh Farhad, Arkhangorodsky Arkady, Biesialska Magdalena, Bojar Ondˇrej, Chatterjee Rajen, Chaudhary Vishrav, Marta R Costa-jussa, España-Bonet Cristina, Fan Angela, Federmann Christian, et al. 2021. Findings of the 2021 conference on machine translation (wmt21). In Proceedings of the Sixth Conference on Machine Translation, pages 1–88. Association for Computational Linguistics. Markus Freitag, Isaac Caswell, and Scott Roy. 2019. APE at scale and its implications on MT evaluation biases. 
In *Proceedings of the Fourth Conference on* Machine Translation (Volume 1: Research Papers), pages 34–44, Florence, Italy. Association for Computational Linguistics. Markus Freitag, David Grangier, and Isaac Caswell. 2020. BLEU might be guilty but references are not innocent. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 61–71, Online. Association for Computational Linguistics. Miguel Graça, Yunsu Kim, Julian Schamper, Shahram Khadivi, and Hermann Ney. 2019. Generalizing back-translation in neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 45– 52, Florence, Italy. Association for Computational Linguistics. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. *arXiv preprint arXiv:1503.03535*. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, and Yoshua Bengio. 2017. On integrating a language model into neural machine translation. *Computer Speech & Language*, 45:137–148. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2016. Log-linear combinations of monolingual and bilingual neural machine translation models for automatic post-editing. In *Proceedings of the First Conference on Machine Translation: Volume 2, Shared* Task Papers, pages 751–758, Berlin, Germany. Association for Computational Linguistics. Yunsu Kim, Duc Thanh Tran, and Hermann Ney. 2019. When and why is document-level context useful in neural machine translation? In Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019), pages 24–34. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 66–75. Association for Computational Linguistics. Erik McDermott, Hasim Sak, and Ehsan Variani. 2019. A density ratio approach to language model fusion in end-to-end automatic speech recognition. In *2019* IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 434–441. IEEE. Zhong Meng, Sarangarajan Parthasarathy, Eric Sun, Yashesh Gaur, Naoyuki Kanda, Liang Lu, Xie Chen, Rui Zhao, Jinyu Li, and Yifan Gong. 2021. Internal language model estimation for domain-adaptive end-to-end speech recognition. In *2021 IEEE Spoken Language Technology Workshop (SLT)*, pages 243–250. IEEE. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Martin Popel, Marketa Tomkova, Jakub Tomek, Łukasz Kaiser, Jakob Uszkoreit, Ondˇrej Bojar, and Zdenekˇ Žabokrtsky. 2020. Transforming machine transla- ` tion: a deep learning system reaches news translation quality comparable to human professionals. *Nature* communications, 11(1):1–15. Matt Post. 2018. A call for clarity in reporting bleu scores. 
In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191. Prajit Ramachandran, Peter J Liu, and Quoc Le. 2017. Unsupervised pretraining for sequence to sequence learning. In *Proceedings of the 2017 Conference on* Empirical Methods in Natural Language Processing, pages 383–391. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Holger Schwenk. 2008. Investigations on large-scale lightly-supervised training for statistical machine translation. In Proceedings of the 5th International Workshop on Spoken Language Translation: Papers, pages 182–189, Waikiki, Hawaii. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223–231. Felix Stahlberg, James Cross, and Veselin Stoyanov. 2018. Simple fusion: Return of the language model. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 204–211. Ehsan Variani, David Rybach, Cyril Allauzen, and Michael Riley. 2020. Hybrid autoregressive transducer (hat). In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6139–6143. IEEE. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Mohammad Zeineldeen, Aleksandr Glushko, Wilfried Michel, Albert Zeyer, Ralf Schlüter, and Hermann Ney. 2021. Investigating methods to improve language model integration for attention-based encoderdecoder asr models. In *Interspeech*, pages 2856– 2860. Richard Zens, Franz Josef Och, and Hermann Ney. 2002. Phrase-based statistical machine translation. In *Annual Conference on Artificial Intelligence*, pages 18– 32. Springer. Albert Zeyer, André Merboldt, Wilfried Michel, Ralf Schlüter, and Hermann Ney. 2021. Librispeech transducer model with internal language model prior correction. *arXiv e-prints*, pages arXiv–2104. Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1535–1545. Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tieyan Liu. 2019. 
Incorporating bert into neural machine translation. In International Conference on Learning Representations. ## A Appendix All validation and test sets are from the WMT news translation tasks (Farhad et al., 2021). The validation/test sets are WMT newstest2015/newstest2018 for IWSLT En→De and NEWS En→De, newssyscomb2009/newstest2009 for IWSLT En→It and newstest2013/newstest2014 for WMT14 En→De. Data statistics can be found in Table 4. | task | dataset | domain | # sent. | |--------|-----------|------------------|-----------| | IWSLT | train | scientific-talks | 210k | | En→De | valid | news | 2.2k | | test | news | 3k | | | mono. | news | 9.7M | | | IWSLT | train | scientific-talks | 232k | | En→It | valid | news | 500 | | test | news | 2.5k | | | mono. | news | 10.0M | | | NEWS | train | news | 330k | | En→De | valid | news | 2.2k | | test | news | 3k | | | mono. | news | 9.7M | | | WMT14 | train | mixed | 3.9M | | En→De | valid | news | 3k | | test | news | 3k | | | mono. | news | 10.0M | | We use dropout 0.3 and label-smoothing 0.2 for IWSLT En→De, IWSLT En→It and NEWS En→De and dropout 0.3 and label-smoothing 0.1 for WMT14 En→De. The resulting NMT models had ca. 51M parameters for IWSLT En→De, IWSLT En→It and NEWS En→De and ca. 67M parameters for WMT14 En→De. The NMT training took around 24h for IWSLT En→De, IWSLT En→It and NEWS En→De and around 150h for WMT14 En→De on a single NVIDIA GeForce RTX 2080 Ti graphics card. The language models had ca. 26M parameters for IWSLT En→De, IWSLT En→It and NEWS En→De and ca. 41M parameters for WMT14 En→De. All language model trainings took around 150h on a single NVIDIA GeForce RTX 2080 Ti graphics card. Due to computational limitations, we report results only for a single run. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations ✗ A2. Did you discuss any potential risks of your work? The authors do not foresee potential risks of this work. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 Experiments ✓ B1. Did you cite the creators of artifacts you used? Section 4 Experiments ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? All artifacts that were used allow such usage for research purposes. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? All artifacts that were used allow such usage for research purposes. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We only use standard datasets which allow usage for research purposes. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4 Experiments ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. 
for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section Appendix ## C ✓ **Did You Run Computational Experiments?** Section 4 Experiments ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 Experiments ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section Appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 Experiments D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.

li-etal-2023-type
Type Enhanced {BERT} for Correcting {NER} Errors
https://aclanthology.org/2023.findings-acl.445
We introduce the task of correcting named entity recognition (NER) errors without re-training the model. After an NER model is trained and deployed in production, it makes prediction errors, which usually need to be fixed quickly. To address this problem, we first construct a gazetteer containing named entities and their possible entity types. We then propose type-enhanced BERT (TyBERT), a method that integrates a named entity's type information into BERT through an adapter layer. When errors are identified, we can repair the model by updating the gazetteer; in other words, the gazetteer becomes a trigger that controls the NER model's output. Experimental results on multiple corpora show the effectiveness of our method, which outperforms strong baselines.
# Type Enhanced Bert For Correcting Ner Errors Kuai Li∗, Chen Chen∗**, Tao Yang, Tianming Du, Peijie Yu, Dong Du and Feng Zhang** Machine Learning Platform Department, Tencent {kuaili, chenzchen, rigorosyang}@tencent.com {blackdu, peijieyu, dongdu, jayzhang}@tencent.com ## Abstract We introduce the task of correcting named entity recognition (NER) errors without retraining the model. After a NER model is trained and deployed in production, it makes prediction errors, which usually need to be fixed quickly. To address this problem, we firstly construct a gazetteer containing named entities and corresponding possible entity types. And then, we propose type-enhanced BERT (TyBERT), a method that integrates the named entity's type information into BERT by an adapter layer. When errors are identified, we can repair the model by updating the gazetteer. In other words, the gazetteer becomes a trigger to control the NER model's output. The experiment results in multiple corpus show the effectiveness of our method, which outperforms strong baselines. ## 1 Introduction Named entity recognition (NER) is the task of identifying spans that belong to particular categories, such as person, location, *organization*, etc. The NER task is important in the information extraction area and NER models are widely deployed in real production systems (Yadav and Bethard, 2019). In recent years, many neural-based methods were proposed to push NER accuracy by designing novel network architectures (Lample et al., 2016; Devlin et al., 2018; Straková et al., 2019; Xue et al., 2022) or incorporating external knowledge (Liu et al., 2019; Wang et al., 2021). Unfortunately, all approaches are still far from perfect. When the model is served in production, we may still encounter recognition errors (e.g., bad cases). Typically, to fix those bad cases, model developers need to (1) annotate the input sentences causing errors with correct labels, (2) combine newly annotated sentences with existing training data, (3) train and tune a new model with the new training data *Equal contribution. ![0_image_0.png](0_image_0.png) Figure 1: Two motivating examples and the overall process to fix errors by updating the gazetteer. and held-out evaluation data, and finally (4) deploy the new model in production. As one can tell, the above process is time-consuming, and cannot meet the requirement of fixing urgent errors quickly in a real production environment. Therefore, in this paper, we aim to tackle the problem of how to correct NER errors without retraining models.1 Taking case 1 and 2 from Figure 1 as examples, there are two kinds of common NER errors when we train and evaluate a model in the English Few-NERD (Ding et al., 2021) corpus: (1) the model fails to recognize the span "XJ220" as a named entity; (2) the model correctly identifies the boundary of the named entity "Nicaragua", but assigns a wrong entity type to it. For the first error, we find the span "XJ220" never appears in the training dataset. Therefore, it is difficult for the model to classify this span as a 1One may argue that this task is trivial if we simply construct a database containing sentences with recognition errors, and then always look up the database before requesting the NER model. But this naive approach is not sustainable as the number of bad cases grows, and the database cannot generalize to any unseen cases. named entity with limited context. 
For the second error, the mention "Nicaragua" is found in the training dataset, but it is labeled with a different type location. Because of the incomplete type information, the model mistakenly classifies the mention as type *location*, though the correct label should be organization_sportsteam. The above examples suggest that if we have proper type information about the span, the model may correct its mistakes, even without re-training. It motivates us to propose the Type Enhanced BERT (TyBERT) method that combines BERT with type information from a gazetteer. As shown in Figure 1, the gazetteer is a list of pairs of spans and possible entity types. During training, we first look up spans from the gazetteer in training examples, and then integrate the matched span's type information into BERT layers by an adapter layer. In the inference stage, the test examples are processed in the same way. In such a manner, the model is tied to the gazetteer, which will play an important role when the model makes predictions. When encountering the aforementioned two kinds of errors, we can update the gazetteer: we insert a new named entity "XJ220" with the expected type *product_car*, and add a new type organization_sportsteam for the existing named entity "Nicaragua". Moreover, we introduce a noise rate parameter λ to randomly add some noise to the gazetteer. This parameter serves as an adjuster to balance the strength of the gazetteer and the generalization ability of the model. To our knowledge, this is the first work to systematically study how to improve NER models without re-training models. When evaluated in four NER corpus in English and Chinese, the proposed method performs well in fixing errors and outperforms strong baselines. Our code and data will be released after publication. ## 2 Related Work Our work is influenced by existing methods which combine both neural networks and lexicons or gazetteers for NER. For example, Zhang and Yang (2018) proposed a lattice-structured LSTM encoding both a sequence of input characters and potential words that match a pre-gathered lexicon. Sui et al. (2019) presented Collaborative Graph Network to solve the challenges of self-matched lexical words and the nearest contextual lexical words. Gui et al. (2019) aimed to alleviate the word ambiguity issue by a lexicon-based graph neural network with global semantics. Lin et al. (2019) designed an attentive neural network to explicitly model the mention-context association and gazetteer network to effectively encode name regularity of mentions only using gazetteers. Li et al. (2020) introduced a flat-lattice Transformer to incorporate lexicon information for Chinese NER. Meng et al. (2021) invented GEMNET to include a Contextual Gazetteer Representation encoder, combined with a novel Mixture-of-Expert gating network to conditionally utilize this information alongside any word-level model. Fetahu et al. (2022) invented an approach of using a token-level gating layer to augment pretrained multilingual transformers with gazetteers from a target domain. Finally, Liu et al. (2021) proposed Lexicon Enhanced BERT (LEBERT) for Chinese sequence labeling, which integrates external lexicon knowledge into BERT layers directly by a Lexicon Adapter layer. It is worth noting that none of the previous works can be directly applied for correcting NER models without re-training. For example, LEBERT requires learning lexicon embeddings in the adapter layer. 
If we want to add a new span in the lexicon to fix a bad case, the model has to be re-trained to learn the new span's embedding. ## 3 Method 3.1 Gazetteer Construction As noted before, the gazetteer contains a list of named entities and their possible entity types. In this paper, we collect the gazetteer solely from NER annotations in the dataset. For instance, given the following two annotated sentences from the Few-NERD corpus: London[art−music]*is the fifth album by the* British[location−gpe]rock band. He is domiciled in London[location−gpe]. We will construct the following gazetteer: London [art-music, location-gpe] British [location-gpe] We employ this simple approach because it is applicable for NER tasks in any language or domain. One can also use external resources such as Wikipedia to construct a larger gazetteer (Fetahu et al., 2021). We will explore a larger gazetteer in future work because it is not the focus in this paper. Furthermore, although the generated gazetteer is pretty accurate, a downside is that when we integrate such a high-quality gazetteer in the model, the model tends to put too much trust in the gazetteer. In the other way round, it hurts the model's generalization ability. Therefore, we intentionally add some noise to the gazetteer. Specifically, with probability λ, we choose one of the following three strategies to add noise: (1) randomly select a span that is not labeled as named entity, and then add it to the gazetteer with a random entity type; (2) for a labeled named entity span, add it to the gazetteer with a randomly assigned wrong entity type; (3) skip over adding a labeled named entity span to the gazetteer. In practice, we set λ to a small value, so that it gives the gazetteer strong control in making final predictions, while the model's generalization ability is still reserved to some degree. Note that during training, the gazetteer is constructed using training and development data. When we want to fix errors in test data, the gazetteer is updated using test data. ## 3.2 Model Architecture TyBERT is built on standard BERT with two modifications: (1) given a sentence, the input word sequence is converted to a word-type pair sequence that will be the input for TyBERT; (2) a type adapter for integrating type information in BERT is attached between Transformer layers. Word-Type Pair Sequence. Given a gazetteer G and a sentence with a sequence of words sw = {w1, w2*, ..., w*n}, we match the word sequence with G to find out all potential named entities inside the sentence. So we have a word-type pair sequence swt = {wt1, wt2*, ..., wt*n}. When the word wiis not a part of any potential named entity, wtiis wi. Otherwise, wtiis (wi, ti), where tiis all matched entities' types with B- or I- as prefix to indicate whether it begins or inside a named entity. Taking the sentence "London Bridge is famous" for example, the word "London" is a part of two potential named entities, i.e., (1) "London" with type *art-music* and *location-gpe*, and (2) "London Bridge" with type *building*. Therefore, ti for the word "London" is {[B-art-music, B-location-gpe], [B − *building*]}. Formally, we have ti={*T ype*(xij )}. xij is the j th potential named entity that contains the word wi. T ype(x)=[et1, et2*, ..et*k] represents all possible entity types of named entity x based on G, and etiis one of the possible labels, such as B-art-*music*, etc. Type Adapter. 
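The gazetteer lookup just described can be sketched as follows. The snippet builds a span-to-types gazetteer from annotated mentions and attaches B-/I-prefixed type labels to every word that participates in a match; the data structures and function names are illustrative choices under the assumption of whitespace-tokenized English, not the authors' released code.

```python
from collections import defaultdict
from typing import Dict, List

def build_gazetteer(annotations: List[tuple]) -> Dict[str, List[str]]:
    """annotations: (span_text, entity_type) pairs collected from NER labels."""
    gazetteer = defaultdict(list)
    for span, etype in annotations:
        if etype not in gazetteer[span]:
            gazetteer[span].append(etype)
    return gazetteer

def match_word_types(words: List[str],
                     gazetteer: Dict[str, List[str]],
                     max_span_len: int = 4) -> List[List[List[str]]]:
    """For each word, collect one list of B-/I- type labels per matched span."""
    word_types = [[] for _ in words]
    for start in range(len(words)):
        for end in range(start + 1, min(start + max_span_len, len(words)) + 1):
            span = " ".join(words[start:end])
            if span in gazetteer:
                for pos in range(start, end):
                    prefix = "B-" if pos == start else "I-"
                    word_types[pos].append([prefix + t for t in gazetteer[span]])
    return word_types

gaz = build_gazetteer([("London", "art-music"), ("London", "location-gpe"),
                       ("London Bridge", "building")])
print(match_word_types("London Bridge is famous".split(), gaz))
# word "London" gets [['B-art-music', 'B-location-gpe'], ['B-building']]
```

The Type Adapter described next consumes exactly these per-word, per-span type labels.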
Our Type Adapter (TA) is shown ![2_image_0.png](2_image_0.png) ![2_image_4.png](2_image_4.png) ![2_image_2.png](2_image_2.png) ![2_image_1.png](2_image_1.png) ![2_image_3.png](2_image_3.png) Lang. Dataset Type Train Dev **Test** Sent 75.8k 9.4k 9.6k Token 1299k 163k 169k Entity 81k 11k 11k Sent 131k 18.8k 37.6k Token 3227k 463k 921k Entity 340k 48.7k 96.9k Sent 15.7k 4.3k 4.3k Token 491k 200k 208k Entity 13.3k 6.9k 7.6k Sent 13.5k 0.27k 0.27k Token 7.4k 14.6k 14.9k Entity 1.8k 0.38k 0.41k in Figure 2, which is inspired by Lexicon Adapter proposed in Liu et al. (2021). Specifically, as discussed above, ti has a two-level structure, so we propose a two-level attention mechanism. Firstly, at position i, we compute the cross attention between the hidden state hi with the embeddings of possible entity types *T ype*(xij ) for a potential named entity xij to obtain mij . Then we compute another cross attention between the hidden state hi and mij , and finally obtain the new hidden state h˜i. Compared with BERT, the only extra parameters of TyBERT are the embeddings of entity type etk and related weights in two cross attentions, which can be fully learned in training time. Thus, when updating the gazetteer in test time, we don't have to update any parameters in TyBERT. Following Liu et al. (2021), we only insert a TA after the first transformer layer. ![3_image_0.png](3_image_0.png) English **Chinese** OntoNotes V5.0 Few-NERD OntoNotes V4.0 Weibo P R F-1 P R F-1 P R F-1 P R F-1 BERT 89.32 86.94 88.11 69.65 67.19 68.4 83.45 81.39 82.41 72.16 70.09 71.11 BERT+Intersect 97.67 82.86 89.62 95.8 56.14 70.8 92.1 59.66 72.42 91.38 58.37 71.24 BERT+Union 78.12 89.87 87.28 54.69 94.91 69.39 34.99 94.06 51.07 51.01 90.19 65.16 TyBERT(λ=0.05) 94.67 94.82 94.74 86.86 87.76 87.31 86.93 85.03 85.97 **73.79 80.86 77.16** ## 4 Evaluation 4.1 Experimental Setup Datasets. For evaluation, we employ four datasets, two in English and two in Chinese. For English, we employ the commonly used OntoNotes 5.0 corpus (Pradhan et al., 2013) and also the challenging Few-NERD corpus (Ding et al., 2021) with 66 finegrained types. For Chinese, we employ OntoNotes 4.0 corpus (Weischedel et al., 2011) and Weibo corpus (Peng and Dredze, 2015, 2016) from social media domain. The detailed statistics of four corpora are shown in Table 1. Evaluation measures. Following previous NER works, Standard F1-score (F1), Precision (P) and Recall (R) are used as evaluation metrics. Hyperparameter tuning. We tune training related hyper-parameters in the development set and reported results in the test set. The tuned hyperparameter values are shown in Appendix A. Implementation details. The implementation details are explained in Appendix B. ## 4.2 Results Baseline systems. To compare with our proposed method, we use BERT (Devlin et al., 2018) as a baseline. Because standard BERT cannot correct errors without model re-training, we further designed two additional baseline systems. These two baseline systems ensemble BERT and a rule-based method using a gazetteer as follows. We construct the gazetteer using all of training, development and test data. Then the gazetteer is used to match the sentences in test data to identify named entities. When a span has multiple entity types, we randomly assign a type. Depending on whether we intersect or union the output of BERT and the rule-based method, we name two baseline systems BERT+Intersect and BERT+Union respectively. Discussions. 
Results of BERT, two extra baseline systems and our proposed TyBERT are shown in Table 2. As we can see, compared with BERT, BERT+Intersect improves BERT by a small margin ![3_image_1.png](3_image_1.png) ![3_image_2.png](3_image_2.png) in three corpora, and BERT+Union only improves BERT slightly in Few-NERD corpus. In contrast, with λ=0.05 (tuned on development set), our proposed method TyBERT improves BERT by a large margin, i.e., 6.63% and 18.91% in two English corpus, and 3.56% and 6.05% in two Chinese corpus. We notice that the improvement in Chinese corpus is smaller than in English corpus. The reason is that there are much more named entities with multiple types in Chinese corpus, e.g., the confusion of *location* and gpe have caused many errors. In future work, we plan to consider named entity's context to fix errors. We have separately analyzed the gains brought by our solution on the ontonotes v4.0 datasets are shown in Appendix D. ## 4.3 Impact Of Gazetteer Noise We further conduct experiments to study the impact of gazetteer noise in Chinese OntoNotes corpus. Results are shown in Table 3. For each λ, we show the results of TyBERT before and after updating the gazetteer using test data. A few observations are obtained. When λ is set to 0, the model before updating gazetteer loses generalization ability, and hence performs poorly. After λ is set to a nonzero value, the model before updating gazetteer improves a lot, and many errors are fixed after updating the gazetteer using test data. ## 5 Conclusions We introduced a new task of correcting NER errors without re-training models. We propose TyBERT which extended standard BERT model with an adapter layer to incorporate span's type information stored in a gazetteer. We further introduce a noise rate parameter to balance the strength of the gazetteer and model's generalization ability. Extensive results justified the effectiveness of the proposed method. We hope our work will inspire future studies towards NER error correction without model re-training. ## Limitations A limitation of the proposed method is that our gazetteer is constructed only by dataset annotations. And it affects the gazetteer coverage in unseen cases. Following previous work, such as Lin et al. (2019) and Fetahu et al. (2022), we will construct a larger gazetteer using external resources such as Wikipedia or knowledge bases. As mentioned in Section 3, we will leave this for future work. Another limitation is that the gazetteer contains many spans that are associated with multiple entity types. Taking the running examples in Section 3.1 for example, the span "London" has type *locationgpe* in most cases, while it is sometimes labeled as type *art-music*. However, in our current design, given a named entity, there is no way to explicitly distinguish between different types. In future work, we will consider the context of named entity when fixing errors. ## Ethics Statement We declare that all authors of this work comply with the ACL Ethics Policy as published in https://www.aclweb.org/portal/content/ acl-code-ethics. ## References Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Hai-Tao Zheng, and Zhiyuan Liu. 2021. Few-nerd: A few-shot named entity recognition dataset. *arXiv preprint arXiv:2105.07464*. 
Besnik Fetahu, Anjie Fang, Oleg Rokhlenko, and Shervin Malmasi. 2021. Gazetteer enhanced named entity recognition for code-mixed web queries. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1677–1681. Besnik Fetahu, Anjie Fang, Oleg Rokhlenko, and Shervin Malmasi. 2022. Dynamic gazetteer integration in multilingual models for cross-lingual and cross-domain named entity recognition. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 2777–2790. Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, and Xuan-Jing Huang. 2019. A lexicon-based graph neural network for chinese ner. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1040–1050. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360. Xiaonan Li, Hang Yan, Xipeng Qiu, and Xuanjing Huang. 2020. Flat: Chinese ner using flat-lattice transformer. *arXiv preprint arXiv:2004.11795*. Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun, Bin Dong, and Shanshan Jiang. 2019. Gazetteer-enhanced attentive neural networks for named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6232–6237. Tianyu Liu, Jin-Ge Yao, and Chin-Yew Lin. 2019. Towards improving neural named entity recognition with gazetteers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5301–5307. Wei Liu, Xiyan Fu, Yue Zhang, and Wenming Xiao. 2021. Lexicon enhanced chinese sequence labeling using bert adapter. *arXiv preprint arXiv:2105.07148*. Tao Meng, Anjie Fang, Oleg Rokhlenko, and Shervin Malmasi. 2021. Gemnet: Effective gated gazetteer representations for recognizing complex entities in low-context input. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 1499–1512. Nanyun Peng and Mark Dredze. 2015. Named entity recognition for chinese social media with jointly trained embeddings. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 548–554. Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for chinese social media with word segmentation representation learning. arXiv preprint arXiv:1603.00786. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In *Proceedings* of the Seventeenth Conference on Computational Natural Language Learning, pages 143–152. Jana Straková, Milan Straka, and Jan Hajic. 2019. Neu- ˇ ral architectures for nested ner through linearization. arXiv preprint arXiv:1908.06926. Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, and Shengping Liu. 2019. Leverage lexical knowledge for chinese named entity recognition via collaborative graph network. 
In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 3830–3840. Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, and Kewei Tu. 2021. Improving named entity recognition by external context retrieving and cooperative learning. *arXiv* preprint arXiv:2105.03654. Ralph Weischedel, Sameer Pradhan, Lance Ramshaw, Martha Palmer, Nianwen Xue, Mitchell Marcus, Ann Taylor, Craig Greenberg, Eduard Hovy, Robert Belvin, et al. 2011. Ontonotes release 4.0. LDC2011T03, Philadelphia, Penn.: Linguistic Data Consortium. Linting Xue, Aditya Barua, Noah Constant, Rami AlRfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2022. Byt5: Towards a token-free future with pre-trained byte-to-byte models. *Transactions of the Association for Computational Linguistics*, 10:291–306. Vikas Yadav and Steven Bethard. 2019. A survey on recent advances in named entity recognition from deep learning models. *arXiv preprint arXiv:1910.11470*. Yue Zhang and Jie Yang. 2018. Chinese ner using lattice lstm. *arXiv preprint arXiv:1805.02023*. ## C Corpus License D **Correct And Recall Details In Ontonotes** Datasets A Hyperparameter B Implementation Details Dataset types GPE ORG PER LOC Prediction 3501 1747 1933 313 Incorrect entities 573 417 124 126 In training set 397 181 52 70 Not in training set 176 236 72 56 Corrected by TyBert 35 4 10 9 In training set 8 - - 1 Not in training set 27 4 10 8 New incorrect entities 15 1 2 7 Table 6: The distribution of newly recalled labels by the model Few-NERD corpus is under the CC 821 BY-SA 4.0 license, Weibo corpus is under CC BY-SA 3.0 license and OntoNotes corpus are used under LDC license. These corpus does not contain any personally identifiable information or offensive content. | Dataset types | GPE | ORG | PER | LOC | |------------------------|-------|-------|-------|-------| | Golden label | 3452 | 1877 | 1864 | 491 | | No recall | 485 | 521 | 39 | 276 | | In training set | 344 | 148 | - | 70 | | Not in training set | 141 | 373 | 39 | 206 | | New recalled by Tybert | 197 | 305 | 22 | 71 | | In training set | 121 | 39 | - | 20 | | Not in training set | 76 | 266 | 22 | 51 | | New incorrect entities | 16 | 4 | - | 1 | | English | Chinese | | | | |--------------|-----------|-----------|-------|------| | OntoNotes | FewNERD | OntoNotes | Weibo | | | LR | 1e-4 | 2e-5 | 5e-5 | 3e-4 | | Weight Decay | 0.01 | 0.01 | 0.01 | 0.01 | | #Epoch | 10 | 5 | 20 | 8 | | Batch Size | 16 | 32 | 64 | 32 | Comparing BERT and TyBERT, mainly includes the following aspects: 1. number of errors for each type of entity 2. type of errors for each type of entity (substitution or deletion) 3. number of corrections for unseen data in the training 4. number of corrections for seen data in the training More details can be found in Table 5 and 6. The tuned hyperparemeters are shown in Table 4. We implemented the models using PyTorch. All models are initialized from BERT-base English or Chinese checkpoints(Devlin et al., 2018) which have about 110M parameters. Each experiment is trained on a single V100 GPU for about 1 to 4 hours depending on the corpus size. Table 4: The hyperparameters used in four corpus. Table 5: The distribution of error labels corrected by the model ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. 
Did you discuss any potential risks of your work? Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4. We use the pre-trained language models including BERT-English and BERT-Chinese. In addition, we used four corpus datasets in our experiments. ✓ B1. Did you cite the creators of artifacts you used? Section 2 and 4. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix C. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 4. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4. ## C ✓ **Did You Run Computational Experiments?** Section 4. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix B. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. 
Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
yuan-etal-2023-bridge
Bridge the Gap Between {CV} and {NLP}! A Gradient-based Textual Adversarial Attack Framework
https://aclanthology.org/2023.findings-acl.446
Despite recent success on various tasks, deep learning techniques still perform poorly on adversarial examples with small perturbations. While optimization-based methods for adversarial attacks are well-explored in the field of computer vision, it is impractical to directly apply them in natural language processing due to the discrete nature of the text. To address the problem, we propose a unified framework to extend the existing optimization-based adversarial attack methods in the vision domain to craft textual adversarial samples. In this framework, continuously optimized perturbations are added to the embedding layer and amplified in the forward propagation process. Then the final perturbed latent representations are decoded with a masked language model head to obtain potential adversarial samples. In this paper, we instantiate our framework with an attack algorithm named Textual Projected Gradient Descent (T-PGD). We find our algorithm effective even using proxy gradient information. Therefore, we perform the more challenging transfer black-box attack and conduct comprehensive experiments to evaluate our attack algorithm with several models on three benchmark datasets. Experimental results demonstrate that our method achieves overall better performance and produces more fluent and grammatical adversarial samples compared to strong baseline methods. The code and data are available at \url{https://github.com/Phantivia/T-PGD}.
# Bridge The Gap Between Cv And Nlp! A Gradient-Based Textual Adversarial Attack Framework Lifan Yuan1∗†, Yichi Zhang1∗† , Yangyi Chen2**, Wei Wei**1,3‡ 1Cognitive Computing and Intelligent Information Processing Laboratory, Huazhong University of Science and Technology 2University of Illinois Urbana-Champaign 3Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL) {lievanyuan173, phantivia, yangyichen6666}@gmail.com weiw@hust.edu.cn ## Abstract Despite recent success on various tasks, deep learning techniques still perform poorly on adversarial examples with small perturbations. While optimization-based methods for adversarial attacks are well-explored in the field of computer vision, it is impractical to directly apply them in natural language processing due to the discrete nature of the text. To address the problem, we propose a unified framework to extend the existing optimization-based adversarial attack methods in the vision domain to craft textual adversarial samples. In this framework, continuously optimized perturbations are added to the embedding layer and amplified in the forward propagation process. Then the final perturbed latent representations are decoded with a masked language model head to obtain potential adversarial samples. In this paper, we instantiate our framework with an attack algorithm named Textual Projected Gradient Descent (**T-PGD**). We find our algorithm effective even using proxy gradient information. Therefore, we perform the more challenging transfer black-box attack and conduct comprehensive experiments to evaluate our attack algorithm with several models on three benchmark datasets. Experimental results demonstrate that our method achieves overall better performance and produces more fluent and grammatical adversarial samples compared to strong baseline methods. The code and data are available at https: //github.com/Phantivia/T-PGD. ## 1 Introduction Despite great success in real-world applications, deep neural networks (DNNs) are still vulnerable to adversarial samples, which are crafted by adding small and human-imperceptible perturbations to the inputs and can change the prediction ∗Work done during internship at CCIIP lab. †Equally contribution ‡Corresponding author [⃗ଵ, ⃗ଶ, … , ⃗ே] Continuous Perturbation [⃗ଵ, ⃗ଶ, … , ⃗ே] ![0_image_0.png](0_image_0.png) "George, hire good **real director and good writers please"** Discrete **Continuous** "George, hire great real director and good writers please" MLM Head Previous Work **Our Method** label of the victim model (Szegedy et al., 2014; Goodfellow et al., 2015). In the field of computer vision (CV), numerous adversarial attack methods have been proposed to evaluate the robustness of DNNs (Papernot et al., 2016a; Madry et al., 2019), and corresponding defense methods are also well-explored (Papernot et al., 2016c; Ross and Doshi-Velez, 2018). Adversarial attacks on images are defined as an optimization problem of maximizing the loss function of the model on specific samples, which can be approximated by gradient ascent algorithms. However, the textual adversarial attack is more challenging due to the discrete and nondifferentiable nature of the text space. In Natural Language Processing (NLP), the methods that directly employ the gradients to optimize adversarial samples are not applicable in either the white-box or black-box settings, since they cannot obtain valid discrete texts. 
For this reason, most works in NLP explore some heuristic methods to produce discrete perturbations, such as manipulating the most important words in the text using corpus knowledge or contextualized information (Ren et al., 2019; Zang et al., 2020; Li et al., 2020). Besides, there are some practices of textual adversarial attacks that employ gradients for first-order approximation to find optimal candidates in vocabulary for word substitution, but the one-off search is less effective and can violate the local linearization assumption (Cheng et al., 2019; Behjati et al., 2019; Xu and Du, 2020). To bridge this gap, we propose a general framework to adapt the existing optimization-based adversarial attack methods to NLP (See Figure 1). Essentially, we succeed in obtaining high-quality adversarial samples from the perturbed embedding space. Specifically, we employ gradients to produce perturbations on token embeddings rather than on the original text, thus transforming the problem of searching for adversarial samples in the discrete text space into searching in the continuous and differentiable embedding space. This provides the basis for applying adversarial attack methods investigated in CV to craft textual adversarial samples. In this paper, we adapt the gradient-based algorithm PGD (Madry et al., 2019) within our framework to perform textual adversarial attacks, denoted as **T-PGD**. Considering that in practical scenarios attackers may not hold the gradient information of the victim model, we explore the possibility of conducting a decision-based transfer attack. To this end, besides the true victim model, we have another model dubbed the local proxy model in the attack process. **Gradient information comes** from the local proxy model and only the decision of the victim model can be accessed. Then the perturbed latent representations should be transferred back to the discrete text. Although there have been some works exploring the feasibility of directly perturbing token embeddings (Sato et al., 2018; Cheng et al., 2019; Behjati et al., 2019), they simply use the first-order approximation of the gradient to select candidate words from vocabulary, which might break the local linearization hypothesis. However, recent work finds that the mask language modeling (MLM) head can reconstruct input sentences from their hidden states with high accuracy, even after models have been fine-tuned on specific tasks (Kao et al., 2021). Inspired by this, we employ an MLM head to decode the perturbed latent representations. With the extensive linguistic knowledge of MLM-head, the coherence and grammaticality of adversarial samples can be guaranteed. We conduct comprehensive experiments to evaluate the effectiveness of our method by performing transfer black-box adversarial attacks, where only the final decisions of victim models are accessible, against three victim models on three benchmark datasets. Experimental results demonstrate the effectiveness of our framework and T-PGD algorithm, with a higher attack success rate and more fluent and grammatical adversarial examples produced. To summarize, the main contributions of this paper are as follows: (1) We propose a general textual adversarial attack framework facilitating NLP researchers to produce adversarial texts using optimization-based methods, bridging the gap between CV and NLP in the study of adversarial attacks. 
(2) Based on the framework, we propose an effective adversarial transfer attack method called T-PGD, handling the challenge of decision-based black-box attack, which is rarely investigated in NLP. ## 2 Related Work 2.1 Adversarial Attack In Cv In the field of computer vision, adding a small amount of perturbations to input images to mislead the classifier is possible (Szegedy et al., 2014). Based on this observation, various adversarial attack methods have been explored. FGSM (Goodfellow et al., 2015) crafts adversarial samples using the gradient of the model's loss function to the input images. BIM (Kurakin et al., 2017) straightforwardly extends FGSM, iteratively applying adversarial perturbations multiple times with a smaller step size. MIM (Dong et al., 2018) exploits momentum when updating inputs, obtaining adversary samples with superior quality. PGD (Madry et al., 2019) employs uniform random noise as initialization. Both MIM and PGD are variants of BIM. Although well explored in CV, these methods are not directly transferable to NLP due to the discrete nature of the text. A recent work GBDA (Guo et al., 2021) generates adversarial samples by searching an adversarial distribution, optimizing with a gradient-based algorithm that has been previously used in image adversarial attacks (Carlini and Wagner, 2017). In this paper, we propose a general framework enabling the application of adversarial attacks in CV to text without many adaptions. ## 2.2 Adversarial Attack In Nlp Existing textual attacks can be roughly categorized into white-box and black-box attacks according to the accessibility to the victim models. White-box attack methods, also known as gradient-based attack methods, assume that the attacker has full knowledge of the victim models, including model structures and all parameters. There are few application scenarios of white-box attacks in real-world situations, so most white-box attack models are explored to reveal the weakness of victim models, including universal adversarial triggers (Wallace et al., 2019), and fast gradient sign inspired methods (Ebrahimi et al., 2018; Papernot et al., 2016b). Black-box attack models can be further divided into two different attack settings, i.e. score-based and decision-based. The first one assumes the attacker can obtain the decisions and corresponding confidence scores from victim models. Most research works on black-box attacks focus on this setting, exploring different word substitution methods and search algorithms to reduce the victim models' confidence scores (Jin et al., 2020; Ren et al., 2019; Zang et al., 2020; Li et al., 2020; Alzantot et al., 2018). The other attack setting assumes the attackers can only obtain decisions from victim models, which is more challenging and less studied. Maheshwary et al. (2021) first substitutes some words in the input sentences to flip the labels and then conducts a search based on a genetic algorithm, expecting to find the most semantic preserved adversarial samples. Chen et al. (2021) propose a learnable attack agent trained by imitation learning to perform a decision-based attack. Some works also explore sentence-level transformation, including syntax (Iyyer et al., 2018) and text style (Qi et al., 2021), to launch attacks. In this work, we consider the latter setting and show that even with less information, our decision-based attack can still be as effective as score-based ones. 
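For readers less familiar with the image-domain methods surveyed above, a minimal PyTorch-style PGD loop in the spirit of Madry et al. (2019) is sketched below; the classifier, loss, step size, and bound ε are generic placeholders rather than any model or setting used in this paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Untargeted L-infinity PGD: repeatedly step along the sign of the
    loss gradient and project back into the eps-ball around the input."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)          # assumes model returns logits
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the loss
            x_adv = torch.clamp(x_adv, x - eps, x + eps) # project onto the eps-ball
            x_adv = x_adv.clamp(0, 1)                    # keep a valid pixel range
    return x_adv.detach()
```

The discrete nature of text is precisely what prevents applying this update directly to tokens, which motivates the embedding-space formulation of the framework presented next.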
## 3 Framework

In this section, we first present an overview of our framework and then detail how continuous perturbations are added and how the text is reconstructed.

## 3.1 Overview

We have two models in the perturbation generation process: (1) a local proxy model, which provides gradient information to optimize the adversarial samples, and (2) the true victim model that the attacker attempts to deceive. Specifically, a proxy BERT model fine-tuned on the attacker's local dataset encodes each discrete text instance into continuous token embeddings and then adds a continuous perturbation to them. The perturbation is iteratively optimized using the gradient of the proxy model, according to the prediction output of the victim model. After perturbation, an MLM head decodes the perturbed latent representation to generate candidate adversarial samples. The overview of the framework is shown in Figure 2. With the help of our proposed framework, it is feasible to perform textual adversarial attacks with various gradient-based methods from CV. In this paper, we examine PGD (Madry et al., 2019) as a case study (see Section 4).

## 3.2 Notation

We denote each sample as (x ∈ X, y ∈ Y), where x denotes the input text and y denotes its corresponding label. In particular, the embedding of x is e, the hidden state is h, and the final prediction is yˆ. The local neural network is represented by a mapping function f, which consists of three components, f0, f1, and f2, such that

$$f(x)=f_{2}\left(f_{1}\left(f_{0}(x)\right)\right),\tag{1}$$

where f0 is the embedding layer, f1 denotes the hidden layers from the first layer to the m-th layer, and f2 denotes the rest of the network. The forward propagation process can then be described as

$$e=f_{0}(x),\quad h=f_{1}(e),\quad\hat{y}=f_{2}(h).\tag{2}$$

## 3.3 Latent-Space Perturbation

Previous work has shown that the latent representations of Transformer-based pre-trained language models capture rich semantic and syntactic features (Clark et al., 2019; Jawahar et al., 2019), and thus we use a local BERT model fine-tuned on our local dataset as the encoder of our framework. For each text input, we first calculate the task-specific loss in the forward pass and then back-propagate to obtain the gradients of the loss with respect to the token embeddings of the input text. These gradients provide the information for updating the perturbation added to the token embeddings, which can be obtained by solving the following optimization problem:

$$\delta=\arg\max_{\delta:\|\delta\|_{2}\leq\varepsilon}\mathcal{L}\left(f_{2}\left(f_{1}\left(f_{0}(x)+\delta\right)\right),y\right),\tag{3}$$

where δ is the perturbation and L(·) is the loss function. A closed-form solution to this optimization problem is hard to obtain directly (Goodfellow et al., 2015), so it is relaxed to an approximate solution. For example, methods in CV usually linearize the loss function with gradient information to approximate the perturbation δ (Goodfellow et al., 2015; Kurakin et al., 2017; Madry et al., 2019). In NLP, most existing gradient-based methods employ a first-order approximation to obtain substitution words (Cheng et al., 2019; Behjati et al., 2019; Xu and Du, 2020).
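To make this gradient computation concrete, the following minimal PyTorch sketch obtains the gradient of a classification loss with respect to the token embeddings of a fine-tuned BERT model via HuggingFace Transformers. It is an illustration under our own assumptions (a generic checkpoint name, an example sentence taken from Table 11, and a placeholder label), not the authors' released code.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Placeholder checkpoint: in practice, a BERT classifier fine-tuned on the local dataset.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("the movie bounces all over the map.", return_tensors="pt")
label = torch.tensor([0])  # placeholder gold label y for this example

# e = f0(x): look up the token embeddings and make them a leaf tensor
embeds = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeds.requires_grad_(True)

# Forward pass through the rest of the network with embeddings as input
outputs = model(inputs_embeds=embeds,
                attention_mask=inputs["attention_mask"],
                labels=label)

# Backward pass: gradient of the task loss w.r.t. the token embeddings,
# which drives the perturbation delta in Eq. (3)
outputs.loss.backward()
grad_wrt_embeddings = embeds.grad  # shape: (1, seq_len, hidden_size)
```

In the full framework the model is additionally split at an intermediate layer m and the perturbed hidden states are later decoded by the MLM head (Section 3.4); this sketch only shows the end-to-end gradient step corresponding to Eq. (3).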
However, such one-off, first-order approaches may result in large-step perturbations, violating the local linearization hypothesis (see Figure 3). To respect this hypothesis, we instead adjust the continuous perturbation added to the token embeddings by a small amount at each step, and iteratively update the token embeddings of the input instance with the perturbations until a meaningful adversarial sample is generated for attacking.

## 3.4 Reconstruction

Given the continuous perturbations, we need to reconstruct meaningful adversarial text from the optimized token embeddings. The MLM head has been observed to reconstruct input sentences from hidden states in middle layers with high accuracy, even after models have been fine-tuned on specific tasks (Kao et al., 2021). Specifically, the MLM head is a pre-trained H × V linear layer, where H is the size of the hidden states and V is the size of the vocabulary. Given continuous input hidden states h, it predicts token IDs by $t = hA^{T} + b$, where A and b are tuned parameters. The IDs can then be decoded into text by the tokenizer using a predefined ID-token mapping. Inspired by this, we adopt the MLM head as the decoder for two reasons: 1) the MLM head is capable of interpreting any representation embedding in the hidden space, which is crucial for searching adversarial examples continuously; 2) the MLM head has been fully trained during the pre-training stage, so it acquires linguistic knowledge together with the language model and can reconstruct sentences with contextual information taken into account.

Without loss of generality, we take the example in Figure 3 to illustrate the discrepancy between one-off attack models and our iterative attack model. One-off attack models are prone to choose token b as the substitute of token a because $\cos(\overrightarrow{at_{1}}, \overrightarrow{ab}) < \cos(\overrightarrow{at_{1}}, \overrightarrow{ac})$. However, in our framework, the one-step perturbation $\overrightarrow{at_{1}}$ does not cross the decoding boundary, and thus the decoding result remains unchanged if only a one-step perturbation is used. With iterative search, the perturbations can accumulate enough to cross the decoding boundary and reach the transition point t3, which will be decoded as the optimal solution c. Then a is replaced by c to obtain the adversarial sample, which is used to query the victim model for its decision. If this adversarial sample fails to fool the victim model, we start the next search iteration from the current perturbed token embedding, i.e., t3 in Figure 3, rather than from the embedding of the decoded token c. By exploiting such virtual embeddings as transition points, this iterative attack framework preserves accumulated gradient information and avoids breaking the local linearization assumption.

## 4 Method

## 4.1 T-PGD Algorithm

We instantiate our framework with the PGD (Madry et al., 2019) algorithm and name our attack model Textual-PGD (**T-PGD**). The algorithm flow of T-PGD is shown in Algorithm 1.

Algorithm 1 T-PGD
Require: Original input x sampled from X
Ensure: Adversary of x
1: Randomly mask one word in x
2: AdvList = []
3: BestSim = 0
4: for j ∈ [1, . . . , MaxIter] do
5:  e_0 = f_0(x)
6:  δ_0 = (1 / N_{e_0}) · Uniform(−ε, ε)
7:  for i ∈ [1, . . . , MaxStep] do
8:   e_i = e_{i−1} + δ_{i−1}
9:   h_i = f_1(e_i)
10:   Adv_i = Dec(h_i)
11:   Sim = USE(Adv_i, x)
12:   if Adv_i not in AdvList and Sim > BestSim then
13:    Append Adv_i to AdvList
14:    BestSim = Sim
15:    Query victim model with Adv_i
16:    if attack succeeds and Sim > Threshold and no antonyms then
17:     return Adv_i
18:    end if
19:   end if
20:   g_adv = ∇_{δ_{i−1}} L(f_2(h_i), y)
21:   δ_i = Proj_{‖δ‖_F ≤ ε}(δ_{i−1} + α · g_adv / ‖g_adv‖_F)
22:  end for
23: end for

To solve the optimization problem in Eq. (3), we iteratively search for the optimal solution by adding gradient-based perturbations to the token embeddings. For each sample, we first pre-define a maximum number of search iterations to avoid infinite loops. In each iteration, we first map the input x to its token embeddings and initialize the perturbation by sampling noise from a uniform distribution. In the i-th step, we obtain new embeddings e_i by adding δ_{i−1}, the perturbation generated in the previous step, to e_{i−1}. Then, e_i is forward-propagated to obtain a hidden representation h_i = f_1(e_i). Next, the perturbed hidden states are decoded to reconstruct the crafted adversarial sample, Adv_i = Dec(h_i), where Adv_i denotes the adversarial sample obtained in this step. We then compute the semantic similarity Sim between Adv_i and the input x using the Universal Sentence Encoder (USE) score (Cer et al., 2018). We query the victim model only when Adv_i satisfies the following conditions: (1) it differs from all potential adversarial samples that have been queried before; (2) it is more similar to the original sentence than previous potential adversarial samples. If the attack succeeds and Sim is higher than a hyperparameter Threshold, then Adv_i is taken as the final adversarial sample for the original input. Otherwise, h_i is forwarded to obtain the prediction of the local model for the input x. We then compute the loss between the predicted label and the gold label y, calculate the gradient with respect to the perturbation δ_{i−1}, and update the perturbation for the next step with the following formulas:

$$g_{adv}=\nabla_{\delta_{i-1}}\mathcal{L}\left(f_{2}\left(h_{i}\right),y\right)\tag{4}$$
$$\delta_{i}=\mathrm{Proj}_{\|\delta\|_{F}\leq\varepsilon}\left(\delta_{i-1}+\alpha\,\frac{g_{adv}}{\|g_{adv}\|_{F}}\right),$$

where g_adv is the gradient of the loss with respect to the continuous perturbation δ_{i−1}, α is the step size, and i denotes the current iteration step. Proj(·) performs a re-initialization when δ goes beyond the ε-neighborhood of the original embedding.

## 4.2 Heuristic Strategies

**Random Masking for Diversity.** To enhance the diversity of adversarial samples, we randomly mask one token in each input sentence so that the search starts from a random position and covers a broader scope. Specifically, we tokenize x into a list of tokens, x_token = [x_0, ..., x_i, ..., x_n]. We then select the i-th token uniformly at random and replace it with the special token [MASK]. The MLM-head-based decoder predicts the masked word according to its context, which diversifies the generated adversarial samples while keeping them semantically consistent. These processed sentences are then embedded into continuous token embeddings as described above.

**Input Reconstruction Loss.**
Intuitively, the quality of generated adversarial samples is largely affected by the reconstruction accuracy of the MLM-head-based decoder. If failing to recover the original sentence even with no perturbations added, its capacity to generate fluent adversarial samples from perturbed hidden states might be limited. Therefore, the MLM-head-based decoder should be constrained with external constraints to ensure reconstruction accuracy, thus guaranteeing the quality of generated adversarial samples. Note that the MLM-head has been pre-trained to precisely fill the masked word, which is also fitted to our task. Hence, to preserve the reconstruction performance of the MLM-head in optimization, we add the MLM loss as a regularization term to the loss function. Specifically, the loss function used in Eq. 4 consists of two components: L(f(x), y) = L1(f(x), y) + βL2(f(x), y), (5) where L1 (f(x), y) is the original loss of the local model on specific tasks (e.g. cross-entropy loss in sentiment classification), L2 (f(x), y) is the CE loss of the input reconstruction task, and β is a weighting constant. Considering that we aim to reduce the reconstruction loss L2 while increasing L(f(x), y) along the gradient direction, β should be negative. Taking two losses into account jointly, we adjust the perturbation searching target to successfully fool the victim models with fewer modifications. Selection for Layer Index m. The layer index m is dataset-specific but victim-agnostic. This is because there is a trade-off between ASR and USE when decoding different layers (layer index ↑, USE ↑, ASR ↓ ). Therefore, we determine the m by tuning the USE score on a sampled dataset. In practice, we sample 100 examples and adopt BERT as the victim to conduct pilot experiments. We compute the USE scores of decoding different layers. We then set a USE threshold t = 0.8 and disregard layers which leads to a USE score lower than t. Finally, we find the lowest USE among the rest of the layers and set m as the index of the corresponding layer. We set h = 10,11, and 7 for SST-2, MNLI, and AG, respectively. Antonym Filtering. Li et al. (2019) reports that semantically opposite words locate closely in their representation embeddings since antonyms usually appear in similar contexts. Therefore, we filter antonyms of original words using WordNet (Fellbaum, 2010) to prevent invalid adversarial samples. ## 5 Experiments We conduct comprehensive experiments to evaluate our general framework and T-PGD algorithm on the task of sentiment analysis, natural language inference, and news classification. We consider both automatic and human evaluations to analyze our method in terms of attack performance, semantic consistency, and grammaticality. ## 5.1 Datasets And Victim Models For sentiment analysis, we choose SST-2 (Socher et al., 2013), a binary sentiment classification benchmark dataset. For natural language inference, we choose the mismatched MNLI (Williams et al., 2018) dataset. For news classification, we choose AG's News (Zhang et al., 2015) multi-classification datasets with four categories: World, Sports, Business, and Science/Technology. We randomly sample 1,000 samples that models can classify correctly from the test set and perform adversarial attacks on those samples. For each dataset, we evaluate T-PGD by attacking BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2020) and XLNet (Yang et al., 2019) with a local fine-tuned BERT model to generate potential adversarial samples. 
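Before turning to the experimental details, the antonym-filtering heuristic of Section 4.2 can be illustrated with NLTK's WordNet interface. The helper below is our own sketch (the function name is not from the paper) and assumes the WordNet corpus has been downloaded via nltk.download('wordnet').

```python
from nltk.corpus import wordnet

def is_antonym(original_word: str, candidate_word: str) -> bool:
    """Return True if candidate_word appears among the WordNet antonyms of original_word."""
    antonyms = set()
    for synset in wordnet.synsets(original_word):
        for lemma in synset.lemmas():
            for ant in lemma.antonyms():
                antonyms.add(ant.name().lower())
    return candidate_word.lower() in antonyms

# e.g. reject a substitution such as "good" -> "bad"
print(is_antonym("good", "bad"))  # expected: True
```

A substitution whose candidate word is flagged by such a check would be discarded before querying the victim model, preventing invalid adversarial samples with flipped semantics.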
Details of datasets and the original accuracy of victim models are listed in Table 1. ## 5.2 Experimental Setting Baseline Methods. We select four strong scorebased attacks as baselines: (1) PWWS (Ren et al., 2019); (2) Textfooler (Jin et al., 2020); (3) PSO (Zang et al., 2020); (4) BERT-Attack (Li et al., 2020). Note that all of them require the confidence scores of victim models, while our model only assumes the decisions are available, which is more challenging. We also make a comparison with the decision-based GBDA (Guo et al., 2021). Evaluation Metrics. We evaluate our method considering the attack success rate and adversarial sample quality. (1) Attack Success Rate (ASR) is the proportion of adversarial samples that successfully mislead victim models' predictions. (2) Dataset #Class Train Test Avg Len BERT Acc RoBERTa Acc ALBERT Acc XLNET Acc SST-2 2 7K 1.8K 16.5 89.9 94.2 92.8 94.38 MNLI 3 433K 10K 31.7 82.8 83.6 82.3 87.06 AG's News 4 30K 1.9K 39.3 91.2 94.7 94.2 98.96 Table 1: Detailed information of datasets and original accuracy of victim models. Dataset Model BERT RoBERTa ALBERT XLNet ASR% USE ∆I ∆PPL ASR% USE ∆I ∆PPL ASR% USE ∆I ∆PPL ASR% USE ∆I ∆PPL PWWS 75.12 0.83 0.29 533.86 77.03 0.82 0.41 837.7 72.00 0.82 0.40 531.85 77.26 0.83 5.18 744.47 Textfooler 85.36 0.81 0.33 480.14 87.28 0.82 0.32 924.09 72.68 0.79 0.25 706.83 89.17 0.82 0.28 540.88 PSO 85.60 0.75 **0.10** 501.12 85.50 0.74 **0.09** 479.27 91.49 0.77 **0.14** 397.77 87.02 0.76 **0.10** 498.94 BERT-Attack 90.36 0.81 0.51 378.79 93.53 0.88 0.45 387.95 92.43 0.79 0.81 348.37 97.26 0.84 0.55 383.90 GBDA 57.19 0.64 0.42 **186.21** 58.05 0.64 0.22 **27.45** 54.31 0.64 0.47 **153.94** 56.56 0.64 0.22 **28.34** TPGD **97.00 0.92** 0.62 343.65 **94.75** 0.89 0.63 302.70 **93.59 0.90** 0.69 291.00 **97.29 0.91** 0.65 334.55 PWWS 75.12 0.83 0.34 516.95 71.65 0.84 0.3 715.42 45.88 0.77 4.17 744.49 75.10 0.83 0.34 316.95 Textfooler 72.34 0.83 0.31 780.8 77.27 0.87 0.3 640.21 82.47 0.81 0.31 854.73 84.70 0.82 0.31 1781.96 PSO 75.85 0.8 0.11 481.43 76.08 0.80 0.11 411.12 89.41 0.79 0.22 424.48 75.80 0.80 0.11 381.43 BERT-Attack 87.68 0.87 0.55 484.27 91.26 0.89 0.23 604.22 89.65 0.89 0.25 456.31 82.10 0.79 0.55 10956.63 GBDA 61.28 0.67 0.08 **265.38** 59.31 0.67 0.12 316.18 62.65 0.67 0.10 288.37 59.70 0.67 0.10 **250.75** TPGD **93.96 0.92 -0.95** 296.82 94.55 0.91 -0.97 261.62 94.65 0.93 -0.98 259.57 **93.63 0.90 -0.33** 504.34 | SST-2 MNLI | |--------------| | AG's News | AG's News PWWS 65.46 0.84 0.65 394.28 54.70 0.84 0.82 491.48 48.53 0.84 4.71 476.81 61.00 0.82 0.78 474.31 Textfooler 88.71 0.81 0.61 454.13 78.25 0.82 0.59 372.9 73.21 0.84 1.32 367.66 84.90 0.80 0.55 491.87 PSO 66.22 0.79 0.25 539.25 64.63 0.79 0.29 508.76 76.37 0.84 0.15 282.73 61.30 0.78 0.33 565.82 BERT-Attack 81.25 0.84 0.48 431.47 82.58 0.85 0.07 307.74 91.28 0.81 2.52 289.52 91.50 0.86 0.46 240.63 GBDA 77.66 0.69 -0.16 **85.69** 68.97 0.69 -0.59 **96.95** 66.67 0.73 0.20 **54.91** 71.16 0.67 **-0.39 109.49** TPGD **94.47** 0.75 **-0.05** 625.08 **99.30 0.87 -1.42** 285.12 **99.24 0.87 -1.14** 260.64 **94.05 0.89** -0.10 277.17 Quality of adversarial samples is evaluated by two automatic metrics and human evaluation, including their semantic consistency, grammaticality, and fluency. 
Specifically, we use Universal Sentence Encoder (Cer et al., 2018) to compute the semantic similarity between the original text and the corresponding adversarial sample, Language-Tool1 to calculate the increase of grammar errors in texts after being perturbed, and GPT-2 (Radford et al., 2019) to compute the increase of perplexity to measure fluency. We also conduct a human evaluation to measure the validity and quality of adversarial samples. ## 5.3 Experimental Results The results of automatic evaluation metrics are listed in Table 2. Attack Performance. T-PGD consistently outperforms the strong score-based attack methods considering the attack success rate. We attribute the success of our attack method to the more effective searching process following the guidance of the gradient information, which is verified in the ablation study (Section 6). $${\tt n}/{\tt j x m o r r i s l}\,{\tt l}\,{\tt2}$$ Adversarial Sample Quality. We observe that the quality of the adversarial samples generated by T-PGD increases with the text length. Our adversarial samples yield overall higher USE scores than baseline models, indicating that our method can manipulate adversarial samples more precisely with explicit gradient information. And although the grammatical performance of T-PGD is not the best on SST-2, which mostly contains shorter text (See Table 1), MNLI and AG's News T-PGD produce the fewest grammatical errors and the lowest perplexity, since the embedding space of longer text is broader and has a better optimal solution. Finally, we attribute the overall high quality of our adversarial samples to the introduction of reconstruction loss, which is demonstrated in Section 6. ## 5.4 Human Evaluations To further study the quality and validity of adversarial samples, we randomly selected 100 original SST-2 sentences and 100 adversarial samples from the SOTA baseline BERT-Attack and T-PGD respectively for human evaluation. Following (Li et al., 2020), we shuffle the 300 samples and ask 3 independent human judges to evaluate the quality (300 samples per person). For semantic consistency evaluation, we ask humans to predict the labels of mixed texts. For grammar and fluency, | Source | Accuracy | Grammar & Fluency | |-------------|------------|---------------------| | Original | 0.92 | 4.63 | | BERT-Attack | 0.48 | 3.41 | | T-PGD | 0.68 | 3.52 | human judges score from 1 to 5 on the above examples. All annotators have no knowledge about the source of the text, and all their evaluation results are averaged (shown in Table 3). Semantic Consistency. Since human judges have high accuracy on the original text, the prediction results on texts can be regarded as ground truth labels. Therefore, human accuracy can be a criterion for semantic consistency between original sentences and adversarial ones. From the results, human judges achieve 0.68 accuracies on adversarial samples crafted by T-PGD, significantly higher than the baseline method. This result verifies that the adversarial samples crafted by T-PGD have a better semantic consistency. Grammar and Fluency. We can also conclude from Table 3 that adversarial samples crafted by T-PGD have better quality compared to the baseline method considering the grammar and fluency, evaluated by human annotators. However, both BERT-Attack and T-PGD suffer a decline in grammatical correctness and fluency of adversarial text, leaving room for improvement in future research. ## 6 Further Analysis Importance of Gradient Information. 
T-PGD employs the gradient of the proxy local BERT model to approximate the perturbations. To verify the effectiveness of the gradient information, we conduct an ablation experiment on SST-2 by adding only random perturbations in the embedding space without exploiting the gradient information. In detail, we generate a Gaussian noise with the same mean and variance as our gradient-based perturbations. The results in Table 4 shows that without exploiting the direction of the gradient, the search in embedding space may deviate from the vicinity where the optimal and original points are located, reflected by the low ASR and USE score respectively. Importance of Reconstruction Task. We show the importance of adding a reconstruction loss (L2 in Eq.( 5)) for generating more accurate reconstruc- Model T-PGD Random ASR USE ASR USE BERT 97.00 0.92 47.48 0.79 RoBERTa 94.75 0.89 56.59 0.79 ALBERT 93.59 0.90 51.36 0.79 XLNET 97.29 0.91 49.94 0.84 Table 4: Ablation results of gradient information on SST-2. *Random* corresponds to adding random perturbations to the embeddings. VictimT-PGD β=0 ASR USE ∆I PPL ASR USE ∆I PPL BERT 97.00 0.92 0.62 343.65 100 0.79 1.45 875.64 RoBERTa 94.75 0.89 0.63 302.70 100 0.84 1.36 466.56 ALBERT 93.59 0.90 0.69 291.00 100 0.83 1.50 693.39 XLNET 97.29 0.91 0.65 334.55 99.42 0.83 1.24 623.23 Table 5: Ablation results on the reconstruction loss. β=0 denotes the setting without the reconstruction loss. tions. We conduct an ablation study on SST-2. The results are shown in Table 5. On all three victim models, the attack performances (ASR) improve significantly (up to 100) while the quality of adversarial samples deteriorates, with USE score decreasing and grammar errors and perplexity increasing. This validates our claim that without reconstruction loss, the adversarial samples attempt to change the predictions of the model, ignoring whether the semantics is preserved and the linguistic quality is guaranteed. We further tune β to study the trend of ASR and USE score. Results on BERT are shown in Figure 4. We observe that as the absolute value of β increases, at the early stage ASR declines while USE increases, suggesting that at first the effectiveness is sacrificed for sample quality; at the later stage ASR continues to decline and so does the USE, showing that the reconstruction loss should not be over-weighted either. ![7_image_0.png](7_image_0.png) Efficiency and Imperceptibility. Despite TPGD presenting impressive effectiveness in Table 2, it is also important to figure out if it is obtained by sacrificing efficiency and imperceptibility. Therefore, we examine the query number and perturbation rate by attacking XLNET on SST-2. Re- ![8_image_0.png](8_image_0.png) sults are shown in Table 6. We observe that TPGD has the lowest perturbation rate, but the query number is relatively high. Hence, we conduct a more detailed experiment to set different MaxStep to track the trend of ASR, USE, and query number. As shown in Figure 5, we can see that by fixing MaxStep to 500, TPGD can still perform a strong attack (ASR=*89.27*) with a low query budget (Query=*89.91*). In conclusion, despite we require a relatively high query number to achieve the reported result, we can resort to an efficient version of TPGD which still achieves very competitive ASR. Attacker ASR USE Query Pert. 
(%) PWWS 77.26 0.83 147.11 20.21 Textfooler 89.17 0.82 97.14 20.16 PSO 87.02 0.76 5113.83 15.96 BERT-Attack 97.26 0.84 **66.82** 23.83 GBDA 56.56 0.64 102.53 44.98 T-PGD **97.29 0.91** 211.20 **14.84** Transferability Across Models. We investigate the transferability of adversarial examples. We sample 1,000 samples from SST-2 and craft adversarial samples by T-PGD and baseline methods by attacking BERT. Then we test the attack success rate of these adversarial samples on RoBERTa to evaluate the transferability of adversarial samples. As seen in Table 7, adversarial samples crafted by T-PGD achieve the best transferability performance. Transfer ASR 28.21 18.00 44.73 11.02 **45.29** Method PWWS Textfooler PSO BERT-Attack TPGD Table 7: The ASR on SST-2 of attacking RoBERTa using adversarial samples crafted on attacking BERT. Transferability Across Training Datasets. We consider a more practical setting in which the attacker does not have the same downstream training dataset as the victim, i.e. the local proxy model is trained on a different dataset from the victim model. To this end, we train a local proxy BERT model on another sentiment analysis dataset, IMDB or Amazon, and attack the victim model on SST-2. We compared the results with attacking with the local proxy model trained on the same dataset as the true victim model in Table 8. We can see that T-PGD can also achieve great attack performance in these practical circumstances, although slightly worse than training on the same dataset. | Victim | BERT-SST-2 | | | | |----------|--------------|------|------|--------| | Dataset | ASR | USE | ∆I | ∆PPL | | SST-2 | 97.00 | 0.92 | 0.62 | 343.65 | | IMDB | 93.30 | 0.90 | 0.70 | 204.18 | | Amazon | 96.40 | 0.91 | 1.00 | 388.93 | ## 7 Conclusion And Future Work In this paper, we propose a general framework to facilitate generating discrete adversarial texts using optimization-based methods. In our framework, the problem of searching textual adversarial samples in discrete text space is transformed into the continuous embedding space, where the perturbation can be optimized by gradient information, as explored in CV. The perturbations in embeddings will be amplified in the forward propagation process, then decoded by an MLM head from the latent representations. We instantiate our framework with T-PGD, where the gradient comes from the local proxy model instead of the true victim model, i.e. T-PGD performs a decision-based black-box attack. Experimental results show the superiority of our method in terms of attack performance and adversarial sample quality. In the future, we will adopt other methods in CV with our framework. Besides, we find that our framework can serve as a general optimization framework for discrete texts, and thus has the potential to provide solutions to other tasks like text generation. We will further explore this direction. ## Limitations In experiments we only take PLMs into account because of their prevalence, hence the transferability to non-pretrained models is still unknown. However, due to the generality of PLMs, this can be a minor point in practical scenarios. Moreover, although we successfully transfer adversarial attack methods in CV to NLP using a unified framework, we only instantiate the framework with the PGD attack as an example. It would be interesting to transfer more attack methods in CV and conduct a comprehensive analysis of what methods can benefit NLP, aiming to have a deeper understanding of PLMs. 
## Ethical Consideration In this section, we discuss the potential broader impact and ethical considerations of our paper. Intended Use. In this paper, we design a general framework to adapt existing gradient-based methods in CV to NLP, and further, propose a decisionbased textual attack method with impressive performance. Our motivations are twofold. First, we attempt to introduce adversarial attack methods of CV to NLP, since image attack methods have been well-explored and proved to be effective, therefore helping these two fields better share research resources hence accelerating the research process on both sides. Second, we hope to find insights into the interpretability and robustness of current blackbox DNNs from our study. Potential Risk. There is a possibility that our attack methods may be used maliciously to launch adversarial attacks against off-the-shelf commercial systems. However, studies on adversarial attacks are still necessary since it is important for the research community to understand these powerful attack models before defending against these attacks. Energy Saving. We will public the settings of hyper-parameters of our method, to prevent people from conducting unnecessary tuning and help researchers quickly reproduce our results. We will also release the checkpoints including all victim models to avoid repeated energy costs. ## Acknowledgement This work was supported in part by the National Natural Science Foundation of China under Grant No. 62276110, in part by CCF-AFSG Research Fund under Grant No.RF20210005, and in part by the fund of Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL). Thanks to Naixi Chen from SRU for providing hardware maintainance support for this work. The authors would also like to thank the anonymous reviewers for their comments on improving the quality of this paper. ## References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics. Melika Behjati, Seyed-Mohsen Moosavi-Dezfooli, Mahdieh Soleymani Baghshah, and Pascal Frossard. 2019. Universal adversarial attacks on text classifiers. In *ICASSP 2019 - 2019 IEEE International* Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7345–7349. Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium. Association for Computational Linguistics. Yangyi Chen, Jin Su, and Wei Wei. 2021. Multigranularity textual adversarial attack with behavior cloning. *arXiv preprint arXiv:2109.04367*. Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4324–4333, Florence, Italy. Association for Computational Linguistics. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. 
Manning. 2019. What does BERT look at? an analysis of BERT's attention. In *Proceedings of the 2019 ACL Workshop BlackboxNLP:* Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. 2018. Boosting adversarial attacks with momentum. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In *Proceedings of the* 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31–36, Melbourne, Australia. Association for Computational Linguistics. Christiane Fellbaum. 2010. *WordNet*, pages 231–243. Springer Netherlands, Dordrecht. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, and Douwe Kiela. 2021. Gradient-based adversarial attacks against text transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5747–5757, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In *Proceedings of the 2018 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875–1885, New Orleans, Louisiana. Association for Computational Linguistics. Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651–3657, Florence, Italy. Association for Computational Linguistics. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. Wei-Tsung Kao, Tsung-Han Wu, Po-Han Chi, ChunCheng Hsieh, and Hung-Yi Lee. 2021. Bert's output layer recognizes all hidden layers? some intriguing phenomena and a simple way to boost bert. Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2017. Adversarial examples in the physical world. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. Proceedings 2019 Network and Distributed System Security Symposium. Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversarial attack against BERT using BERT. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 6193–6202, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2019. 
Towards deep learning models resistant to adversarial attacks. Rishabh Maheshwary, Saket Maheshwary, and Vikram Pudi. 2021. Generating natural language attacks in a hard label black box setting. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016a. The limitations of deep learning in adversarial settings. In *2016 IEEE European Symposium on* Security and Privacy (EuroS P), pages 372–387. Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. 2016b. Crafting adversarial input sequences for recurrent neural networks. In *MILCOM 2016 - 2016 IEEE Military Communications Conference*, pages 49–54. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016c. Distillation as a defense to adversarial perturbations against deep neural networks. In *2016 IEEE Symposium on Security and Privacy (SP)*, pages 582–597. Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, and Maosong Sun. 2021. Mind the style of text! adversarial and backdoor attacks based on text style transfer. *arXiv preprint* arXiv:2110.07139. Alec Radford, Jeffrey Wu, and Rewon Child. 2019. Rewon child, david luan, dario amodei, and ilya sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085–1097, Florence, Italy. Association for Computational Linguistics. Andrew Ross and Finale Doshi-Velez. 2018. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. *Proceedings of the AAAI Conference on Artificial Intelligence*, 32(1). Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable adversarial perturbation in input embedding space for text. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Jincheng Xu and Qingfeng Du. 2020. Texttricker: Loss-based and gradient-based adversarial attacks on text classification models. *Engineering Applications of Artificial Intelligence*, 92:103641. 
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in neural information processing systems*, 32. Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066–6080, Online. Association for Computational Linguistics. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc. ## A Adversarial Training B Ablation Study Of Random Masking C Case Study | Ori Acc | 89.90% | | | | | |-----------|----------|------------|-------|-------------|-------| | Adv.T Acc | 90.48% | | | | | | Method | PWWS | Textfooler | PSO | BERT-Attack | T-PGD | | Ori ASR | 69.94 | 86.38 | 82.03 | 86.55 | 92.22 | | Adv.T ASR | 66.78 | 87.41 | 73.34 | 84.84 | 83.78 | | Model | w | w/o | | | |---------|-------|-------|-------|------| | ASR | USE | ASR | USE | | | BERT | 97.00 | 0.92 | 92.20 | 0.91 | We explore to enhance models' robustness against adversarial attacks through adversarial training on SST-2 with BERT. Specifically, we first generate adversarial samples using the original training dataset. Then we fine-tune the BERT model using the training dataset augmented with generated adversarial samples. We evaluate the model's original accuracy on the test set and robustness against different adversarial attack methods. As seen in Table 9, the model shows generally better robustness through adversarial training. Besides, the accuracy on the test set is also improved from 89.90 to 90.48, which is different from previous textual adversarial attacks where accuracy is sacrificed for robustness (Ren et al., 2019; Zang et al., 2020). Table 9: Results of adversarial training. *Adv.T* denotes the adversarial training paradigm. We conduct an ablation study of random masking. Our intuition is that random masking can broaden the searching scope of adversarial examples, and thus lead to diverse adversarial samples and higher attack success rate. To prove this, we attack BERT on SST-2, with and without our random masking strategy. Result are shown in Table 10. Table 10: Ablation results of random masking on SST2 against BERT. In Table 11, we present some cases of our adversarial samples which successfully fooled XLNET. | Dataset | Type | Text | |--------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------| | Ori | the movie bounces all over the map. | | | Adv | the movie bounce & all over & map. | | | Ori | looks like a high school film project completed the day before it was due. | | | Adv | looks like a unique school film project completed the day before it was due. | | | SST-2 | Ori | PREMISE: and he said , what 's going on ? HYPOTHESIS: he wanted to know what was going on . | | Adv | PREMISE: and he said , what 's going on ? HYPOTHESIS: he wanted to know what was going on ¡ | | | Ori | PREMISE: they seem to have him on a primary radar . 
HYPOTHESIS: they have got him on a primary radar . | | | Adv | PREMISE: they seem to have him on a primary radar . HYPOTHESIS: they finally got him on a primary radar. | | | MNLI | Ori | nortel lowers expectations nortel said it expects revenue for the third quarter to fall short of expectations . | | Adv | nortel lowers expectations nortel said , expects income for the third quarter to fall short of expectations . | | | Ori | itunes now selling band aid song ipod owners can download the band aid single after apple reaches agreement with the charity . | | | Adv | the now selling band aid song dar norman can reach the band aid single after apple reaches agreement with the charity. | | | AG's News | | | | Table 11: Cases of adversarial examples generated by T-PGD. The differences between original and adversarial | | | Table 11: Cases of adversarial examples generated by T-PGD. The differences between original and adversarial texts are in **bold**. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? The section following Sec.7 ✓ A2. Did you discuss any potential risks of your work? The section after Limitation ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 5, 6 C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4.2, 5 ✗ C3. 
Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? just a single run ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4.2 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 5.4 ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
zhang-etal-2023-dub
{DUB}: Discrete Unit Back-translation for Speech Translation
https://aclanthology.org/2023.findings-acl.447
How can speech-to-text translation (ST) perform as well as machine translation (MT)? The key point is to bridge the modality gap between speech and text so that useful MT techniques can be applied to ST.Recently, the approach of representing speech with unsupervised discrete units yields a new way to ease the modality problem. This motivates us to propose Discrete Unit Back-translation(DUB) to answer two questions (1) Is it better to represent speech with discrete units than with continuous features in direct ST? (2) How much benefit can useful MT techniques bring to ST? With DUB, the back-translation technique can successfully be applied on direct ST and obtains an average boost of 5.5 BLEU on MuST-C En-De/Fr/Es. In the low-resource language scenario, our method achieves comparable performance to existing methods that rely on large-scale external data. Code and models are available at \url{https://anonymous.4open.science/r/DUB/}.
# Dub: Discrete Unit Back-Translation For Speech Translation Dong Zhang1,2†, Rong Ye2, Tom Ko2, Mingxuan Wang2∗**, Yaqian Zhou**1∗ 1 School of Computer Science, Fudan University 2 ByteDance dongzhang22@m.fudan.edu.cn {yerong, tom.ko, wangmingxuan.89}@bytedance.com zhouyaqian@fudan.edu.cn ## Abstract How can speech-to-text translation (ST) perform as well as machine translation (MT)? The key point is to bridge the modality gap between speech and text so that useful MT techniques can be applied to ST. Recently, the approach of representing speech with unsupervised discrete units yields a new way to ease the modality problem. This motivates us to propose Discrete Unit Back-translation (DUB) to answer two questions: (1) Is it better to represent speech with discrete units than with continuous features in direct ST? (2) How much benefit can useful MT techniques bring to ST? With DUB, the back-translation technique can successfully be applied on direct ST and obtains an average boost of 5.5 BLEU on MuST-C En-De/Fr/Es. In the low-resource language scenario, our method achieves comparable performance to existing methods that rely on large-scale external data. Code and models are available at https://github.com/0nutation/ DUB. ## 1 Introduction Speech-to-text translation (ST) converts the spoken source language into the written target language, which is a closely related task to machine translation (MT). In recent years, direct ST that does not rely on intermediate transcription has received considerable attention due to its potential applications in unwritten language scenarios and various domains (Bérard et al., 2016; Sung et al., 2019; Han et al., 2021; Papi et al., 2021; Fang et al., 2022; Ye et al., 2022; Cheng et al., 2023). One of the major challenges faced by ST is data scarcity, which is similar to the low-resource scenarios encountered in MT. Intuitively, techniques developed for low-resource MT (Imamura et al., 2018; Xia et al., 2019; Chen et al., 2020; Liu et al., 2020; Tang †Work was done while Dong Zhang was a research intern at ByteDance. *Corresponding authors. et al., 2020) should be utilized to improve ST performance. However, these techniques are hard to be transferred to ST due to the modality gap between speech and text, where ST takes continuous speech as input and MT takes discrete tokens as input. Generally speaking, if there is a way to efficiently remove the modality gap, a large number of useful NLP techniques can be applied and facilitate the improvement of ST. Recently, representing speech with unsupervised discrete units has become popular and successful in the field of speech processing (Baevski et al., 2019, 2020; Hsu et al., 2021; Lakhotia et al., 2021). Instead of losing relevant information, discretizing continuous speech has been found to have the advantage of filtering out extraneous signals (Sicherman and Adi, 2023; Lakhotia et al., 2021), leading to significant improvements in the speech tasks, such as automatic speech recognition (Meng et al., 2022), text-to-speech (Dunbar et al., 2019), and speech-to-speech translation (Zhang et al., 2021; Lee et al., 2022; Inaguma et al., 2022). Based on this observation, we are motivated to explore the answers to the following two questions: (1) Is it better to represent speech with discrete units and use them as model input than with continuous features for direct ST? (2) By narrowing the modality gap with discrete speech units, how much benefit can useful MT techniques bring to direct ST? 
In this paper, we propose Discrete Unit Backtranslation (DUB), which migrates the useful backtranslation technique from MT to ST by discretizing the speech signals into unit sequences. In our proposed method, we first convert speech into discrete units using the clustering indices on HuBERT (Hsu et al., 2021) representations. To complete the translation task, we feed the discrete units into the Unit-to-Text Translation (U2TT) model. For the back-translation training strategy, DUB employs a text-to-unit translation model that learns to predict the source discrete units from the target text. By leveraging the additional easily accessible text in the target language, we utilize the synthetic parallel data generated by the text-to-unit translation model in conjunction with the original parallel data to update the final unit-to-text model. Our contributions include the following. - We design a novel unit-text translation (U2TT) framework for direct ST by discretizing the speech feature in an unsupervised manner. Our analysis shows that in such a framework, the unit retains the semantic information for translation and can be used as model input. - Based on the U2TT framework, we propose DUB, which successfully applies the backtranslation technique to direct ST. Experimental results show that DUB can further yield an average 5.5 BLEU gain over the U2TT model on the MuST-C English-to-German, French, and Spanish translation directions. - Our approach is particularly beneficial for lowresource or unwritten languages in the world because unit extraction does not require any textual supervision and only speech-translation pairs are used for training. ## 2 Related Work Speech translation Without using textual transcriptions during inference or training, translating audio directly into the target language is very meaningful for languages that do not have a written form. Bérard et al. (2016) first proposed an end-to-end encoder-decoder architecture for such direct speech-to-text translation. Later, novel models (Di Gangi et al., 2019b; Dong et al., 2021a,b; Zheng et al., 2021) and training techniques, such as multi-task learning (Indurthi et al., 2021; Tang et al., 2021; Ye et al., 2021), knowledge distillation (Liu et al., 2019; Dong et al., 2021b), and pretraining methods (Zheng et al., 2021; Zhang et al., 2022c), were developed to improve end-to-end performance. However, these training methods often rely on the use of source text or knowledge from the pre-trained models. Without using transcripts or pretraining, Zhang et al. (2022a) proposed the parameterized distance penalty to better model speech locality in the self-attention structure and provided results on ST benchmarks covering 23 languages. Back-translation Back-translation (BT) is a widely used method for improving machine translation systems by training a target-to-source model and creating synthetic parallel data from monolingual target text. This approach has been shown to be effective in both statistical (Bertoldi and Federico, 2009; Bojar and Tamchyna, 2011) and neural machine translation models (Sennrich et al., 2016; Edunov et al., 2018; Hoang et al., 2018), and is frequently used to improve translation performance in WMT competitions (Farhad et al., 2021; Wenzek et al., 2021; Adelani et al., 2022). A similar data augmentation idea through synthesizing speech from utterances can be applied to automatic speech recognition (ASR) (Tjandra et al., 2017; Hayashi et al., 2018; Ueno et al., 2021). However, applying BT in ST is not trivial. 
Zhang et al. (2022b) augmented the triplet data by TTS generation from transcription, but the experiment showed that such scaling yields minimal improvement to the final ST model. ## Discrete Speech Representation Discrete Speech representations are often studied in the work on self-supervised speech representation learning (Van Den Oord et al., 2017; Baevski et al., 2019, 2020; Hsu et al., 2021; Meng et al., 2022). For example, Van Den Oord et al. (2017) proposed Vector Quantised-Variational AutoEncoder (VQ-VAE) to map continuous signals, like speech or image, into a discrete sequence space. Hsu et al. (2021) proposed HuBERT, which learns self-supervised speech representation by extracting speech features and clustering them offline, and iteratively training the clustering indexes of features at masked locations. Although the clustered discrete representations are only a by-product of HuBERT, they are used to build the generative spoken language model (Lakhotia et al., 2021; Kharitonov et al., 2022), enhance speech representation (Chung et al., 2021; Meng et al., 2022; Chen et al., 2022; Wu et al., 2022; Zhang et al., 2022c), and model direct speech-to-speech translation (Lee et al., 2022; Inaguma et al., 2022). In the prior literature, probably the most similar task to ours is the textless speechto-speech translation (Lee et al., 2022; Nguyen et al., 2022), but the difference is that they discretized the target-side speech and convert speechto-speech generation into speech-to-discrete-unit generation, while we discretize the speech at the source side. Zhang et al. (2022c) leveraged the discrete unit as an interface to align speech and text, and proposed a unified-modal encoder-decoder pretraining model, SpeechUT. SpeechUT aims to improve speech representation via the units, while we use the units to construct a unified framework for ![2_image_0.png](2_image_0.png) ## 3 Our Approach 3.1 Problem Formulation Unlike cascade systems or existing end-to-end ST work that utilizes speech-transcription-translation triplet (s, x, y), we aim to build and train the model that translates speech directly into text in another language without using the transcription x. The training dataset is denoted as Ds,y = {(s, y)}. Also, we introduce the monolingual corpus of the target language D′y = {y′}, enhance the model via the discrete unit back-translation (DUB) method (described in Section 3.3). ## 3.2 Model Structure As illustrated in Figure 1(a), our model consists of three main components: discrete unit extractor, *unit-to-text translation model*, and text-to-unit translation model. Discrete Unit Extractor The discrete unit extractor converts continuous speech signals into a sequence of discrete units, which we use the Hiddenunit BERT (HuBERT) (Hsu et al., 2021). HuBERT is a self-supervised model learned by predicting discrete labels of masked audio segments from k-means clustering on the model's intermediate representations. It consists of a stack of 1-D convolutional layers and a Transformer encoder to encode the speech into continuous intermediate representations, and a k-means model to convert the representations into a sequence of cluster indices. We then remove the adjacent duplicate indices to obtain the discrete units sequence, denoted as u = (u1, u2, . . . , uT ), ui ∈ {0, 1*, . . . , K* − 1}, ∀1 ≤ i ≤ T, where K is the number of clusters. Note that the discrete unit extractor used **offline** during the pre-processing stage before translation, can be considered as a **feature extractor**. 
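As an illustration of the discrete unit extractor described above, the following sketch derives a unit sequence from a waveform by taking an intermediate HuBERT layer, assigning each frame to its nearest k-means centroid, and collapsing adjacent duplicates. The HuBERT checkpoint is the public base model, but the centroid file path, the layer index, and the audio file are placeholders for whatever quantizer and data are actually used; this is not the released pipeline.

```python
import numpy as np
import torch
import torchaudio
from transformers import HubertModel

# 16 kHz mono waveform, as expected by HuBERT
wav, sr = torchaudio.load("example.wav")
assert sr == 16000

hubert = HubertModel.from_pretrained("facebook/hubert-base-ls960").eval()
with torch.no_grad():
    out = hubert(wav, output_hidden_states=True)
feats = out.hidden_states[9].squeeze(0)  # frames x hidden; 9th-layer features (assumed indexing)

# Pre-trained k-means centroids, shape (K, hidden), e.g. K = 500 in the MuST-C setup
centroids = torch.from_numpy(np.load("km_centroids.npy")).float()

# Assign each frame to its nearest cluster index
dists = torch.cdist(feats, centroids)        # frames x K
units = dists.argmin(dim=-1).tolist()

# Remove adjacent duplicate indices to obtain the final unit sequence u
dedup = [u for i, u in enumerate(units) if i == 0 or u != units[i - 1]]
print(dedup[:20])
```

Because this step runs offline during pre-processing, the extractor can simply be treated as a frozen feature extractor whose outputs are cached before training.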
Unit-to-Text Translation (U2TT) Model The U2TT model θu→y performs the forward translation. It consists of a discrete unit embedding layer and a Transformer. The discrete unit embedding layer converts discrete units u into the embedding e = (e1, e2*, . . . ,* eT ). In order to retain more contextual and textual information from HuBERT, we adopt the intermediate representations of HuBERT's k-means cluster centroids as prior knowledge to initialize the unit embedding. This initialization operation is referred to as *pre-trained* embedding in the later analysis (Section 6.1). The Transformer follows the vanilla Transformer architecture (Vaswani et al., 2017), consisting of a Transformer encoder and a Transformer decoder. The encoder takes unit embedding e plus sinusoidal positional embedding as input and outputs semantic representation. The decoder generates the translation sequence y = (y1, y2*, . . . ,* y|y|) autoregressively based on the semantic representation. Text-to-Unit Translation (T2UT) Model The T2UT model θy→u has the same structure as the U2TT model, but with a randomly initialized text embedding layer. It is added to perform the text-tounit translation and to incorporate the DUB training. ## 3.3 Discrete Unit Back-Translation (Dub) Training Steps Given ST parallel dataset Ds,y = {(s, y)}, extra target-language corpus D′y = {y′}, and the discrete unit extractor E. As shown in Figure1(b), the DUB training steps are as follows. 1. Extract unit for each speech input u = E(s), and get unit-translation pairs Du,y = {(u, y)}; 2. Train the T2UT model based on Du,y with crossentropy loss as in Eq. (2); 3. For each text y′ ∈ D′y , generate corresponding synthetic units uˆ′through the BT model (generation methods will be discussed in Section 3.4). Then, add special <BT> indicator at the begining of uˆ′(Caswell et al., 2019). The synthetic unitstranslation set is denoted as D′u,y = {(uˆ′, y′)}; 4. Upsample the original training data by a rate of r and train U2TT model based on Du,y ∪ D′u,y and loss in Eq. (1). Training Objective The training objectives for U2TT and T2UT models are the negative loglikelihood losses based on the unit-translation pairs: LU2TT = −E(u,y)∈D log P (y | u, θu→y) (1) LT2UT = −E(u,y)∈Du,y log P (u | y, θy→u) (2) , where D refers to Du,y ∪ D′u,y for DUB training, and when D = Du,y, Eq. (1) is the loss function for training U2TT from scratch. ## 3.4 Generation Methods For Back-Translated Units We explore the following generation methods for producing synthetic units: beam search, sampling, and top-k sampling. We also apply a speech normalization method to remove speaker information when generating units. Beam search tries to identify the maximum a posteriori (MAP) output and generate the sentence with the largest estimated probability given an input. **Sampling** means sampling from the distribution randomly at each step, which generates diverse outputs. **Top-k sampling** is a middle ground between beam search and sampling. At each time step, we select the k most likely tokens from the output distribution, re-normalize and then sample from this restricted set. The discrete unit extractor produces various unit sequences for speech with the same content when delivered by multiple speakers (Lee et al., 2022). These variations pose a challenge for training the BT model. In order to address this issue, we adopt a **Speech Normalization** module from (Lee et al., 2022), which removes speaker information from the discrete units and produces norm units. 
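To make the sampling strategies above concrete, the snippet below sketches one decoding step of top-k sampling: the k most likely unit tokens are kept, the restricted distribution is re-normalized, and one token is drawn from it (plain sampling corresponds to k equal to the vocabulary size, while beam search instead keeps the highest-scoring hypotheses). The next-token probabilities here are random and only illustrative; in practice they come from the T2UT decoder at each step, and k = 10 matches the experimental setting. The Speech Normalization module mentioned above is described in more detail right after this sketch.

```python
# Sketch of one top-k sampling step for generating pseudo-units from the
# back-translation model. The probability vector is random for illustration.
import numpy as np

def top_k_sample(probs: np.ndarray, k: int, rng: np.random.Generator) -> int:
    top = np.argsort(probs)[-k:]          # indices of the k most likely tokens
    p = probs[top] / probs[top].sum()     # re-normalize over the restricted set
    return int(rng.choice(top, p=p))

rng = np.random.default_rng(0)
vocab_size = 500                          # number of unit types (K clusters)
probs = rng.dirichlet(np.ones(vocab_size))
next_unit = top_k_sample(probs, k=10, rng=rng)
print(next_unit)
```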
Specifically, it is an off-the-shelf HuBERT-CTC model trained on VoxPopuli (Wang et al., 2021a) that normalizes variant speech input to a single speaker to eliminate such influence (denoted as *Speech Norm*). We implement back-translation with norm units and use the resulting BT model to generate pseudo norm units. ## 4 Experiments 4.1 Datasets MuST-C MuST-C1(Di Gangi et al., 2019a), one of the most widely-used ST benchmarks, contains translations from English to 8 languages collected from TED talks. We train and validate our approach in three ST directions: English-German (En-De), English-French (En-Fr), and English-Spanish (EnEs). CoVoST-2 X-En CoVoST-2 (Wang et al., 2021b) is a multilingual ST corpus derived from the Common Voice project, which offers translations from English to 15 languages and from 21 languages to English. We conducted experiments on X-En, including high-resource languages (≥ 10 hours of speech) such as French (Fr) and German (De), and low-resource languages (≤ 2 hours of speech), like Arabic (Ar), Swedish (Sv), Japanese (Ja), etc. Without the need for transcription, the evaluation focuses on the capability of our method to generalize to the low-resource unwritten multi-languages. Monolingual text corpus Monolingual targetlanguage text corpora are introduced for backtranslation. For MuST-C we include 48M German, 79M French and 64M Spanish sentences sampled from TED1(Duh, 2018), WMT1(Bojar et al., 2016) and CCMartix1(Schwenk et al., 2019) datasets for En-De/Fr/Es respectively. For CoVoST-2 X-En experiments, we introduce 1M extra English sentences sampled from the transcriptions of Common Voice Corpus 11.01(Ardila et al., 2020). All statistics of the datasets are in Appendix A. ## 4.2 Experimental Setups Pre-processing The model accepts 16-bit 16KHz mono-channel raw waveform speech and then discretizes them into units. We denote the discrete units clusters by the numbers (*e.g.* \#1, \#2), and 1All released under CC BY NC ND 4.0 International combining with the target-language sentences, we learn the joint vocabulary via SentencePiece (Kudo and Richardson, 2018). We set the joint vocabulary size to 8000 for both MuST-C and CoVoST-2. Model Configuration For MuST-C experiments, we use the HuBERT-base2(pre-trained on Librispeech without fine-tuning) with a 500-cluster k-means quantizer based on the 9th layer representation as the *discrete unit extractor*. For CoVoST-2, we employ the mHuBERT3 with a 1000-cluster kmeans quantizer based on the 11th layer representations pre-trained on the VoxPopuli (Wang et al., 2021a) speech in English, Spanish, and French. The U2TT and T2UT models have the same model architecture, consisting of a 12-layer Transformer encoder and a 6-layer Transformer decoder, with hidden size d = 768, 16 attention heads, and 4096 FFN hidden states. Additionally, we implement the BASE and LARGE versions of the model, with hidden sizes of 768 and 1024, respectively. Both versions are performed in the main experiments, while in the analysis, we primarily investigate the BASE model. More information on model size and scalability experiments can be found in Appendix D. Evaluation We evaluate the models using casesensitive sacreBLEU4(Post, 2018) on the MuST-C tst-COM sets and CoVoST-2 test sets. See Appendix B for more details on vocabulary learning, training, and test. ## 4.3 Baseline Models We compare our method with the baselines as listed in Table 1 ∼ 3 (Appendix C for details). 
In particular, we explain the following baselines that do not involve transcriptions during training. Revisit ST (Zhang et al., 2022a) is a direct speechto-translation model with parameterized distance penalty (PDP) and CTC regularization. Its framework and training objectives are sorely different from ours. Unit-to-text Translation (U2TT) has the structure as described in Section 3.2 and is trained using only speech-translation supervision from the ST dataset from scratch, without applying DUB. As a | Method | De | Fr | Es | Avg. | |----------------------------------------------------------------------------------------------------------------|------|------|------|--------| | Methods that utilize transcriptions Fairseq ST (Wang et al., 2020) 22.7 | 32.9 | 27.2 | 27.6 | | | NeurST (Zhao et al., 2021) | 22.8 | 33.3 | 27.4 | 27.8 | | Espnet ST (Inaguma et al., 2020) | 22.9 | 32.8 | 28.0 | 27.9 | | E2E-ST-JT (Du et al., 2022) | 23.1 | 32.8 | 27.5 | 27.8 | | Speechformer (Papi et al., 2021) | 23.6 | - | 28.5 | - | | Cascaded (Inaguma et al., 2020) | 23.6 | 33.8 | 28.7 | 28.7 | | MTL (Tang et al., 2021) | 23.9 | 33.1 | 28.6 | 28.5 | | Self-training (Pino et al., 2020) | 25.2 | 34.5 | - | - | | SpeechT5 (Ao et al., 2022) | 25.2 | 35.3 | - | - | | Methods that do not involve transcriptions Revisit ST (Zhang et al., 2022a) 23.0 33.5 | 28.0 | 28.2 | | | | Transformer-ST | 18.0 | 28.5 | 24.1 | 23.5 | | U2TT (BASE) | 20.4 | 30.3 | 25.3 | 25.3 | | w/ DUB | 25.8 | 34.7 | 30.2 | 30.2 | | U2TT (LARGE) | 20.5 | 30.1 | 24.7 | 25.1 | | w/ DUB | 26.2 | 35.3 | 30.4 | 30.6 | | SoTA: use much more speech and various pre-training tasks SpeechUT (Zhang et al., 2022c) ∗ 30.1 41.4 33.6 35.0 | | | | | Table 1: BLEU Scores on MuST-C En-X tst-COM set. ∗ is the state-of-the-art system, which designed various mask-predict pre-training tasks and trained using extra 1.4k hours of speech and parallel MT data from WMT. Random sampling is the decoding strategy for DUB. baseline, comparison with this model helps to see the influence of DUB. Transformer-ST stands for training the SpeechTransformer (Dong et al., 2018) from scratch, but without ASR pre-training as in the previous work (Wang et al., 2020; Inaguma et al., 2020; Zhao et al., 2021). The training details are in Appendix C. ## 4.4 **Main Results On Speech-To-Text Translation** MuST-C As shown in Table 1, compared to the methods that do not involve the transcribed text, our method, U2TT (LARGE) with DUB, gets the best ST results by introducing extra target-language text, and DUB obtains an average boost of 5.5 BLEU compared with U2TT in the three En-X directions. Encouragingly, we find that our method achieves comparable performance to previous models that utilize transcriptions through multi-task learning or pre-training. As for the baseline, U2TT outperforms the Transformer-ST, where we believe that the discrete units still retain the semantic information of the audio feature (*e.g.* log Mel-filter bank, abbr. Fbank) for translation. As for the gap between our method and the SoTA system, we argue that SpeechUT (Zhang et al., 2022c) performed various mask-predict pre-training tasks using extra 1.4k hours of speech and parallel MT data from WMT, which is not included in our approach. Aux. 
Data Methods ASR Text Fr De Es Ca It Ru Zh **Avg.** Transformer-ST†- - 4.3 8.4 12.0 14.4 0.2 1.2 1.4 8.8 Transformer-ST + ASR pre-train† ✓ - 26.3 17.1 23.0 18.8 11.3 14.8 5.8 16.7 Cascaded ST† ✓ - 27.6 **21.0** 27.4 21.3 13.5 16.8 7.0 19.2 Revisit ST (Zhang et al., 2022a) - - 26.9 14.1 15.7 17.2 2.4 3.6 2.0 11.7 U2TT (LARGE) - - 27.4 16.7 28.1 23.1 20.0 21.9 5.9 20.5 w/ DUB - ✓1M **29.5** 19.5 **30.9 25.2 23.9 23.2** 6.1 **22.6** Table 2: Test BLEU scores on CoVoST-2 X-En language pairs with more than 10 hours of speech. Auxiliary data refers to all data at training excluding *<speech,translation>* pairs. †: Results from (Wang et al., 2021b). Random sampling is the decoding strategy for DUB. Aux. Data Ar Sv Lv Sl Ta Ja Id Cy **Avg.** Methods **ASR Text** 2h 2h 2h 2h 2h 1h 1h 2h - Transformer-ST†- - 0.3 0.2 0.1 0.3 0.3 0.3 0.4 0.3 0.3 Transformer-ST A2E†- - 0.6 0.6 0.4 1.2 0.1 0.2 0.3 2.6 0.8 Transformer-ST + ASR pre-train† ✓ - 4.3 2.7 2.5 3.0 0.3 1.5 2.5 2.7 2.4 Larger models based on large-scale multilingual speech, text or joint pre-training, involving more data XLS-R (0.4B)∗ ✓ - 8.1 5.3 3.1 5.3 0.0 2.0 3.3 3.4 3.8 Wav2seq (0.4B)∗ ✓ - **10.5** 8.8 4.8 5.9 0.0 1.9 5.0 5.7 5.3 XLS-R + mBART-50 (0.7B)♢ ✓ ✓ 3.0 **10.3** 6.0 6.6 0.2 0.6 1.4 2.5 3.8 LNA-E,D (0.7B)♠ ✓ ✓ 3.7 5.9 4.6 4.6 0.7 1.7 2.9 2.8 3.4 U2TT (LARGE) - - 7.0 8.0 6.3 6.8 0.3 1.6 6.6 2.7 4.9 w/ DUB - ✓1M 7.1 8.9 **6.9 7.9** 0.5 2.1 7.0 5.7 5.8 Table 3: Test BLEU scores on CoVoST-2 low-resource X-En language pairs with less than 2 hours of speech. †: results from (Wang et al., 2021b). ∗: results from (Wu et al., 2022)♢: results from (Babu et al., 2021). ♠: results from (Li et al., 2021). The numbers in parentheses are their parameter sizes. Random sampling is the decoding strategy for DUB. CoVoST-2 Our method performs similarly to MuST-C on the high-resource En-X (Table 2). Without considering auxiliary data or pre-training methods, adding only 1M additional English text, DUB improves by an average of 2.1 BLEU over 7 language pairs compared to U2TT, and by an average of 3.4 BLEU over the cascaded ST system. For the low-resource setting, our method can bring improvement on almost every language pair and achieve better performance than the large-scale multilingual speech or text pre-training models, like XLS-R+mBART-50 model (Babu et al., 2021), with much fewer parameters. The discrete unit extractor is unsupervised, so our method does not require transcriptions, which is particularly advantageous for unwritten ST. This experiment mimics such low-resource nature of unwritten languages in practice. The results also show that the U2TT model and the DUB training have the potential to translate low-resource unwritten languages. Table 4: MuST-C En-De tst-COM BLEU scores for different methods that utilize 10M monolingual text data in ST. Transformer-ST and U2TT are described in Section 4.3. ## 5 Analysis On The Effect Of Discrete Unit Back-Translation (Dub) 5.1 Is Dub Better Than Other Methods That Leverage Extra Raw Data? The key benefit of the DUB is to make use of a lot of monolingual text. Here, alternative techniques such as pseudo-labeling and pre-training (implemented as Cascaded BT and Bi-modal BART) are also evaluated on the MuST En-De translation, by introducing an equivalent corpus of 10 million German sentences. | Aux. 
Data | | | | |-----------------------|--------|------|------| | Method | Speech | Text | BLEU | | Transformer-ST | - | - | 18.0 | | w/ Cascaded BT | - | 10M | 20.3 | | U2TT | - | - | 20.4 | | w/ Bimodal BART | 10kh | 10M | 22.4 | | w/ DUB | - | 10M | 25.0 | | w/ Bimodal BART + DUB | 10kh | 10M | 25.4 | - **Cascaded BT** aims to build a target-to-source MT-TTS pipeline to construct pseudo-speech translation augmented data for the training. Specifically, we use transcription-translation pairs of MuST-C to train a back-translation MT model and use the released FastSpeech25(Ren et al., 2020) and a HiFi-GAN (Kong et al., 2020) vocoder for TTS generation. - **Bi-modal BART** has the same structure as U2TT, and is pre-trained by denoising large-scale corrupted discrete units and monolingual text, following the recipe of mBART (Liu et al., 2020). We combine the 10M additional text with 7M discrete units extracted from 10k hours of speech in GigaSpeech (Chen et al., 2021) to pre-train the model and fine-tune it based on MuST-C unittranslation pairs. See Appendix B for training details. As shown in Table 4 introducing equivalent raw text, **DUB is superior to the above two approaches and has a greater potential to exploit** monolingual raw text. We find that the gain from cascaded BT-synthesized speech is limited because the synthetic speech is robotic and monotonic, making it easy to overfit the model to the synthetic pairs. Although the bi-modal BART pre-training can bring about 2 BLEU improvements, it is still inferior to DUB. We attribute this to the gap between the denoising pre-training task and the downstream generation tasks, while DUB does not have such a gap. Meanwhile, we observe that combination of bi-modal BART and DUB can bring further performance improvements, which indicates that they are complementary to each other. ## 5.2 The Better The Pseudo-Unit, The More Effective The Dub Method? In Section 3.3, we presented four generation methods to create synthetic pseudo-units based on the BT model, namely beam search, sampling, top-k sampling, and speech normalization. In the experiments, we set a beam size of 5 for the beam search, k=10 for top-k sampling, and use an offthe-shelf speech normalizer6from Lee et al. (2022) for Speech Norm. Does the forward model gain more from synthesized pairs when the synthesized units are of higher quality? We calculate the Unit Error Rate (UER) on the MuST-C validation set to assess the synthesis quality. A lower UER indicates that the generated units are closer to the directly extracted units, *i.e.* of higher quality. We systematically vary the backtranslated data from 1M to 10M, and present the BLEU scores and UERs of the generation methods in Table 5 and Figure 2. The *Speech Norm* module produces the highest quality synthesized units, while the sampling-based methods have lower quality. Interestingly, the sampling method with the lowest synthesis quality has the most significant improvement over the forward model. We conjecture that **the richness and irregularity of the synthesized data can better improve** the forward ST model, while regular pseudounits, *e.g.* generated by MAP-based beam search, are more predictable and not conducive to performance improvement. This is consistent with previous findings of BT techniques in machine translation (Edunov et al., 2018). In addition, Speech Norm**, which normalizes speech to a** single speaker, is not necessary for our DUB method. 
Although such an operation makes the ST model easier to learn and the UER smaller, it compromises the diversity of the synthetic data, which is also not helpful for performance improvement. The model generalization ability weakens when these single-speaker synthesis units increase. | UER(%) | ∆BLEU | | |-----------------|---------|-----| | Speech Norm | 73.0 | 0.7 | | Beam Search | 83.0 | 1.7 | | Top-10 Sampling | 89.0 | 3.6 | | Sampling | 92.0 | 4.6 | Table 5: The quality of generated pseudo-units using different generation methods and their BLEU increases from 10M extra texts, evaluated by **Unit Error Rate** (UER) on MuST-C En-De Dev, the smaller the better. ## 6 Why Does Dub Work? - Analysis On The Property Of Discrete Unit Why does DUB work? To answer this question, we examine the properties of the discretized speech unit. Specifically, (1) do the units make sense to replace the original speech input in the forward translation process? (2) Do the units generated by back translation also contain semantic information and can they even restore the speech? ![7_image_0.png](7_image_0.png) ## 6.1 Are Discrete Units Suitable Features For St Input? We show how much semantic information is retained for different input forms by comparing the results of the downstream ST task (shown in Table 6). Training from scratch, we find that the U2TT model translates better than TransformerST (19.9 vs.18.0), indicating that **compared to the** speech feature, like Fbank, the discrete unit is a better choice for input and has no information loss in terms of the semantic information required for translation. We assume that this is strongly correlated with the HuBERT-based discrete unit extractor, since HuBERT is designed to learn a combined acoustic and language model over the continuous speech input, which preserves much textual information for the speech. But rigorously, compared to the continuous representation of HuBERT, the discretization procedure does suffer from semantic information loss. Comparing Line II and III, there is a gap of 2.9 BLEU between U2TT and HuBERT-Transformer (where frozen HuBERT Layer-9 continuous representation is taken as input to perform ST), in terms of ST metrics. Fortunately, the gap can be compensated by a) initializing the unit embedding as its corresponding K-means cluster centroid on continuous HuBERT representations as described in Section 3.2 (denoted as pretrained embedding, Line IV), which can slightly close the gap by 0.5 BLEU; and b) simply introducing only 1M additional text and applying DUB, which can achieve 2.7 BLEU improvement (Line V vs. IV). | No. | Methods | BLEU | |-------|-----------------------------|--------| | I | Transformer-ST | 18.0 | | II | HuBERT + Transformer | 22.8 | | III | U2TT | 19.9 | | IV | ,→ w/ pre-trained embedding | 20.4∗ | | V | ,→ w/ DUB-1M | 23.1 | ## 6.2 Can We Recover Faithful Speech From Pseudo-Units? Do the back-translated units capture the semantics of the target language text? Since it is difficult to directly evaluate the correctness of the pseudo-units generated by the back-translation model, we concatenate a unit-based HiFi-GAN vocoder7 with our back-translation model to recover speech from the generated pseudo-units, thus completing the textto-speech (TTS) translation task. 
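Returning briefly to the *pre-trained embedding* initialization of Section 3.2 (Line IV above): it amounts to filling the unit embedding table with the k-means cluster centroids computed on HuBERT's intermediate representations, as sketched below. The random centroids are stand-ins for the real cluster centers, and the dimensions assume the BASE setting (K = 500 clusters, d = 768); the sketch is meant only to make the operation explicit, not to reproduce the exact implementation.

```python
# Sketch of the "pre-trained embedding" initialization: copy the k-means cluster
# centroids (computed on HuBERT representations) into the unit embedding table.
import numpy as np
import torch
import torch.nn as nn

K, d = 500, 768
centroids = np.random.randn(K, d).astype("float32")  # stand-in for real k-means centroids

unit_embedding = nn.Embedding(num_embeddings=K, embedding_dim=d)
with torch.no_grad():
    unit_embedding.weight.copy_(torch.from_numpy(centroids))

# The embedded unit sequence is then fed to the Transformer encoder as usual.
units = torch.tensor([[3, 17, 442, 58]])
e = unit_embedding(units)   # shape: (1, 4, 768)
print(e.shape)
```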
TTS generation quality is measured by ASR-BLEURT, where we transcribe the speech output using a high-quality open-source ASR model8and calculate BLEURT9 with reference transcription.As shown in Table 7, the ASR-BLEURT of beam search and sampling is 0.6 and 0.47 respectively, indicating that **the unit** sequence back-translated from a given target language text can convey its general semantic meaning, which is the guarantee for the success of DUB. We conduct the listening test by checking 30 randomly sampled BT-recovered speeches for semantic consistency with ground-truth. 22 of 30 sentences matched ground-truth speech, while the remaining 8 had minor issues, with only 1 being of low quality and the other 7 missing or repeating 1-2 details. We also provide some generated audio samples in Appendix F to help illustrate the degree of speech restoration. ## 7 Conclusion In this paper, we propose Discrete Unit Backtranslation (DUB), as well as the Unit-to-Text | Generation Method | ASR-BLEURT | |---------------------|--------------| | beam search | 0.60 | | sampling | 0.47 | Table 7: MuST-C En-De tst-COM ASR-BLEURT for text-to-speech translation Translation (U2TT) model for direct speech translation. Our approach successfully migrates the back-translation technique from MT to ST by discretizing the speech signals into unit sequences and making use of extra widely accessible text in the target language. Without using transcription, DUB can achieve an average increase of 5.5 BLEU on MuSTC En-De/Fr/Es over the raw U2TT framework, and achieves comparable performance to the large-scale speech-text joint pre-training models on CoVoST-2 low-resource ST. The analysis experiments also show the potential of such discrete audio units as inputs and outputs for text or speech generation tasks. ## Broader Impact Our proposed model structure with a discrete unit extractor for speech and the unit-to-text translation model, which does not need any transcriptions during training, is particularly relevant for speech translation for more than 3,000 languages and dialects in the world that cannot be transcribed. Since these unwritten languages are typically lowresource, we emphasize that boosting ST performance via text-to-unit back-translation data augmentation, *i.e.* DUB, is very promising. Meanwhile, as a by-product of DUB, TTS translation has significant implications for assisting visually impaired or dyslexic people in understanding the world as well as preserving low-resource unwritten spoken languages. However, as exploratory work, we focus on investigating the potential of using BT to enhance ST performance, while popular large-scale pretraining methods are not employed in this paper. This makes our method slightly inferior to these methods in terms of performance, perhaps. But promisingly, in terms of structure, the model is more general across various modalities and also has more potential to integrate with the methods in NLP area (might be the topic of future research). Also, the models are still far from real industrial applications. For example, the data used for training is much smaller than the scale in reality, while the real speech is noisier and more complex than the open-source dataset, which may require front-end processing. Moreover, the success of our method is partly attributable to the HuBERT representation, which contains certain textual information for the speech, and via experiments, we also find that the quality of discrete units influences the translation performance. 
Nevertheless, learning meaningful discrete units is not the primary goal of HuBERT pre-training, and how to learn discrete units or representations for speech with more *contextual* semantic information can be explored in the future. ## Acknowledgements We thank Fuliang Weng for the careful guidance and revisions to the paper and thank all the anonymous reviewers for their insightful and valuable comments. ## References David Ifeoluwa Adelani, Md Mahfuz Ibn Alam, Antonios Anastasopoulos, Akshita Bhagia, Marta CostaJussá, Jesse Dodge, Fahim Faisal, Christian Federmann, Natalia Fedorova, Francisco Guzmán, et al. 2022. Findings of the wmt 2022 shared task on largescale machine translation evaluation for african languages. In *Proc. of WMT*. Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, et al. 2022. Speecht5: Unified-modal encoder-decoder pre-training for spoken language processing. In *Proc. of ACL*, pages 5723–5738. Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. Common voice: A massivelymultilingual speech corpus. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4218–4222, Marseille, France. European Language Resources Association. Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, et al. 2021. Xls-r: Self-supervised cross-lingual speech representation learning at scale. arXiv preprint arXiv:2111.09296. Alexei Baevski, Steffen Schneider, and Michael Auli. 2019. vq-wav2vec: Self-supervised learning of discrete speech representations. In *Proc. of ICLR*. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449–12460. Alexandre Bérard, Olivier Pietquin, Laurent Besacier, and Christophe Servan. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text translation. In NIPS Workshop on end-to-end learning for speech and audio processing. Nicola Bertoldi and Marcello Federico. 2009. Domain adaptation for statistical machine translation with monolingual resources. In Proceedings of the fourth workshop on statistical machine translation, pages 182–189. Ond rej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In *Proceedings of the First Conference* on Machine Translation, pages 131–198, Berlin, Germany. Association for Computational Linguistics. Ondˇrej Bojar and Aleš Tamchyna. 2011. Improving translation model by monolingual data. In *Proceedings of the sixth workshop on statistical machine* translation, pages 330–336. Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In *Proceedings of the* Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 53–63. Guoguo Chen, Shuzhou Chai, Guanbo Wang, Jiayu Du, Wei-Qiang Zhang, Chao Weng, Dan Su, Daniel Povey, Jan Trmal, Junbo Zhang, et al. 2021. 
Gigaspeech: An evolving, multi-domain asr corpus with 10,000 hours of transcribed audio. arXiv preprint arXiv:2106.06909. Peng-Jen Chen, Ann Lee, Changhan Wang, Naman Goyal, Angela Fan, Mary Williamson, and Jiatao Gu. 2020. Facebook ai's wmt20 news translation task submission. In *Proceedings of the Fifth Conference* on Machine Translation, pages 113–125. Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al. 2022. Wavlm: Large-scale self-supervised pre-training for full stack speech processing. *IEEE Journal of Selected Topics in Signal Processing*, 16(6):1505–1518. Xuxin Cheng, Qianqian Dong, Fengpeng Yue, Tom Ko, Mingxuan Wang, and Yuexian Zou. 2023. M 3 st: Mix at three levels for speech translation. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE. Yu-An Chung, Yu Zhang, Wei Han, Chung-Cheng Chiu, James Qin, Ruoming Pang, and Yonghui Wu. 2021. W2v-bert: Combining contrastive learning and masked language modeling for self-supervised speech pre-training. In 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 244–250. IEEE. Mattia A Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019a. Must-c: a multilingual speech translation corpus. In Proc. of the NAACL-HLT, pages 2012–2017. Association for Computational Linguistics. Mattia A Di Gangi, Matteo Negri, and Marco Turchi. 2019b. Adapting transformer to end-to-end spoken language translation. In *Proc. of INTERSPEECH*, pages 1133–1137. International Speech Communication Association (ISCA). Linhao Dong, Shuang Xu, and Bo Xu. 2018. Speechtransformer: a no-recurrence sequence-to-sequence model for speech recognition. In *Proc. of ICASSP*, pages 5884–5888. IEEE. Qianqian Dong, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, and Lei Li. 2021a. Consecutive decoding for speech-to-text translation. In *Proc. of AAAI*, pages 12738–12748. Qianqian Dong, Rong Ye, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, and Lei Li. 2021b. Listen, understand and translate: Triple supervision decouples end-to-end speech-to-text translation. In *Proc. of* AAAI, pages 12749–12759. Yichao Du, Zhirui Zhang, Weizhi Wang, Boxing Chen, Jun Xie, and Tong Xu. 2022. Regularizing end-toend speech translation with triangular decomposition agreement. In *Proc. of AAAI*, pages 10590–10598. Kevin Duh. 2018. The multitarget ted talks task. http://www.cs.jhu.edu/~kevinduh/a/ multitarget-tedtalks/. Ewan Dunbar, Robin Algayres, Julien Karadayi, Mathieu Bernard, Juan Benjumea, Xuan-Nga Cao, Lucie Miskic, Charlotte Dugrain, Lucas Ondel, Alan Black, et al. 2019. The zero resource speech challenge 2019: Tts without t. In *Proc. of INTERSPEECH*. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In *Proc. of EMNLP*, pages 489–500, Brussels, Belgium. Association for Computational Linguistics. Qingkai Fang, Rong Ye, Lei Li, Yang Feng, and Mingxuan Wang. 2022. Stemm: Self-learning with speechtext manifold mixup for speech translation. In *Proc.* of ACL, pages 7050–7062. Akhbardeh Farhad, Arkhangorodsky Arkady, Biesialska Magdalena, Bojar Ondˇrej, Chatterjee Rajen, Chaudhary Vishrav, Marta R Costa-jussa, España-Bonet Cristina, Fan Angela, Federmann Christian, et al. 2021. Findings of the 2021 conference on machine translation (wmt21). In Proceedings of the Sixth Conference on Machine Translation, pages 1–88. 
Association for Computational Linguistics. Chi Han, Mingxuan Wang, Heng Ji, and Lei Li. 2021. Learning shared semantic space for speech-to-text translation. In *Proc. of ACL - Findings*, pages 2214– 2225, Online. Association for Computational Linguistics. Tomoki Hayashi, Shinji Watanabe, Yu Zhang, Tomoki Toda, Takaaki Hori, Ramon Astudillo, and Kazuya Takeda. 2018. Back-translation-style data augmentation for end-to-end asr. In *2018 IEEE Spoken Language Technology Workshop (SLT)*, pages 426–433. IEEE. Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative backtranslation for neural machine translation. In Proceedings of the 2nd workshop on neural machine translation and generation, pages 18–24. Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. *IEEE/ACM Transactions on Audio,* Speech, and Language Processing, 29:3451–3460. Kenji Imamura, Atsushi Fujita, and Eiichiro Sumita. 2018. Enhancement of encoder and attention using target monolingual corpora in neural machine translation. In *Proceedings of the 2nd Workshop on Neural* Machine Translation and Generation, pages 55–63. Hirofumi Inaguma, Shun Kiyono, Kevin Duh, Shigeki Karita, Nelson Yalta, Tomoki Hayashi, and Shinji Watanabe. 2020. ESPnet-ST: All-in-one speech translation toolkit. In *Proc. of ACL*, pages 302–311. Hirofumi Inaguma, Sravya Popuri, Ilia Kulikov, PengJen Chen, Changhan Wang, Yu-An Chung, Yun Tang, Ann Lee, Shinji Watanabe, and Juan Pino. 2022. Unity: Two-pass direct speech-to-speech translation with discrete units. *arXiv preprint arXiv:2212.08055*. Sathish Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Hyojung Han, Seokchan Ahn, Sangha Kim, Chanwoo Kim, and Inchul Hwang. 2021. Task aware multi-task learning for speech to text tasks. In *Proc. of ICASSP*, pages 7723–7727. IEEE. Eugene Kharitonov, Ann Lee, Adam Polyak, Yossi Adi, Jade Copet, Kushal Lakhotia, Tu Anh Nguyen, Morgane Riviere, Abdelrahman Mohamed, Emmanuel Dupoux, et al. 2022. Text-free prosody-aware generative spoken language modeling. In *Proc. of ACL*, pages 8666–8681. Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. Advances in Neural Information Processing Systems, 33:17022– 17033. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proc. of EMNLP, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, et al. 2021. On generative spoken language modeling from raw audio. *TACL*, 9:1336– 1354. Ann Lee, Hongyu Gong, Paul-Ambroise Duquenne, Holger Schwenk, Peng-Jen Chen, Changhan Wang, Sravya Popuri, Juan Pino, Jiatao Gu, and Wei-Ning Hsu. 2022. Textless speech-to-speech translation on real data. In *Proc. of the NAACL-HLT*. Association for Computational Linguistics. Xian Li, Changhan Wang, Yun Tang, Chau Tran, Yuqing Tang, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2021. Multilingual speech translation from efficient finetuning of pretrained models. In *Proc. of ACL*, pages 827–838. 
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *TACL*, 8:726– 742. Yuchen Liu, Hao Xiong, Jiajun Zhang, Zhongjun He, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019. End-to-end speech translation with knowledge distillation. *Proc. of INTERSPEECH*, pages 1128–1132. Chutong Meng, Junyi Ao, Tom Ko, Mingxuan Wang, and Haizhou Li. 2022. Cobert: Self-supervised speech representation learning through code representation learning. *arXiv preprint arXiv:2210.04062*. Xuan-Phi Nguyen, Sravya Popuri, Changhan Wang, Yun Tang, Ilia Kulikov, and Hongyu Gong. 2022. Improving speech-to-speech translation through unlabeled text. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. *arXiv preprint arXiv:1904.01038*. Sara Papi, Marco Gaido, Matteo Negri, and Marco Turchi. 2021. Speechformer: Reducing information loss in direct speech translation. In *Proc. of EMNLP*. Juan Pino, Qiantong Xu, Xutai Ma, Mohammad Javad Dousti, and Yun Tang. 2020. Self-training for end-toend speech translation. In *Proc. of INTERSPEECH*, pages 1476–1480. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191. Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2020. Fastspeech 2: Fast and high-quality end-to-end text to speech. arXiv preprint arXiv:2006.04558. Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, and Armand Joulin. 2019. Ccmatrix: Mining billions of high-quality parallel sentences on the web. *arXiv preprint arXiv:1911.04944*. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In *Proc. of ACL*, pages 86– 96, Berlin, Germany. Association for Computational Linguistics. Amitay Sicherman and Yossi Adi. 2023. Analysing discrete self supervised speech representation for spoken language modeling. *arXiv preprint* arXiv:2301.00591. Tzu-Wei Sung, Jun-You Liu, Hung-yi Lee, and Lin-shan Lee. 2019. Towards end-to-end speech-to-text translation with two-pass decoding. In *Proc. of ICASSP*, pages 7175–7179. IEEE. Yun Tang, Juan Pino, Xian Li, Changhan Wang, and Dmitriy Genzel. 2021. Improving speech translation by understanding and learning from the auxiliary text translation task. In *Proc. of ACL*. Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401. Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. 2017. Listening while speaking: Speech chain by deep learning. In *2017 IEEE Automatic Speech* Recognition and Understanding Workshop (ASRU), pages 301–308. IEEE. Sei Ueno, Masato Mimura, Shinsuke Sakai, and Tatsuya Kawahara. 2021. Data augmentation for asr using tts via a discrete representation. In 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 68–75. IEEE. Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. *Advances in neural* information processing systems, 30. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
Advances in neural information processing systems, 30. Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, and Emmanuel Dupoux. 2021a. Voxpopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. In *Proc. of ACL*, pages 993–1003. Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, and Juan Pino. 2020. Fairseq S2T: Fast speech-to-text modeling with fairseq. In Proc. of AACL, pages 33–39, Suzhou, China. Association for Computational Linguistics. Changhan Wang, Anne Wu, Jiatao Gu, and Juan Pino. 2021b. Covost 2 and massively multilingual speech translation. In *Proc. of INTERSPEECH*, pages 2247– 2251. Guillaume Wenzek, Vishrav Chaudhary, Angela Fan, Sahir Gomez, Naman Goyal, Somya Jain, Douwe Kiela, Tristan Thrush, and Francisco Guzmán. 2021. Findings of the wmt 2021 shared task on large-scale multilingual machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 89–99. Felix Wu, Kwangyoun Kim, Shinji Watanabe, Kyu Han, Ryan McDonald, Kilian Q Weinberger, and Yoav Artzi. 2022. Wav2seq: Pre-training speech-totext encoder-decoder models using pseudo languages. arXiv preprint arXiv:2205.01086. Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, and Graham Neubig. 2019. Generalized data augmentation for low-resource translation. *arXiv* preprint arXiv:1906.03785. Rong Ye, Mingxuan Wang, and Lei Li. 2021. End-toend speech translation via cross-modal progressive training. In *Proc. of INTERSPEECH*. Rong Ye, Mingxuan Wang, and Lei Li. 2022. Crossmodal contrastive learning for speech translation. In Proc. of NAACL-HLT. Biao Zhang, Barry Haddow, and Rico Sennrich. 2022a. Revisiting end-to-end speech-to-text translation from scratch. In *Proc. of ICML*. Chen Zhang, Xu Tan, Yi Ren, Tao Qin, Kejun Zhang, and Tie-Yan Liu. 2021. Uwspeech: Speech to speech translation for unwritten languages. In Proc. of the AAAI, pages 14319–14327. Weitai Zhang, Zhongyi Ye, Haitao Tang, Xiaoxi Li, Xinyuan Zhou, Jing Yang, Jianwei Cui, Pan Deng, Mohan Shi, Yifan Song, et al. 2022b. The ustcnelslip offline speech translation systems for iwslt 2022. In Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022), pages 198–207. Ziqiang Zhang, Long Zhou, Junyi Ao, Shujie Liu, Lirong Dai, Jinyu Li, and Furu Wei. 2022c. Speechut: Bridging speech and text with hidden-unit for encoder-decoder based speech-text pre-training. In Proc. of EMNLP. Chengqi Zhao, Mingxuan Wang, Qianqian Dong, Rong Ye, and Lei Li. 2021. NeurST: Neural speech translation toolkit. In *Proc. of ACL - System Demonstrations*. Renjie Zheng, Junkun Chen, Mingbo Ma, and Liang Huang. 2021. Fused acoustic and text encoding for multimodal bilingual pretraining and speech translation. In *Proc. of ICML*. 
## A Data Statistics | En→ | Hours | Samples | |-------|---------|-----------| | De | 408 | 234K | | Fr | 492 | 280K | | Es | 504 | 270K | Table 8: Statistics of MuST-C dataset | Languages | Code | Hours | Samples | |-------------|--------|---------|-----------| | English | En | - | - | | French | Fr | 264 | 207374 | | German | De | 184 | 127834 | | Spanish | Es | 113 | 79015 | | Catalan | Ca | 136 | 95854 | | Italian | It | 44 | 31698 | | Russian | Ru | 18 | 12112 | | Chinese | Zh | 10 | 7085 | | Arabic | Ar | 2 | 2283 | | Swedish | Sv | 2 | 2160 | | Latvian | Lv | 2 | 2337 | | Slovenian | Sl | 2 | 1843 | | Tamil | Ta | 2 | 1358 | | Japanese | Ja | 1 | 1119 | | Indonesian | Id | 1 | 1243 | | Welsh | Cy | 2 | 1241 | Table 9: Statistics of CoVoST-2 X-En involved in this paper. | Lang. | TED | WMT† | CCMatrix∗ | Sum | |---------|-------|--------|-------------|-------| | De | 0.2M | 4.5M | 43M | 48M | | Fr | 0.2M | 39M | 50M | 79M | | Es | - | 13M | 61M | 64M | ## B Experimental Details Vocabulary We apply the SentencePiece10 (Kudo and Richardson, 2018) to tokenize the text and discrete units into subwords. We add all the discrete units as special symbols to the joint vocabulary. 10https://github.com/google/ sentencepiece The joint subword tokenizer is learned on all the translation sentences and discrete unit sequences in the ST training set. The vocabulary size is 8000 for both MuST-C and CoVoST-2 experiments. Specifically, for MuST-C experiments, since the number of K-means clusters is 500, the vocabulary is composed of 500 special unit symbols and 7500 text subwords. For CoVoST2 X-En experiments, the vocabulary consists of 1000 special unit symbols representing 1000 clusters of mHuBERT, and 7000 text subwords. Training details We use Adam optimizer with β1 = 0.9, β2 = 0.98, and 4k warm-up updates to optimize the parameters in our model. We train the model with a batch size of 5k tokens. The learning rate is 7e-4 and we apply an inverse square root schedule. The value of label smoothing is set to 0.1. The up-sampling rate r in DUB is set to 32, given the huge volume differences between the BT data and the original data. For MuST-C experiments, we train U2TT and T2UT models of each translation direction under bilingual settings. For CoVoST-2 X-En experiments, we train a multi-lingual X-En model covering 21 translation directions, distinguished by the language tags of the units in different languages. We implement our models based on Fairseq11 (Ott et al., 2019) codebase. All models are trained on 8 Nvidia Tesla-V100 GPUs and take about 400k steps to converge. During inference, We save the checkpoint with the best BLEU on the validation set and average the last 10 checkpoints. We use beam search with a beam size of 5 for each translation direction. Training details for Bi-modal BART The training of bi-modal BART follows the recipe of mBART (Liu et al., 2020). We implemented a mask rate of 0.3, with the replacement of the masked tokens by random tokens at a probability of 0.1. Additionally, the mask length was determined through sampling from a Poisson distribution, with a lambda parameter of 3.5. 
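For clarity, the noising procedure described above for bi-modal BART pre-training can be sketched as follows: span lengths are drawn from a Poisson distribution with lambda = 3.5, spans are masked until roughly 30% of the tokens are covered, and a masked token is replaced by a random token with probability 0.1. This is a simplified, self-contained sketch of the described corruption; the token ids, the mask id, and the exact span bookkeeping of the actual implementation are illustrative assumptions.

```python
# Simplified sketch of the mBART-style noising used for bi-modal BART pre-training.
import numpy as np

def noise_sequence(tokens, mask_ratio=0.3, poisson_lambda=3.5,
                   random_replace=0.1, mask_id=-1, vocab_size=8000, seed=0):
    rng = np.random.default_rng(seed)
    tokens = np.array(tokens)
    n_to_mask = int(round(mask_ratio * len(tokens)))
    masked = np.zeros(len(tokens), dtype=bool)
    while masked.sum() < n_to_mask:
        span = max(1, rng.poisson(poisson_lambda))   # span length ~ Poisson(3.5)
        start = rng.integers(0, len(tokens))
        masked[start:start + span] = True
    noised = tokens.copy()
    replace = rng.random(len(tokens)) < random_replace
    noised[masked & ~replace] = mask_id              # most masked tokens become <mask>
    noised[masked & replace] = rng.integers(0, vocab_size, (masked & replace).sum())
    return noised

print(noise_sequence(list(range(20))))
```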
## C Baseline Models Existing ST Systems We list the ST systems we compared with on different datasets: - **MuST-C** In Table 1, we compare our method with the following: Fairseq ST (Wang et al., 2020), NeurST (Zhao et al., 2021), Espnet ST (Inaguma et al., 2020), E2E-ST-JT (Du et al., 2022), Speechformer (Papi et al., 2021), Cascaded ST (Inaguma et al., 2020), MTL (Tang et al., 2021), Self-training (Pino et al., 2020), SpeechT5 (Ao et al., 2022), SpeechUT (Zhang et al., 2022c) and Revisit ST (Zhang et al., 2022a). - **CoVoST X-En high resource** In Table 2, we compare our method with several baselines from (Wang et al., 2021b), including Transformer-ST, Transformer-ST + ASR pre-train and Cascaded ST, and Revisit ST (Zhang et al., 2022a). - **CoVoST X-En low resource** In Table 3, we compare our method with several existing ST methods, including Transformer-ST, Transformer-ST + ASR pre-train from (Wang et al., 2021b) and large-scale multilingual speech or text pre-training methods: XLS-R (Wu et al., 2022), Wav2seq (Wu et al., 2022), XLSR+mBART-50 (Babu et al., 2021), LNA-E,D (Li et al., 2021). Note that XLS-R is pre-trained on 436K hours of speech across 128 languages. Transformer-ST for MuST-C For a fair comparison, we keep the parameters roughly the same size as DUB, setting two covolutional layers, a 12-layer Transformer encoder and a 6-layer Transformer decoder, with hidden size d = 768, 16 attention heads, and 4096 FFN hidden states, which makes the model size larger than baselines like Fairseq ST (Wang et al., 2020), NeurST (Zhao et al., 2021), and Espnet ST (Inaguma et al., 2020). ## D Scalability How does model size affect the results of our method? How much improvement does the raw text in the target language bring to our method? To this end, we take MuST-C English-German translation as an example. We set the model size to 73M, 176M and 260M parameters respectively (the specific hyperparameter settings are shown in Table 11), and introduce extra 1M, 10M, and 48M German sentences. Figure 3 shows the BLEU scores of different sizes of models, with different amounts of monolingual back-translation data added. In general, regardless of the model size, introducing **more text** brings better performance. When we introduce a large amount of back-translated data, **the larger** model gets significantly better performance. We find that when no or less back-translated data is introduced, the performance of the large model is instead not optimal. This is because the large model is prone to overfitting when the original training data is small, but as the monolingual data is gradually introduced, the advantage of the large model becomes obvious, without replying to the transcription, introducing 48M back-translated pairs, the model with 260M parameters can boost up to 6.1 BLEU on En-De. | Model | Encoder | Decoder | Hidden | | |---------|-----------|-----------|----------|------| | Params | Layers | Layers | Dim | | | 1 SMALL | 73M | 6 | 6 | 512 | | 2 BASE | 176M | 12 | 6 | 768 | | 3 LARGE | 260M | 12 | 6 | 1024 | Table 11: Hyper-parameter settings for the models in Figure 3. ![13_image_0.png](13_image_0.png) ## E Comparison With Cascaded System It could be argued that our model employs a cascaded architecture, comprising a unit extractor and a unit-to-text translation model. The traditional cascade ST system (ASR+MT) can also be enhanced through applying back-translation to improve its MT model. 
In Table 12, we compare the performance of DUB with the BT-enhanced cascaded ST system both utilizing 10M unpaired text. By comparison, we can find that the BLEU score of the U2TT model is inferior to that of the cascaded system when utilizing 10 million unpaired text samples. This discrepancy can likely be attributed to the higher baseline performance of the cascaded system. Additionally, DUB demonstrates a superior relative improvement in BLEU score compared to the cascaded system. Moreover, the discrete unit | Method | Extra Text | BLEU | ∆BLEU | |-------------|--------------|--------|---------| | U2TT | - | 20.4 | - | | w/ DUB | ✓ | 25.0 | 4.6 | | Cascaded ST | - | 23.1 | - | | w/ MT-BT | ✓ | 26.0 | 2.9 | extractor is obtained through unsupervised training on unlabeled speech, which requires no transcriptions compared with the ASR system trained on speech-transcription pairs. ## F Cases Of Text-To-Speech Translation In Table 13, we show two cases of German-English text-to-speech translation on MuST-C En-DE TSTCOM set. In CASE 1, our text-to-speech translation system generates speech with the same content and a similar spectrogram as reference speech. In CASE 2, the synthetic speech deviated slightly from the reference speech, but the translation is correct - "release" has the same meaning as "shoveling out" and "all the time " just means "all along". The samples of generated audio are included in https://anonymous.4open. science/r/DUB/ttss_samples. ![15_image_0.png](15_image_0.png) German: Einzige Land der Welt. CASE 1 ![15_image_1.png](15_image_1.png) CASE 2 ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Broader Impact ✓ A2. Did you discuss any potential risks of your work? Broader Impact ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. 
## C ✓ **Did You Run Computational Experiments?** Section4 & section5 & section6 & appendix D & appendix E ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.2 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section4.2 & appendix B D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
nayyeri-etal-2023-knowledge
Knowledge Graph Embeddings using Neural Itô Process: From Multiple Walks to Stochastic Trajectories
https://aclanthology.org/2023.findings-acl.448
Knowledge graphs mostly exhibit a mixture of branching relations, e.g., hasFriend, and complex structures, e.g., hierarchy and loop. Most knowledge graph embeddings have problems expressing them, because they model a specific relation r from a head h to tails by starting at the node embedding of h and transitioning deterministically to exactly one other point in the embedding space. We overcome this issue in our novel framework ItôE by modeling relations between nodes by relation-specific, stochastic transitions. Our framework is based on stochastic Itô processes, which operate on low-dimensional manifolds. ItôE is highly expressive and generic subsuming various state-of-the-art models operating on different, also non-Euclidean, manifolds. Experimental results show the superiority of ItôE over other deterministic embedding models with regard to the KG completion task.
# Knowledge Graph Embeddings Using Neural Itoˆ **Process:** From Multiple Walks To Stochastic Trajectories Mojtaba Nayyeri1, Bo Xiong1**, Majid Mohammadi**2, Mst. Mahfuja Akter6, Mirza Mohtashim Alam6, Jens Lehmann4,5**, Steffen Staab**1,3 1University of Stuttgart, 2Vrije Universiteit Amsterdam, 3University of Southampton, 4TU Dresden, 5Amazon (work done outside of Amazon), 6University of Bonn ## Abstract Knowledge graphs mostly exhibit a mixture of branching relations, e.g., *hasFriend*, and complex structures, e.g., hierarchy and loop. Most knowledge graph embeddings have problems expressing them, because they model a specific relation r from a head h to tails by starting at the node embedding of h and transitioning deterministically to exactly one other point in the embedding space. We overcome this issue in our novel framework ItoˆE by modeling relations between nodes by relation-specific, stochastic transitions. Our framework is based on stochastic Itoˆ processes, which operate on low-dimensional manifolds. ItoˆE is highly expressive and generic subsuming various stateof-the-art models operating on different, also non-Euclidean, manifolds. Experimental results show the superiority of ItoˆE over other deterministic embedding models with regard to the KG completion task. ## 1 Introduction Knowledge graphs (KGs) play a central role in many AI-related tasks (Nickel et al., 2016) such as recommendation systems (Lukovnikov et al., 2017) and question answering (Zhang et al., 2016). KGs represent real-world knowledge as a set of facts in the form of triples *(entity, relation, entity)*, e.g., *(Alice, FriendOf, Bob)*. Entities are nodes of a graph and relations are the directed edges. KGs are highly incomplete which adversely affects the outcome of various KG-centered tasks. To tackle this problem, various link prediction approaches have been proposed to leverage the existing links for the prediction of new ones (a.k.a., knowledge graph completion). Among existing approaches (Wang et al., 2017; Ji et al., 2021), KG embedding (KGE) succeeded and became a popular line of work. KGE models map entities and relations in KGs into a low dimensional geometric space and measure the plausibility of each triple *(entity, relation,* ![0_image_0.png](0_image_0.png) entity) by a scoring function f that uses the embedded vectors of this triple **(entity, relation, entity)** to gauge its plausibility by a real, positive value f(**entity, relation, entity**). Prominent examples of KGEs include translational/rotational families (Wang et al., 2014; Lin et al., 2015), bilinear families (Nickel et al., 2011; Yang et al., 2014) and neural network families (Dettmers et al., 2018; Nguyen et al., 2018). However, these models are all defined in Euclidean space that suffers from modeling complex structures such as hierarchies (Chami et al., 2020) in low-dimensional vector space. Therefore, KG embeddings based on non-Euclidean geometric spaces such as hyperbolic space (Chami et al., 2020; Balazevic et al., 2019) have been proposed. Both lines of work often formulate the relationspecific transitions between subsequent nodes in a deterministic way and represent them, e.g., by translations or rotations, implying that with regard to a particular relation each node is connected to a single other node. 
In a broader view, this implies that starting from an embedded node with a given single relation type, there will be only one walk with the length m, which is m subsequent relation-specific transitions from the starting node, and this walk will lead to another specific embedded node. However, starting at any specific node and considering a branching relation such as *hasFriend*, there may be several relation-specific walks of length m. These walks define a group of nodes in distance m. Figure 1 illustrates the limits of traditional embedding models based on translation or rotation with deterministic transitions (a) and contrasts it with the tree-like structure that unfolds in a non-deterministic model (b) able to represent a branching relationship. In this paper, we formalize probabilistic walks of length m − 1 between two groups of nodes (Ωt1 , Ωtm) in order to represent branching relationships. Each group of nodes Ωti from start Ωt1 to end Ωtm is represented by a random variable Xti , which indicates a probabilistic distribution on the manifold Xti : Ωti → M, i = 1*, . . . , m*. The sequence of random variables S = {Xt1 , Xt2 , . . . , Xtm} constitutes a stochastic process. A sample, also called realization, of the stochastic process S is a walk with m−1 transitions containing m entity representations {et1 , . . . , etm} on the manifold, eti ∈ M. We model probabilistic transitions between groups of nodes using the Itoˆ process, which defines transitions between nodes through an integral containing drift and diffusion. Drift models the deterministic nature of a transition, while diffusion captures stochastic variation. Drift and diffusion operations are learned separately via a relationspecific neural network. Owing to the probabilistic perspective, we develop a KGE model, called ItoˆE, that uses the Itoˆ process for stochastic transitions as well as various, also non-Euclidean, manifolds (i.e., sphere, Poincaré ball, hyperboloid) to support the modeling of heterogeneous graph structures on manifolds. We also provide an extensive theoretical investigation and prove that ItoˆE: a) is fully expressive; b) is capable of encoding various structures as stochastic processes; c) subsumes various state-of-the-art KGE models, including ComplEx and QuatE, d) models various relational patterns, e.g., symmetric patterns. Experimental results show the superiority of our model, especially in low-dimensional space. ## 2 Related Work Euclidean KGE Models The earliest KGE models employ Euclidean geometry, i.e., a d dimensional real space R d, with its corresponding Euclidean distance function as well as inner product. TransE represents each relation as a translation from a head to a tail embedding. Several variants of TransE have been developed to model one-to-many relations, as well as symmetric and reflexive patterns (Wang et al., 2014; Lin et al., 2015). DistMult (Yang et al., 2014) proposes a tensor factorization with a diagonal relation matrix. The model captures symmetric relations, but not anti-symmetry. ComplEx(Trouillon et al., 2016) and QuatE (Zhang et al., 2019) models have been proposed as extensions of DistMult from the real space to the (hyper)complex space. RotatE (Sun et al., 2019) models each relation as rotation in the complex space using the Euler formula, which captures symmetric, anti-symmetric, inverse, and composition patterns. 
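To make the contrast with deterministic transitions concrete, the following minimal sketch (our illustration, not the authors' code; PyTorch is assumed) shows the TransE and RotatE scoring functions discussed above. In both cases the relation maps a given head embedding to exactly one target point, which is the point-wise determinism that ItôE relaxes.

```python
import math
import torch

def transe_score(h, r, t):
    # TransE: the relation acts as a single translation, score = -||h + r - t||
    return -torch.norm(h + r - t, p=2, dim=-1)

def rotate_score(h, r_phase, t):
    # RotatE: the relation acts as an element-wise complex rotation e^{i*theta}
    rotation = torch.polar(torch.ones_like(r_phase), r_phase)
    diff = torch.view_as_complex(h) * rotation - torch.view_as_complex(t)
    return -torch.norm(diff.abs(), p=2, dim=-1)

d = 32
h, r, t = torch.randn(d), torch.randn(d), torch.randn(d)
print(transe_score(h, r, t))                     # one deterministic target point
h_c, t_c = torch.randn(d, 2), torch.randn(d, 2)  # complex embeddings as (real, imag)
phase = torch.rand(d) * 2 * math.pi
print(rotate_score(h_c, phase, t_c))
```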
Apart from the mentioned shallow embedding models, there is a thread of neural network KGE models such as Neural Tensor Network (Socher et al., 2013), ConvE (Dettmers et al., 2018), and ConvKB (Nguyen et al., 2018). In summary, these models cannot capture various graph structures, especially in low dimensional space due to the underlying Euclidean space. A series of graph embedding models have been proposed based on random walks in the graph space. Node2vec (Grover and Leskovec, 2016; Ristoski and Paulheim, 2016; Portisch and Paulheim, 2022; Huang et al., 2021; Perozzi et al., 2014) are among the models which perform a biased random walk in graph space and compute the low dimensional representation of nodes in such a way to maximize the likelihood of preserving network neighborhoods of nodes. The random walk-based models are not among state-of-the-art KGE models in the link prediction task. In addition, in most cases, the random walk is performed in the graph space to obtain the sequences of nodes for embedding, but the notion of walk in the embedding space is neglected as there is no transition function in the embedding space to model this. Non-Euclidean KGE Models Euclidean-based KGE models are not capable of preserving complex graph structures in a low-dimensional space. However, embedding KGs on non-Euclidean manifolds has shown promising performance in the preservation of a few structures, especially in lowdimensional spaces. (Nickel and Kiela, 2017; Chami et al., 2020; Balazevic et al., 2019; Weber and Nickel, 2018) showed the advantage of Poincaré Ball and other geometries for embedding graphs with various structures including hierarchical structures. 5∗E (Nayyeri et al., 2021a) utilized the projective geometry with the five main transitions, namely translation, rotation, inversion, reflection, and homothety, to capture heterogeneous structures such as loop-path subgraphs. (Nayyeri et al., 2021b) embedded KGs on the vector field. UltraE (Xiong et al., 2022) considers a mixture manifold–pseudo-Riemannian space that generalizes hyperbolic and spherical spaces. Other manifold-based KGEs can be found in (Suzuki et al., 2019). Overall, the mentioned models suffer from the same problem mentioned for the Euclidean counterpart, i.e., deterministic transitions between embedded nodes. ## 3 Preliminaries Stochastic Process Let T be an arbitrary index set. A *stochastic process* is a collection of random variables S = {Xt: t ∈ T} defined on a probability space P with the index set T of size |T|. All random variables Xt: Ωt −→ R d, et7−→ Xt(et), t ∈ T belong to the same probability space. A random experiment (realization) is a selection of an outcome Xt(et) ∈ M at random considering the probability measure P. A *sample path* of a stochastic process S = {Xt: t ∈ T} is a function from t to Xt(et) using an ordered index set T, giving us a random walk. Figure 2b illustrates a random walk in the vector space with 50000 steps. Brownian Motion A stochastic process B = {B(t), t ≥ 0} defined on a probability space P = (Ω, F, P) (F is a sigma algebra (events)) is a Brownian motion if a) B(0) = 0; b) B has independent and stationary increment, i.e., B(t1), B(t2)−B(t1)*, . . . , B*(tn)−B(tn−1) are independent random variables for 0 < t1 *< . . . < t*n; c) B has Gaussian increments, i.e., B(tn + α) − B(t) ∼ N(0, α), ∀t ≥ 0*, α >* 0; and d) B has continuous sample paths, i.e., B is continuous in t. Figure 2a shows the evolution of two tree structures via the Brownian motion of several particles. 
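As a small illustration of the definition above (and of the kind of sample paths sketched in Figure 2), the following NumPy snippet simulates Brownian motion by starting at B(0) = 0 and cumulatively summing independent Gaussian increments with variance Δt; the step size, number of particles, and dimensionality are arbitrary choices for illustration.

```python
import numpy as np

def brownian_paths(n_paths=5, n_steps=50000, dt=1e-4, d=2, seed=0):
    rng = np.random.default_rng(seed)
    # independent Gaussian increments with variance dt (properties b and c)
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, d))
    paths = np.cumsum(increments, axis=1)
    # prepend B(0) = 0 (property a)
    return np.concatenate([np.zeros((n_paths, 1, d)), paths], axis=1)

paths = brownian_paths()
print(paths.shape)   # (5, 50001, 2): 5 particles, 50001 time points, 2-D space
```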
Brownian motion is suitable to model probabilistic branching in random walks as path diffusion when traversing the embedded KG in a vector space. In our setting, we will write interchangeably B(et) and B(et) for B(t). Itoˆ**Process** The stochastic process S = {Xt, t ≥ 0} that solves the following integral is called an Itoˆ ![2_image_0.png](2_image_0.png) process, $$X_{t}=X_{0}+\int_{0}^{t}a(X_{s},s)\,ds\,+\,\int_{0}^{t}b(X_{s},s)\,dB_{s},\tag{1}$$ where t ≥ 0 and X0 is a scalar starting point. {a(Xt, t) : t ≥ 0}, {b(Xt, t) : t ≥ 0} are stochastic processes which are called *drift* and *diffusion*, respectively. B is Brownian motion, dB is normally distributed with zero mean and variance dt. The above formulation of Itoˆ process is approximated by the *Euler-Maruyama* approximation as follows $$\begin{array}{l}{{X(t_{n+1})=X(t_{n})+a(X(t_{n}),t_{n})\Delta t+}}\\ {{b(X(t_{n}),t_{n})\Delta B(t_{n}),\quad t_{n}=n\Delta t.}}\end{array}$$ $$\mathbf{\Sigma}$$ (2) Non-Euclidean Geometry In this part, we introduce three popular non-Euclidean manifolds namely spherical S = {x ∈ R d+1|⟨x, x⟩ = 1}, Hyperboloid H = {x ∈ R d+1|⟨x, x⟩ =1K}, and Poincaré ball B = {x ∈ R d| ∥x∥ < − 1 K}. K > 0 is curvature. The tangent space T K x at a point x on manifold is a d-dimensional vector space. This space covers all possible directions of paths on a manifold starting from x. Each point on the tangent space is mapped to the manifold via an exponential map. Given v as a tangent vector at point x on the manifold, the exponential map for sphere, Hyperboloid and Poincaré ball are $$exp_{\mathbf{x}}(\mathbf{v})=cos(\|\mathbf{v}\|)\mathbf{x}+sin(\|\mathbf{v}\|)\frac{\mathbf{v}}{\|\mathbf{v}\|},\tag{3}$$ $$cosh(\sqrt{|K|}\|\mathbf{v}\|)\mathbf{x}+\mathbf{v}\frac{sinh(\sqrt{|K|}\|\mathbf{v}\|)}{\sqrt{|K|}\|\mathbf{v}\|},$$ and $\mathbf{x}\oplus_{K}(tanh(\sqrt{|K|}\frac{\lambda_{\mathbf{x}^{K}}\|\mathbf{v}\|}{2})\frac{\mathbf{v}}{\sqrt{|K|}\|\mathbf{v}\|})$, respectively. The exponential map projects the tangent vector at a point x on a manifold to another point laying on the geodesic curve, i.e, a curve with the shortest distance on the manifold. K is curvature and ⊕K is the Möbius addition (Balazevic et al., 2019). Note that the tangent vector is defined as v = dx dt , which is orthogonal to the manifold at the point x. Later in the paper, we show that by using the Itoˆ process, as stochastic differential equations, we can derive a stochastic process evolving on the manifold. ## 4 ItoˆE: Neural Itoˆ **Process Embedding** We introduce ItoˆE, a novel KGE model that utilizes the stochastic processes on manifolds for KG embedding. ItoˆE is capable of preserving various graph structures and capturing branching relations by modeling the evolution of graph structures in the embedding space as a stochastic process. KGE models have four essential components: entity and relation representation, score function, and loss function. In the following, these four components of ItoˆE are explained. Entity Representation Let's suppose that E represents the collection of all entities present in the knowledge graph (KG). Each symbolic entity e ∈ E in a KG is embedded on a d-dimensional manifold M, i.e., e ∈ M. Therefore, entity embeddings are points on the manifold. In the proposed model, we use Poincaré Ball, Hyperboloid, Euclidean, and Sphere manifolds. Relation Representation Each fact in a KG is represented by a triple (et*, r, e*t+1). 
Because the entities in a triple are subsequent but with arbitrary indexes, we use the notation (etn*, r, e*tn+1 ) to represent the triple, where n is between 1 and |E| (number of entities in the KG). Most KGE methods model the transition from etn to etn+1 via a relation-specific transition (e.g., translation, or rotation). Therefore, each relation is modeled as a point-wise deterministic transition. However, in a broader view, the relational dependencies between nodes are stochastic processes, i.e., a transition to a tail given a head node and a relation happens with a probability so that the sequence of such probabilistic transitions constitutes a stochastic process. Each random variable in the stochastic process includes all the entities E, a few of which has a non-zero likelihood since the transition from a given entity is only possible to its neighbors. The group of neighbors at each step together with their likelihood in a probability space is considered a random variable. As a result, there ![3_image_0.png](3_image_0.png) is a relational mapping between two random variables Xtn, Xtn+1 , associated with the two groups of nodes Ωtn, Ωtn+1 at each step. In the following, we model a relation r (defining transitions between nodes) as a stochastic process Sr. To explain this, let us view the notion of walks formed by a relation r from a stochastic processes angle. A sequence of symbolic entities, connected by a relation r forms a walk with a particular length. A walk of length n − 1 includes n entities and is shown by Pr = {et1 , et2 , . . . , etn }. Assume that there are multiple walks from a set of given entities Ωt1 associated to Xt1 , to a set of entities in Ωtn associated to Xtn . The transition between the nodes in a walk is done randomly according to a distribution. During traversing the graph from a starting node taken from Xt1 to a target node, taken from Xtn , and at each step ti, there are ni possible options for selecting the next node. Therefore, traversing a graph with walks of length n − 1 (with relation r) leads to a stochastic process Sr = {Xt1 , Xt2 , . . . , Xtn }. Consequently, each relation in the KG is represented by a stochastic process. In this paper, among various stochastic processes, we employ the Itoˆ process for modeling each relation r due to the simplicity of implementation and controlling drift and diffusion. The Itoˆ process is defined as $$X_{t}=X_{0}+\int_{0}^{t}a_{r}(X_{s},s)\,ds\,+\,\int_{0}^{t}b_{r}(X_{s},s)\,dB_{s},\,t\geq0,\tag{4}$$ where ar(*., .*) and br(*., .*) are relation-specific drift and diffusion in the Itoˆ integral, respectively. The drift part captures the deterministic transitions, while the diffusion part captures the stochastic transitions. In this equation, we consider the random variables corresponding to the group of entities that can be seen at each step as a continuous representation. For implementation, we provide the Euler-Maruyama approximation (Jahnke, 2016) as a well-known approximation for discrete space: $$\begin{array}{c}{{X(t_{n+1})=X(t_{n})+a_{r}(X(t_{n}),t_{n})\Delta t+}}\\ {{b_{r}(X(t_{n}),t_{n})\Delta B(t_{n}),\,t_{n}=n\Delta t,}}\end{array}\tag{5}$$ where X(ti) is the random variable at step i. For simplicity, we set ∆t = 1 from now onward. Note that this approximation is used for the Euclidean manifold. The drift ar(X(tn), tn) and diffusion br(X(tn), tn) parts are parameterized by two separate multi-layer neural networks. In this way, both drift and diffusion at each step are learned by the model. 
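The following is a hedged sketch of how the Euclidean Euler–Maruyama step in Eq. (5) could be implemented: two separate relation-specific networks produce the drift a_r and the diffusion b_r, a forward pass performs one stochastic transition, and the Euclidean form of the score is the negative distance between the observed tail and that transition. The class name, hidden width, and network depth are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class RelationItoStep(nn.Module):
    """One Euler-Maruyama transition for a single relation r (Euclidean case)."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        # two separate multi-layer networks parameterize drift a_r and diffusion b_r
        self.drift = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.diffusion = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x, dt=1.0, sigma=1.0):
        # X(t_{n+1}) = X(t_n) + a_r(X(t_n)) * dt + b_r(X(t_n)) * dB,  dB ~ N(0, sigma^2)
        dB = sigma * torch.randn_like(x)
        return x + self.drift(x) * dt + self.diffusion(x) * dB

    def score(self, head, tail, dt=1.0, sigma=1.0):
        # Euclidean form of the score: negative distance between the observed tail
        # and one stochastic transition taken from the head embedding.
        return -torch.norm(tail - self.forward(head, dt, sigma), dim=-1)

step = RelationItoStep(dim=32)
heads, tails = torch.randn(8, 32), torch.randn(8, 32)  # a batch of entity embeddings
print(step(heads).shape)                # torch.Size([8, 32]) - predicted next points
print(step.score(heads, tails).shape)   # torch.Size([8])     - triple scores
```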
To enable the Itoˆ process acting on a manifold, it is essential to obtain the tangent vector v. The tangent vector determines the direction of movement on manifold M. The exponential map (Equation 3) is used to map the two subsequent random variables on the manifold while considering drift and diffusion as follows $$\begin{array}{c}{{X_{t_{n+1}}=e x p_{X_{t_{n}}}(a_{r}(X(t_{n}),t_{n})\Delta t+}}\\ {{b_{r}(X(t_{n}),t_{n})\Delta B(t_{n})),\,t_{n}=n\Delta t.}}\end{array}\tag{6}$$ Therefore, each two subsequent sampled entities lie on a geodesic curve (shortest path). Because the above equation is a stochastic process on a manifold, at each iteration of batch learning, a set of entities in batch triples are observed randomly to hold this equation (realization) as follows: $$\begin{array}{l}{{X_{t_{n+1}}(e_{t_{n+1}})=e x p_{X_{t_{n}}(e_{t_{n}})}(a_{r}(X_{t_{n}}(e_{t_{n}},t_{n}))}}\\ {{\Delta t+b_{r}(X_{t_{n}}(e_{t_{n}},t_{n}))\Delta B(t_{n})),\,t_{n}=n\Delta t,}}\end{array}$$ where the equation is held for each sample walk. Note that Xti (eti ) = eti ∈ M, i = 1*, . . . , n* is entity embeddings. Scoring Function For a given triple (etn*, r, e*tn+1 ), the score is as follows $$\begin{array}{l}{{f(e_{t_{n}},r,e_{t_{n+1}})=-\|e_{t_{n+1}}-e x p_{e_{t_{n}}}(a_{r}(e_{t_{n}},t_{n}))}}\\ {{\Delta t+b_{r}(e_{t_{n}},t_{n})\Delta B(t_{n}))\|,\ \ t_{n}=n\Delta t.}}\end{array}$$ For positive (negative) triples, f(etn*, r, e*tn+1 ) is a high (low) value. That is, the two sampled entities etn, etn+1 lie on a geodesic curve on a manifold. Loss Function For training the model, we use the following loss function (Chami et al., 2020) $\mathcal{L}=\sum_{e^{\prime}\in\mathcal{E}}log(1+exp(y_{e^{\prime}}(f(e_{t_{n}},r,e^{\prime})+\delta_{e_{t_{n}}}+\delta_{e^{\prime}}))),$ where $y_{e^{\prime}}=1$ if $e^{\prime}=e_{t_{n+1}}$, and $y_{e^{\prime}}=-1$ if e′ ̸= etn+1 , and δetn and δe′ are trainable entity biases. In the next section, we present important insights about our formulation, followed by theoretical justification for the core formulation of our model. ## 5 Insights And Theoretical Analysis Memory Complexity In ItoˆE, the number of relation parameters grows linearly with the relation's dimensionality. Hence, ItoˆE's space complexity is O(Ne × de + Nr × dr), where Ne and Nr are the numbers of entities and relations, de and dr are the embedding dimensionality of entities and relations, respectively. The additional parameters come from the neural network that approximates the ItoˆE' process, which is in our case, shared across all entities. Relational Sub-structures Here we show the capability of the stochastic process in equation (5) for modeling various graph sub-structures. To this end, let us have a sample walk from the stochastic process Sr, which is {Xt1 (et1 )*, . . . , X*tn(etn)} = {et1*, . . . ,* etn }. A stochastic process Sr inherently covers various graph structures such as hierarchy, loop, and path. This is due to the fact that various sample walks over a stochastic process generate different parts of a subgraph. Figure 3 shows an example of a stochastic process for the relation *ConnectedT o* which forms several relational and structural patterns such as loop and path by sampling from the process in the airport example (airports are connected with various shapes). Another example is a tree-like structure. For tree-like structures (see Figure 1), Xt1 contains only a root node, and Xtn contains all the leaves. Xti , i = 2*, . . . , n* − 1 generates the intermediate nodes from the root to the leaves. 
Therefore, any walk from the root node to each of the leaves is a sample taken from the stochastic process. The relation-specific stochastic process generates various sample walks at random covering various parts of a sub-graph. Subsumption of Other KGE Models ItoˆE provides a general framework that covers various baselines and state-of-the-art KGE models. We prove, in this part, that our model subsumes TransE, RotatE, QuatE, 5*E, ComplEx, and DistMult. That is, given any set of triples with arbitrary true/false labeling, any score value represented by each of the mentioned models for the triples in the set is also represented by the score function of ItoˆE. In this regard, the following theorem holds: Theorem 1. Itoˆ*E subsumes TransE, RotatE, DistMult, ComplEx, QuatE, and 5*E.* As a consequence of Theorem 1, ItoˆE is fully expressive and capable of capturing various graph structures and relational patterns that each of the mentioned models is capable of. Therefore, the following corollaries hold: Corollary 1. Itoˆ*E is fully expressive, i.e., for every* ground truth over an arbitrary KG, there are assignments to the entities and relations embedding to capture the ground truth. Corollary 2. Itoˆ*E models symmetric, antisymmetric, composition, inversion, transitive, and* reflection patterns. Corollary 3. Itoˆ*E models one-to-many relation.* Corollary 4. *a) Let* L n r1 be a loop structure with a single relation r1 and n nodes. ItoˆE models the loop structure L n r1 . b) Let P n r2 be a path structure with relation r2 and n *nodes. In addition, each nodes of* L n r1 is connected to one node in P n r1 . The combined structure is denoted by LPn r1r2r3 . Itoˆ*E models* LPn r1r2r3 . c) Itoˆ*E models loop-path structure with single* relation r1*, i.e.,* LPn r1r1r1 . ## 6 Experiments And Results Experimental Setup In this section, we evaluate the performance of ItoˆE against various state-ofthe-art KGE models in the link prediction task. Our evaluation in this section includes link prediction in low-dimensional space, analysis on capturing complex structures, analysis of capturing hierarchical structures, and time and memory complexity. Further evaluations including results per manifold, variance of results of our model, influence of embedding dimension, and discussion on the effect of loss function on deterministic models can be found in appendix. Evaluation Metrics We use four standard metrics for link prediction namely Mean Reciprocal Rank (MRR), Hits@k (k=1,3,10). To compute each of the metrics, we use the procedure in (Bordes et al., 2013; Lacroix et al., 2018). For each test triple (etn*, r, e*tn+1 ), we first replace the head entity etn by each of the entities in the dictionary, i.e., e′ *∈ E − {*etn }. This results in ne corrupted triples {(e′*, r, e*tn+1 )}, where ne is the number of entities in the KG. We filtered this set by removing all triples that are already appeared the dataset as well as self loop. We then compute the scores of the original test triple (etn*, r, e*tn+1 ) and the corrupted triples {(e′*, r, e*tn+1 )}, sort them based on scores and rank them. The resulted rank of the original triple is the left rank rl. The same procedure is performed to compute the right rank by the corruption of the tail entities. rr denotes the right rank. The average of the left and the right ranks is denoted by ra. The mean rank of all testing triples is MR. The percentage of the test triples ranked lower than k = 1, 3, 10 denotes Hits@k. 
MRR is the average reciprocal of rank for all testing triples. Environment and Hyperparameters We implemented1 our model using Python and PyTorch library. We added reciprocal relations to the training samples as a standard technique used in (Kazemi and Poole, 2018; Lacroix et al., 2018). We added N3 regularization for training the models (Lacroix et al., 2018). Because one of the main goal of this paper is modeling graphs in low dimensional space, we follow the common practice of existing works in low dimensional embedding (Chami et al., 2020) and trained the models in a low dimension (d = 32). The other dimensions have been done as further analysis in the appendix. We split data into several batches and used the Adagrad/Adam as an optimizer. An early stopping technique based on validation MRR has been used to terminate the running and perform testing. Batch size b, learning rate lr, N3 regularization coefficient α are among the hyperparameters used in this paper. In addition, we set the number of hidden layers for drift and diffusion neural networks to two. l denotes the number of neurons in the hidden layer of each neural networks. The used distribution for Brownian motion ∆B is a normal distribution with zero mean and σ variance. For simplicity, we set σ = 1. Due to randomness, for our model, we perform experiments 10 times and report the average results in Table 1. Because the variances were low, we did not report them in the main table. The manifold M is selected from Poincaré ball B*, Hyperboloid* 1https://github.com/ColdMist/ItoE ![6_image_0.png](6_image_0.png) model H*, Euclidean real manifold* R and *spherical* manifold S. The optimal hyperparameters per each dataset are reported in a separate table in appendix. Dataset We used the two standard benchmark datasets namely FB15k-237 (Toutanova and Chen, 2015) and WN18RR (Dettmers et al., 2018) for evaluating ItoˆE on static KGs. Both datasets contain structural and relational patterns including symmetric/anti-symmetric and composition patterns. In addition, WN18RR contains hierarchical structures associated with *hypernym* and *part-of*. Furthermore, both datasets include various types of relationships, including one-to-one relationships, one-to-many relationships (where a subject can have multiple objects but each object has only one ![6_image_2.png](6_image_2.png) ![6_image_1.png](6_image_1.png) subject), many-to-one relationships, and many-tomany relationships (where a subject can have multiple objects and an object can have multiple subjects). KGE Models Two classes of baselines and stateof-the-art KGE models have been selected as competitors: a) baselines KGEs: TransE, DistMult, and ComplEx, b) state-of-the-art KGEs: RotatE, QuatE, 5∗E, MurP/MurE, and RotH/RefH/AttH. We trained these models using entity bias, and cross-entropy loss, regularization, and also we added reverse triples as in (Chami et al., 2020). Such techniques improves the performance of the models including TransE comparing to their original results. MurP, RotH/RefH/AttH employ Poincaré ball as a non-Euclidean manifold to preserve hierarchical structures. RotH additionally takes the advantage of rotation in hyperbolic space to model various relational patterns such as symmetry, anti-symmetry, inversion, and composition. To train ComplEx, QuatE, RotH/RefH/AttH, MurP, DistMult, and our model, we enriched the data with reverse triples, as the standard technique employed in (Kazemi and Poole, 2018), and used N3 regularization (Lacroix et al., 2018). 
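For concreteness, the snippet below is a compact reading of the filtered ranking protocol described above: each test triple is corrupted on the tail side, corruptions that already appear as true triples are removed, the gold triple is ranked by score, and MRR and Hits@k are averaged. The paper additionally corrupts heads and averages left and right ranks; `score_fn` is a placeholder for any KGE scoring function.

```python
import numpy as np

def filtered_metrics(test_triples, all_true, n_entities, score_fn, ks=(1, 3, 10)):
    """Tail-side filtered ranking; the paper averages head- and tail-side ranks."""
    ranks = []
    for h, r, t in test_triples:
        # keep the gold tail plus every corruption that is NOT already a true triple
        candidates = [e for e in range(n_entities) if e == t or (h, r, e) not in all_true]
        gold_score = score_fn(h, r, t)
        rank = 1 + sum(1 for e in candidates if e != t and score_fn(h, r, e) > gold_score)
        ranks.append(rank)
    ranks = np.asarray(ranks, dtype=float)
    return {"MRR": float(np.mean(1.0 / ranks)),
            **{f"Hits@{k}": float(np.mean(ranks <= k)) for k in ks}}

# toy usage with a dummy scorer
triples = {(0, 0, 1), (0, 0, 2), (1, 0, 2)}
dummy_score = lambda h, r, t: -abs(h + 1 - t)   # placeholder scoring function
print(filtered_metrics([(0, 0, 1)], triples, n_entities=4, score_fn=dummy_score))
```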
| Model | FB15k-237 | WN18RR | | | | | | | |---------------|-------------|----------|------|------|------|------|------|------| | MRR | H@1 | H@3 | H@10 | MRR | H@1 | H@3 | H@10 | | | TransE | .295 | .210 | .322 | .466 | .366 | .274 | .433 | .515 | | RotatE | .051 | .029 | .051 | .091 | .309 | .293 | .317 | .336 | | ComplEx | .287 | .203 | .316 | .456 | .421 | .391 | .434 | .476 | | QuatE | .293 | .212 | .320 | .460 | .421 | .396 | .430 | .467 | | MuRP | .321 | .239 | .352 | .495 | .473 | .421 | .484 | .546 | | 5 ∗E | .323 | .240 | .355 | .501 | .449 | .418 | .462 | .510 | | REFH | .316 | .229 | .345 | .490 | .449 | .418 | .462 | .510 | | ROTH | .315 | .226 | .348 | .491 | .477 | .426 | .490 | .548 | | ATTH | .321 | .240 | .355 | .501 | .465 | .426 | .481 | .540 | | ItoˆE (R) | .330 | .242 | .361 | .508 | .455 | .404 | .480 | .548 | | ItoˆE (S − P) | .334 | .245 | .361 | .511 | .474 | .426 | .499 | .574 | ![7_image_0.png](7_image_0.png) Results Table 1 shows the results of ItoˆE (Poincare (P), Euclidean (R), Sphere (S)) and other models in low-dimensional embedding d = 32 on FB15K-237 and WN18RR. Note that d = 32 is a common practice of KGE literature for evaluation of the models in low dimensional embedding (Chami et al., 2020). According to our experiments, ItoˆE with the Poincaré ball outperforms all models on WN18RR dataset which contains mainly hierarchical relations such as *hypernym* and *part-of*. This dataset also contains relations forming loop structures such as *similar-to*. MuRP, REFH, ROTH, and ATTH utilize Poincaré ball with deterministic transitions (e.g., rotation, reflection, translation, and Affine mapping). ItoˆE with the stochastic transition on Poincaré ball outperforms all of these models on WN18RR across all metrics. This is especially visible by looking at Hits@3 and Hits@10. The results show the superiority of stochastic transitions over deterministic transitions to model various structures such as Hierarchical and loop structures in a low-dimensional space. Using FB15k-237, ItoˆE with Spherical manifold outperforms other competitors including the Hyperbolic models. Figure 8 presents performance per dimension. As shown in the figure, our model outperforms other KGE models in low dimensions. In high dimension, our model get competitive performance to other models on WN18RR dataset. Capturing hierarchical structure In this part, we generate tree structure in the embedding space by using the transition functions of the Itoˆ process, translation, and rotation. Starting from a point on ![7_image_1.png](7_image_1.png) the embedding space (i.e., root shown in red in Figure 4), a model generates n child for each node and traverses the vector space to generate the whole tree structure. As shown in Figure 4b, the Itoˆ process generates the tree-like structure by generating stochastic trajectories in the vector space. However, translation ( Figure 4c) and rotation (Figure 4d) generate a single trajectory which consequently cannot traverse the vector space to generate a tree structure. Figure 4a illustrates the evolution of tree structure in a sphere. The root is located on the center and the leaves are distributed on the border of the sphere. All the intermediate nodes on the lth level of the tree are distributed on a sphere with radius of rl, inside the main sphere where rl1 < rl2, l1 < l2. A trajectory (dashed line) is shown from root to leaf. 
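The toy NumPy sketch below mirrors the tree-generation experiment above: starting from a root embedding, every node is expanded by a drift-plus-diffusion step, which branches into a tree of distinct children, whereas setting the diffusion to zero (a pure translation) collapses all trajectories onto a single path. Dimensionality, branching factor, and noise scale are arbitrary illustrative choices.

```python
import numpy as np

def grow_tree(root, depth, n_children, drift, sigma, rng):
    levels = [[root]]
    for _ in range(depth):
        nxt = []
        for node in levels[-1]:
            for _ in range(n_children):
                noise = rng.normal(0.0, sigma, size=root.shape)  # diffusion term
                nxt.append(node + drift + noise)                 # one transition
        levels.append(nxt)
    return levels

rng = np.random.default_rng(0)
root, drift = np.zeros(2), np.array([1.0, 0.0])
stochastic = grow_tree(root, depth=3, n_children=2, drift=drift, sigma=0.5, rng=rng)
deterministic = grow_tree(root, depth=3, n_children=2, drift=drift, sigma=0.0, rng=rng)
print(len({tuple(x) for x in stochastic[-1]}), "distinct leaves with diffusion vs",
      len({tuple(x) for x in deterministic[-1]}), "distinct point(s) with sigma = 0")
```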
Capturing Complex structures Here we examine ItoˆE and several other Euclidean and nonEuclidean models to preserve various complex structures. We train the models on the substructures and then present the ranking results given by the models in a heatmap. For the graph in Figure 7, the heatmap is presented in Figure 6. As shown in the figure, ItoˆE gets a very low rank (mostly 1 which is ideal) to the graph edges. This shows that the model learns the structure. For other models including manifold-based models, e.g., AttH and RotH, the ranking for the edges are high, i.e., these models do not learn the structure. Figure 5 shows the ranking results of modeling heterogeneous structure on the example of a loop connected to a path. | Model | N-Parameters | Time | |---------|----------------|--------| | TransE | 1392766 | 40s | | MurE | 1393470 | 55s | | RefH | 1394196 | 120s | | AttH | 1395604 | 240s | | RotH | 1394196 | 120s | | ItoˆE | 1395100 | 74s | Memory and Time Complexity Table 2 shows the training time (per epoch) and the number of model parameters for ItoˆE, TransE, MurE, RefH, RotH and AttH. According to the table, ItoˆE has a close number of parameters to other state-of-the-art models. Among the models, TransE is the most efficient model in terms of the number of parameters. In addition, our model is competitive with other models in terms of training time. ## 7 Conclusion This paper presented ItoˆE, a knowledge graph embedding model that considers the stochastic transitions between nodes of a knowledge graph on a manifold. For doing so, ItoEˆ modeled the relations in a KG as stochastic processes so that the transitions between two nodes could only happen with an associated likelihood. Such stochastic transitions allowed ItoˆE to present multiple stochastic trajectories between any two embedded nodes and to capture more sophisticated structures in KGs, including loops connected to paths and is mathematically proved to be a generalization of several state-of-the-art models. Experiments on the synthesized datasets showed that the proposed model can capture heterogeneous complex structures and patterns. ## Limitations In this section, we discuss the limitation of the proposed model. Currently, the hidden layer of the two neural networks for drift and diffusion are shared between all entities and relations. This might cause over-fitting on relations that show simple structures if the neural networks are set to be very deep. On the other hand, if the neural networks are set to be shallow it might negatively influence modeling complex relations as the direction of trajectories will be limited. One possible solution is to cluster relations based on complexities and use separate neural networks for each cluster depending on the complexity of the corresponding relation. This requires, however, prior knowledge about the structure of different relations which we leave as future work. ## Ethics Statement The authors declare that we have no conflicts of interest. This article does not contain any studies involving business data and personal information. ## Acknowledgements The authors thank the International Max Planck Research School for Intelligent Systems (IMPRSIS) for supporting Bo Xiong. Bo Xiong is funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No: 860801. Mojtaba Nayyeri is funded by the German Federal Ministry for Economic Affairs and Climate Action under Grant Agreement Number 01MK20008F (Service-Meister). 
This research was partially funded by the Ministry of Science, Research, and the Arts (MWK) Baden-Württemberg, Germany, within the Artificial Intelligence Software Academy (AISA) and the German Research Foundation (DFG) via grant agreement number STA 572/18-1 (Open Argument Mining). We acknowledge the support of the Stuttgart Center for Simulation Science (SimTech). The authors would like to thank the reviewers for their constructive comments and suggestions. ## References Ivana Balazevic, Carl Allen, and Timothy Hospedales. 2019. Multi-relational poincaré graph embeddings. Advances in Neural Information Processing Systems, 32:4463–4473. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. *Advances in neural information processing systems*, 26. Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi, and Christopher Ré. 2020. Lowdimensional hyperbolic knowledge graph embeddings. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 6901–6914. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In *Thirty-second AAAI conference on artificial intelligence*. Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In *Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining*, pages 855–864. Zexi Huang, Arlei Silva, and Ambuj Singh. 2021. A broader picture of random-walk based graph embedding. In *Proceedings of the 27th ACM SIGKDD* conference on knowledge discovery & data mining, pages 685–695. Tobias Jahnke. 2016. Numerical methods in mathematical finance. Master's thesis, Karlsruhe Institute of Technology. Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and S Yu Philip. 2021. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems. Seyed Mehran Kazemi and David Poole. 2018. Simple embedding for link prediction in knowledge graphs. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 4289–4300. Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical tensor decomposition for knowledge base completion. In *International* Conference on Machine Learning, pages 2863–2872. PMLR. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In *Twentyninth AAAI conference on artificial intelligence*. Denis Lukovnikov, Asja Fischer, Jens Lehmann, and Sören Auer. 2017. Neural network-based question answering over knowledge graphs on word and character level. In *Proceedings of the 26th international* conference on World Wide Web, pages 1211–1220. Mojtaba Nayyeri, Sahar Vahdati, Can Aykul, and Jens Lehmann. 2021a. 5* knowledge graph embeddings with projective transformations. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 35, pages 9064–9072. Mojtaba Nayyeri, Chengjin Xu, Franca Hoffmann, Mirza Mohtashim Alam, Jens Lehmann, and Sahar Vahdati. 2021b. Knowledge graph representation learning using ordinary differential equations. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 9529–9548. Tu Dinh Nguyen, Dat Quoc Nguyen, Dinh Phung, et al. 2018. 
A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 327–333. Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016. A review of relational machine learning for knowledge graphs. *Proceedings* of the IEEE, 1(104):11–33. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In *Icml*. Maximillian Nickel and Douwe Kiela. 2017. Poincaré embeddings for learning hierarchical representations. Advances in neural information processing systems, 30:6338–6347. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In *Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data* mining, pages 701–710. Jan Portisch and Heiko Paulheim. 2022. Walk this way! entity walks and property walks for rdf2vec. arXiv preprint arXiv:2204.02777. Petar Ristoski and Heiko Paulheim. 2016. Rdf2vec: Rdf graph embeddings for data mining. In International Semantic Web Conference, pages 498–514. Springer. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in neural information processing systems, pages 926–934. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. *arXiv preprint* arXiv:1902.10197. Atsushi Suzuki, Yosuke Enokida, and Kenji Yamanishi. 2019. Riemannian transe: Multi-relational graph embedding in non-euclidean space. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd workshop on continuous vector space models and their compositionality, pages 57–66. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In *International conference on machine learning*, pages 2071– 2080. PMLR. Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724– 2743. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28. Melanie Weber and Maximilian Nickel. 2018. Curvature and representation learning: Identifying embedding spaces for relational data. *NeurIPS Relational* Representation Learning. Bo Xiong, Shichao Zhu, Mojtaba Nayyeri, Chengjin Xu, Shirui Pan, Chuan Zhou, and Steffen Staab. 2022. Ultrahyperbolic knowledge graph embeddings. In KDD, pages 2130–2139. ACM. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575. Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. 2016. Collaborative knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 353–362. Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. 2019. Quaternion knowledge graph embeddings. *arXiv* preprint arXiv:1904.10281. 
## A Appendix We organize the appendix as follows: We first present the proof of Theorem 1, and corollaries 1-4. We then discuss the effect of loss functions on deterministic models, followed by the results per manifold and hyperparameters specification. ## Proof Of Theorem 1 Proof. Here we prove that ItoˆE subsumes 5∗E (Nayyeri et al., 2021a). We start with the formulation of 5∗E. For simplicity, we remove the index of relation from the relation matrix in 5∗E and consider one-dimensional complex projective line. The following equation is modeling triple in the vector space by 5∗E e p tn+1≈ ηe p tn , η = a b c d, (7) e p tn = etn 1 , a, b, c, d, z ∈ C. Let η = ar + aI i br + bI i cr + cI i dr + dI i = ar br cr dr + aI bI cI dI i = ηr + ηI i, and e p tn = e r tn 1 + e I tn 0 = e pr tn + e pI tn . We have e p tn+1 = ηe p tn = (ηre pr tn−ηIe pI tn )+(ηre pI tn−ηIe pr tn )i. (8) We merge the matrices and rewrite the formulation as follows: e p tn+1 = ηr ηI e pr tn −e pI tn + ηr ηI e pI tn e pr tn i. (9) Let ηrI = ηr ηI , erI tn = e pr tn −e pI tn , eIr tn = e pI tn e pr tn . Therefore, we have e p tn+1 = ηrIe Ir tn + ηrIe Ir i. tn The above formulation can be rewritten in the vectored form as follows $e_{t_{n+1}}^{p}=\eta_{r I}\begin{pmatrix}e_{tn}^{rI}\\ e_{tn}^{Ir}\end{pmatrix}=\eta_{r I}e_{tn}^{IrI}$. The formulation of ItoE is $\begin{array}{l}\mathbf{e}_{t_{n+1}}\quad\approx\quad\quad\mathbf{e}_{t_{n}}\quad+\quad a_{r}(\mathbf{e}_{t_{n}},t_{n})\Delta t\quad+\\ b_{r}(\mathbf{e}_{t_{n}},t_{n})\Delta B(t_{n}).\end{array}$ If ∆t = 1, br(etn, tn) = 0, there is a neural network ar(etn, tn) that approximates the multivariate function (ηrI − I)e IrI tn with the error as close as zero due to universal approximation ability of the NNs. Therefore, ItoˆE can approximate the score of 5∗E with an arbitrary small error. Consequently, ItoˆE subsumes 5∗E. Because 5∗E subsumes TransE, RotatE, ComplEx, ItoˆE subsumes these models as well. We now prove that ItoˆE subsumes QuatE. Considering the formulation of ItoE, etn+1 = etn + ar(etn, tn)∆t + br(etn, tn)∆B(tn), and setting ∆t = 1, br(etn, tn) = 0, ar(etn, tn) = (R − I)e v, where e vis the vector representation of a Quaternion number, the assumption of QuatE is fulfilled by ItoˆE $$\mathbf{e}_{t_{n+1}}^{v}\approx\mathbf{R}\mathbf{e}_{t_{n}}^{v}.\qquad\qquad(10)$$ Therefore, ItôE subsumes QuatE (Zhang et al., 219). ## Proof Of Corollaries Proof. Here we present the proof of Corollaries 14. Because ItoˆE subsumes 5∗E, ComplEx, QuatE, RotatE, and TransE, it can encode all relational and structural patterns (symmetric, anti-symmetric, reflexive, transitive, inverse, and combination of loop and path) modeled by these models. Moreover, the ItoˆE model is fully expressive because it subsumes ComplEx which is fully expressive. Effect of Loss Function on Deterministic Models As mentioned in the paper, the models based on deterministic transitions such as TransE with etn + r = etn+1 provide a single trajectory between any two nodes. However, by using a loss function forcing an upper-bound for the score of positive samples we have etn + r = etn+1 + ϵ. This allows the models based on deterministic transitions to mitigate the problem of a single trajectory. However, the problem is not fully solved because the nodes after each transition are embedded very closely. In this way, there is a single trajectory that is retrievable by transition function in which the other embedded nodes are not reachable by using the transition function in the embedding space. 
In contrast, using the stochastic transitions, the model can learn at each embedded node etn via diffusion NN, br(etn, tn), the degree of branching, i.e., degree of diffusion. Therefore, different trajectories 7176 | Model | Dataset | Neg Samp. | batch_size | l_rate | reg_co | Epochs | |---------|-----------|-------------|--------------|----------|----------|----------| | ItoˆE | WN18RR | 500 | 500 | 0.001 | 0 | 300 | | ItoˆE | FB15k-237 | 500 | 50 | 0.05 | 0 | 300 | Table 3: Best hyperparameters found for ItoˆE Table 4: Variance of MRR for ItoˆE ![12_image_0.png](12_image_0.png) are learned between any two nodes which can be either very close or far. Variance in Itoˆ**E Performance** Table 4 shows ![12_image_1.png](12_image_1.png) the variance of ItoˆE with dimension 32 on WN18RR. As shown in the table, the variance of 10 times running on the model is very low. Therefore, the model obtains stable performance on different runs. ![12_image_2.png](12_image_2.png) Results On WN18RR Per Manifolds In this part, we analyze the performance of ItoˆE using various manifolds (Poincare ball, Hyperboloid, Euclidean, and Sphere) on WN18RR. Figure 9 illustrates the performance comparison according to different metrics namely MRR, Hits@1, Hits@3, Hits@10. The experiments have been done in a very low dimension of 32. According to the figure, ItoˆE with Poincare ball and Hyperboloid outperformed ItoˆE with Sphere and Euclidean manifold. This is consistent with the nature of the used KG where most relations are hierarchical in WN18RR. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitation section ✓ A2. Did you discuss any potential risks of your work? Ethic statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 6 And Appendix ✓ C1. 
Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 6 and Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 6 and Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
cao-zhao-2023-leveraging
Leveraging Denoised {A}bstract {M}eaning {R}epresentation for Grammatical Error Correction
https://aclanthology.org/2023.findings-acl.449
Grammatical Error Correction (GEC) is the task of correcting errorful sentences into grammatically correct, semantically consistent, and coherent sentences. Popular GEC models either use large-scale synthetic corpora or a large number of human-designed rules. The former is costly to train, while the latter requires quite a lot of human expertise. In recent years, AMR, a semantic representation framework, has been widely used by many natural language tasks due to its completeness and flexibility. A non-negligible concern is that AMRs of grammatically incorrect sentences may not be exactly reliable. In this paper, we propose AMR-GEC, a seq-to-seq model that incorporates denoised AMR as additional knowledge. Specifically, we design a semantic aggregated GEC model and explore denoising methods to make AMRs more reliable. Experiments on the BEA-2019 shared task and the CoNLL-2014 shared task have shown that AMR-GEC performs comparably to a set of strong baselines trained with a large amount of synthetic data. Compared with the T5 model with synthetic data, AMR-GEC can reduce the training time by 32% while inference time is comparable. To the best of our knowledge, we are the first to incorporate AMR for grammatical error correction.
# Leveraging Denoised Abstract Meaning Representation For Grammatical Error Correction Hejing Cao1,2**, Dongyan Zhao**1,2∗ 1 Wangxuan Institute of Computer Technology, Peking University 2 Center for Data Science, Peking University {caohejing,zhaody}@pku.edu.cn ## Abstract Grammatical Error Correction (GEC) is the task of correcting errorful sentences into grammatically correct, semantically consistent, and coherent sentences. Popular GEC models either use large-scale synthetic corpora or use a large number of human-designed rules. The former is costly to train, while the latter requires quite a lot of human expertise. In recent years, AMR, a semantic representation framework, has been widely used by many natural language tasks due to its completeness and flexibility. A non-negligible concern is that AMRs of grammatically incorrect sentences may not be exactly reliable. In this paper, we propose the AMR-GEC, a seq-to-seq model that incorporates denoised AMR as additional knowledge. Specifically, We design a semantic aggregated GEC model and explore denoising methods to get AMRs more reliable. Experiments on the BEA-2019 shared task and the CoNLL-2014 shared task have shown that AMR-GEC performs comparably to a set of strong baselines with a large number of synthetic data. Compared with the T5 model with synthetic data, AMR-GEC can reduce the training time by 32% while inference time is comparable. To the best of our knowledge, we are the first to incorporate AMR for grammatical error correction. ## 1 Introduction Nowadays, high performance of grammatical error correction model mainly depends on data augmentation (Kiyono et al., 2019; Grundkiewicz et al., 2019; Raffel et al., 2020; Wan and Wan, 2021; Wu and Wu, 2022; Zhang et al., 2022). According to the type of additional information, grammatical error correction models can be divided into dataenhanced models and knowledge-enhanced models. Data-enhanced models require millions of synthetic data, which is obtained by back-translation or directly adding noise. Training on these synthetic ![0_image_0.png](0_image_0.png) datasets is very time-consuming, which is unacceptable in some application scenarios. Knowledgeenhanced model is to artificially design a large number of grammatical rule templates, and add the templates as external knowledge to GEC model. This external knowledge is language-dependent and it requires the intervention of human grammar experts. Abstract Meaning Representation (AMR) is a type of rooted, labeled graph which contains semantic structures with fine-grained node and edge types. AMR breaks through the limitations of the traditional syntax tree structure and supports reentrancy. Figure 1 is a graph of sentence "*I don't* want to go to school on Sunday.". In AMR, *:arg0* is typically the agent, *:arg1* is typically the patient, and other arguments do not have standard definitions and may vary with the verb being annotated. Negative meaning is denoted as "-". Special keywords such as entity types, quantities and logical conjunctions are supported by AMR. AMR obtains a simple representation from natural language sentence and it is suitable for GEC as extra knowledge. A non-negligible concern is that AMRs of errorful sentences may not be exactly reliable. If these AMRs with errors are directly introduced ∗ Corresponding author: Dongyan Zhao. into the GEC model as additional information, it may confuse the model. 
We use a pre-trained AMR parser to predict AMR of erroneous sentences and corrected sentences separately on the BEA-19 development set. If two AMRs are completely consistent, we assume that the AMR of errorful sentences is reliable. After statistical analysis, we found that about half of the graphs are reliable. We designed a denoising semantic aggregated grammatical error correction model. Specifically, we added a graph aggregation encoder based on a sequence-to-sequence model. The graph encoder aims to update the representation of the sequence encoder by AMR semantic structure. Besides, we designed two mask strategies to reduce the dependence on the model graph information. We designed these mask strategies by granularity: node/edge level mask and subgraph level mask. Experiments have proved that the denoising semantic aggregated grammatical error correction model significantly improved the error correction accuracy. ## 2 Related Works Data-enhanced GEC models. Lots of works have found their way to incorporating additional data into GEC model. Kaneko et al. (2020) uses a pretrained mask language model in grammatical error correction by using the output of BERT as additional features in the GEC model. Kiyono et al. (2019) and Grundkiewicz et al. (2019) explore methods of how to generate and use the synthetic data and make use of Gigaword to construct hundreds of millions of parallel sentence pairs. Some works (Katsumata and Komachi, 2020, Pajak and Gonczarek, 2021, Rothe et al., 2021) give a strong baseline by finetuning BART (Lewis et al., 2020), T5 (Raffel et al., 2020) on a GEC corpus. Malmi et al. (2019) casts GEC as a text editing task. Zhao et al. (2019) and Panthaplackel et al. (2021) propose a copy-augmented architecture for the GEC task by copying the unchanged words and spans. Knowledge-enhanced GEC models. Wan and Wan (2021) use dependency tree as syntactic knowledge to guide the GEC model. Wu and Wu (2022) adds part-of-speech features and semantic class features to enhance the GEC model. Omelianchuk et al. (2020) design thousands of custom tokenlevel transformations to map input tokens to target corrections. Lai et al. (2022) proposes a multistage error correction model based on the previous model. Applications of AMR. Song et al. (2019) and Li and Flanigan (2022) incorporate AMR in neural machine translation. Bonial et al. (2020) makes use of AMR by abstracting the propositional content of an utterance in dialogue. Xu et al. (2021) constructs a dynamic semantic graph employing AMR to cope with Multi-hop QA problems. ## 3 Model We add a graph encoder based on Transformer to ![1_image_0.png](1_image_0.png) aggregate denoised semantic information. The architecture of AMR-GEC is shown on Figure 2. ## 3.1 Semantic Aggregated Encoder Transformer is an attention-based encoder-decoder model, where the encoder encodes the input sentence into a context vector, and the decoder converts the context vector into an output sentence. Formally, we denote the tokens of the sentence is Tn = {t1, t2*, ..., t*n}. Vinilla encoder-decoder model works as follows: $$\begin{array}{l}{{h_{1},h_{2},...,h_{n}=\mathrm{Enc}(t_{1},t_{2},...,t_{n})}}\\ {{y_{1},y_{2},...,y_{m}=\mathrm{Dec}(h_{1},h_{2},...,h_{n})}}\end{array}$$ $$\begin{array}{l}{(1)}\\ {(2)}\end{array}$$ We then designed a semantic graph encoder based on a graph attention network to incorporate semantic graph information. To preserve the information of the sequence encoder, we use a residual connection to combine the outputs of two encoders. 
$\begin{array}{c}\hat{y}_1,\hat{y}_2,...,\hat{y}_m=\text{GNN}(h_1,h_2,...,h_n)\\ y_i'=y_i\oplus\hat{y}_i,\;\;i=1,2,...,m\end{array}$ ### Denoising Function Masked Language Modeling (MLM) is a classic pre-trained model modeling method. The task of MLM is to mask some tokens with a special token mask and train the model to recover them. This allows the model to handle both the left and right context of the masked token. MLM can divided into five types: single word masking, phrase making, random span masking, entity masking, whole word masking. Referring to Bai et al. (2022), we use the mask strategy on AMR. We used two ways to add masks: node/edge level mask and sub-graph level mask. Node/edge level mask refers to mapping the nodes/edges in the AMR graph using a noise function to generate a graph with noise. Sub-graph level mask means randomly removing subgraphs and replacing them with a mask label. ## 3.3 Sequence-Amr Graph Construction In this section, we will show details about the graph encoder module. To preserve sequence information, we design a graph that fuses sequence and AMR. We first use the alignment tool JAMR to get the mapping from AMR node to sequence token. First connect the sequences through the special labels forward-label and backward-label respectively, and then map the edges of AMR to the sequence-AMR graph. Figure 3: sequence-AMR graph ![2_image_0.png](2_image_0.png) ## Algorithm 1 Graph Construction Require: AMR, sequence (x1,x2,...,xn), Aligner Ensure: sequence-AMR graph 1: amr2seq = Aligner(sequence, AMR) 2: graph= new Graph() 3: for i=1 to n-1 do 4: AddEdge(xi, xi+1, label-forward) 5: AddEdge(xi+1, xi, label-backward) 6: **end for** 7: for edge in AMR.edges() do 8: AddEdge(amr2seq[s], amr2seq[t], label) 9: **end for** 10: return graph ## 4 Experiments 4.1 Dataset CoNLL-2014. The CoNLL-2014 shared task test set contains 1,312 English sentences with error annotations by 2 expert annotators. Models are evaluated with M2 scorer (Dahlmeier and Ng, 2012) which computes a span-based F0.5-score. BEA-2019. The BEA-2019 test set consists of 4477 sentences and the outputs are scored via ERRANT toolkit (Felice et al., 2016, Bryant et al., 2017). The released data are collected from Write & Improve and LOCNESS dataset. ## 4.2 Baseline Model Following Rothe et al. (2021), we use T5 as the baseline model for GEC. ## 4.3 Amr Parsing And Alignment We adopt SPRING (Bevilacqua et al., 2021) as our AMR parsing model. SPRING performs nearly state-of-the-art AMR parsing by linearizing AMR to sequence and converting text-to-amr task to seqto-seq task. It obtained 84.5 Smatch F1 points on AMR 2.0 dataset.We use JAMR (Flanigan et al., 2014) to align the AMRs to sentences. JAMR is an alignment-based AMR parsing model that finds a maximum spanning, connected subgraph as an optimization problem. We use the alignment for graph information aggregation. ## 4.4 Others Our models were trained on a single GPU (GeForce GTX 1080), and our implementation was based on publicly available code1. we set the batch_size to 6 and the learning_rate to 2e-5. We use pytorch_geometric2to implement the semantic aggregated encoder. ## 5 Results And Analysis 5.1 Results Table 1 shows the results of the BEA-test and CoNLL-2014 dataset. 1) Compared with the model without synthetic data, the single model of AMRGEC is 2.8 points and 1.8 points higher in BEA19 and CoNLL-14, respectively. Ensemble models give similar results. 
2) Compared with models using synthetic data, AMR-GEC gives comparable or even higher F-scores, except for GECToR (Omelianchuk et al., 2020), which uses both synthetic data and human knowledge. For example, our single model achieves 68.4 on BEA-19, higher than the models by Kiyono et al. (2019), Kaneko et al. (2020), and Rothe et al. (2021). This shows that semantic graphs, as additional knowledge for GEC, have a comparative advantage over synthetic data. Our ensemble model does not show significant improvements over the single model, probably because more optimal ensemble strategies are needed, such as averaging generation probabilities (Omelianchuk et al., 2020) or ensemble editing (Pajak and Gonczarek, 2021).

| Models | Synthetic data | BEA-test P | BEA-test R | BEA-test F0.5 | CoNLL-14 P | CoNLL-14 R | CoNLL-14 F0.5 |
|---|---|---|---|---|---|---|---|
| Katsumata and Komachi (2020) | - | 68.3 | 57.1 | 65.6 | 69.3 | 45.0 | 62.6 |
| Kiyono et al. (2019) | ✓ | 69.5 | 59.4 | 64.2 | 67.9 | 44.1 | 61.3 |
| Kaneko et al. (2020) | ✓ | 67.1 | 61.0 | 65.6 | 69.2 | 45.6 | 62.6 |
| Rothe et al. (2021) | ✓ | - | - | 67.1 | - | - | 65.1 |
| Omelianchuk et al. (2020) | ✓ | 79.2 | 53.9 | 72.4 | 77.5 | 40.1 | 65.3 |
| AMR-GEC | - | 71.5 | 58.3 | 68.4 | 70.2 | 48.3 | 64.4 |
| Katsumata and Komachi (2020) | - | 68.8 | 57.1 | 66.1 | 69.9 | 45.1 | 63.0 |
| Kiyono et al. (2019) | ✓ | 74.7 | 56.7 | 70.2 | 67.3 | 44.0 | 67.9 |
| Omelianchuk et al. (2020) | ✓ | 79.4 | 57.2 | 73.7 | 78.2 | 41.5 | 66.5 |
| AMR-GEC | - | 73.5 | 55.9 | 69.1 | 70.3 | 48.2 | 64.4 |

Table 1: Precision, recall, and F0.5 on BEA-test and CoNLL-14. The first six rows are single models and the last four rows are ensemble models.

## 5.2 Advantages of AMR

We compared the most common error types in BEA-test (except for OTHER) between T5-GEC and AMR-GEC. As shown in Table 2, the F0.5 scores of PUNCT and PREP for AMR-GEC are 4-6 points higher than those of T5-GEC. AMR drops prepositions, tense, and punctuation to capture simple, basic meanings, and exactly these error types are the most common errors in GEC scenarios. With such errors abstracted away in the AMR, sentences generated from AMR are more likely to yield correct results. Besides, graphs are good at handling long-distance dependencies. The pain point of a sequence model is that it is difficult to attend to long-distance dependent information. In AMR, associated concept nodes are explicitly connected with edges, making it easier for the model to focus on long-distance information.

| Error Type | T5-GEC P | T5-GEC R | T5-GEC F0.5 | AMR-GEC P | AMR-GEC R | AMR-GEC F0.5 |
|---|---|---|---|---|---|---|
| PUNCT | 79.8 | 49.4 | 71.0 | 78.7 | 72.9 | 77.4 |
| DET | 78.6 | 64.8 | 75.4 | 78.6 | 65.8 | 75.7 |
| PREP | 72.9 | 48.0 | 66.0 | 73.1 | 61.5 | 70.4 |
| ORTH | 84.6 | 55.7 | 76.7 | 69.5 | 62.9 | 68.1 |
| SPELL | 83.0 | 58.3 | 76.5 | 80.9 | 61.9 | 76.2 |

Table 2: BEA-test scores for the top five error types, except for OTHER

## 6 Ablation Study

## 6.1 Graph Neural Network Ablation Results

Graph neural networks have proven effective for problems over unstructured data. However, few studies have analyzed the effect of different GNN-encoded AMRs on natural language generation tasks. To study the differences among graph neural networks for encoding AMR, we carry out a set of experiments. We select GCN, GAT, and DeepGCN as the graph encoder variants and conduct experiments on the BEA-2019 dataset while keeping the number of model parameters the same. We do not use the denoising method in this ablation study.
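To make the encoder variants in this ablation concrete, the sketch below shows how a single graph layer over the sequence-AMR graph (Algorithm 1) can update the sequence-encoder states and fuse them back through the residual connection of Section 3.1. This is a hypothetical reconstruction, not the authors' released code: the hidden size, the toy edge construction, and the use of `GCNConv` are assumptions (the paper states only that pytorch_geometric implements the semantic aggregated encoder).

```python
# Hypothetical sketch of the graph-encoder variants compared in this ablation (GCN shown).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv


class SemanticGraphEncoder(nn.Module):
    """Updates sequence-encoder states by message passing over the sequence-AMR graph,
    then fuses them back with a residual connection (Section 3.1)."""

    def __init__(self, hidden: int = 768):  # hidden size is an assumption
        super().__init__()
        self.gnn = GCNConv(hidden, hidden)  # swap in GATConv / DeepGCN-style layers for the ablation

    def forward(self, h_seq: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h_seq: (n_tokens, hidden) states from the sequence (T5) encoder
        # edge_index: (2, n_edges) sequence-AMR graph built as in Algorithm 1 (JAMR-aligned)
        h_graph = self.gnn(h_seq, edge_index)
        return h_seq + h_graph  # residual fusion of the two encoders


# Toy usage: a 5-token sentence with forward/backward sequence edges plus one AMR edge.
h_seq = torch.randn(5, 768)
seq_edges = [(i, i + 1) for i in range(4)] + [(i + 1, i) for i in range(4)]
amr_edges = [(0, 3)]  # an aligned AMR relation, purely illustrative
edge_index = torch.tensor(seq_edges + amr_edges, dtype=torch.long).t()
fused = SemanticGraphEncoder()(h_seq, edge_index)  # (5, 768), fed to the decoder
```

Replacing `GCNConv` with a GAT or DeepGCN layer corresponds to the encoder variants compared in Table 3 below.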
| Model | P | R | F0.5 | |-------------|-------|-------|--------| | T5-GEC | 71.47 | 53.46 | 66.96 | | AMR-GCN | 72.95 | 52.17 | 67.57 | | AMR-GAT | 68.26 | 63.41 | 67.23 | | AMR-DeepGCN | 66.34 | 62.57 | 65.55 | Table 3: Results on BEA-test with GCN, GAT, DeepGCN as AMR encoders Table 3 shows the results of BEA-test with different graph encoders. We can draw these conclusions: 1) Even if the AMRs of the errorful sentences are not reliable, they still benefit GEC. Compared with T5-GEC, AMR-GCN and AMR-GAT are about 0.2 and 0.4 points higher respectively. This shows that the model makes use of the semantic information and connection relationship of reliable AMR. 2) AMR-GCN gives the best performance among the three models. When picking a graph encoder, the GCN model is sufficient to encode the semantic structure information of AMR. It is worth noting that GAT and DeepGCN have high recall value and low precision. In the grammatical error correction task, precision measures the error correction result. Generally speaking, precision is more important than recall. In the grammatical error correction task, most of the errors are local errors, and the semantic information required for grammatical error correction in AMR can be captured without a deeper graph convolution model. ## 6.2 Denoise Method Ablation Study | Model | P | R | F0.5 | |---------------------|-------|-------|--------| | T5-GEC | 71.47 | 53.46 | 66.96 | | AMR-GCN | 72.95 | 52.17 | 67.57 | | AMR-GCN (node/edge) | 73.52 | 55.91 | 69.14 | | AMR-GCN (subgraph) | 72.12 | 57.60 | 68.60 | Table 4: Results on BEA-test with node/edge and subgraph denoising methods Table 4 shows the results of BEA-test with node/edge and subgraph denoising methods. The node/edge level denoising strategy and the subgraph level denoising strategy increased by 1.57 and 1.03 points, respectively. Node level mask strategy performs better because the subgraph may mask too much information. ## 7 Conclusion In this paper, We propose a denoising semantic aggregated grammatical error correction model, AMR-GEC, leveraging AMR as external knowledge to the GEC. We believe it gives a strong baseline for incorporating AMR in GEC. ## Limitations In this paper, we leverage AMR to the GEC model as external knowledge, and achieve a high F-score on single model. However, we do not use R2L reranking, model ensemble and other methods to ensemble single model and compare them with state-of-the-art ensemble models. Our aim is to provide a strong baseline for incorporating AMR in GEC, so it is easy to generalize AMR-GEC to ensemble models. ## Ethics Statement The training corpora including the Lang-8, NUCLE and the BEA-2019 test data and CoNLL-2014 test data used for evaluating our framework are publicly available and don't pose privacy issues. The algorithm that we propose does not introduce ethical or social bias. ## Acknowledgements We would like to thank the anonymous reviewers for their constructive comments. We would like to express appreciation to Yansong Feng for his insightful suggestions on the algorithm framework. This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106600). ## References Xuefeng Bai, Yulong Chen, and Yue Zhang. 2022. Graph pre-training for AMR parsing and generation. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 6001–6015, Dublin, Ireland. Association for Computational Linguistics. 
Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline. In *Proceedings of AAAI*. Claire Bonial, Lucia Donatelli, Mitchell Abrams, Stephanie M. Lukin, Stephen Tratz, Matthew Marge, Ron Artstein, David Traum, and Clare Voss. 2020. Dialogue-AMR: Abstract Meaning Representation for dialogue. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 684–695, Marseille, France. European Language Resources Association. Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In *Proceedings of the 55th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers), pages 793–805, Vancouver, Canada. Association for Computational Linguistics. Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In *Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 568–572, Montréal, Canada. Association for Computational Linguistics. Mariano Felice, Christopher Bryant, and Ted Briscoe. 2016. Automatic extraction of learner errors in ESL sentences using linguistically enhanced alignments. In *Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics:* Technical Papers, pages 825–835, Osaka, Japan. The COLING 2016 Organizing Committee. Jeffrey Flanigan, Sam Thomson, Jaime G Carbonell, Chris Dyer, and Noah A Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In *Proceedings of the 52nd Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436. Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In *Proceedings of the Fourteenth* Workshop on Innovative Use of NLP for Building Educational Applications, pages 252–263, Florence, Italy. Association for Computational Linguistics. Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 4248–4254, Online. Association for Computational Linguistics. Satoru Katsumata and Mamoru Komachi. 2020. Stronger baselines for grammatical error correction using a pretrained encoder-decoder model. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 827–832, Suzhou, China. Association for Computational Linguistics. Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical error correction. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1236–1242, Hong Kong, China. Association for Computational Linguistics. Shaopeng Lai, Qingyu Zhou, Jiali Zeng, Zhongli Li, Chao Li, Yunbo Cao, and Jinsong Su. 2022. Typedriven multi-turn corrections for grammatical error correction. 
In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3225–3236, Dublin, Ireland. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Changmao Li and Jeffrey Flanigan. 2022. Improving neural machine translation with the Abstract Meaning Representation by combining graph and sequence transformers. In *Proceedings of the 2nd Workshop* on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022), pages 12–21, Seattle, Washington. Association for Computational Linguistics. Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5054–5065, Hong Kong, China. Association for Computational Linguistics. Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR - grammatical error correction: Tag, not rewrite. In *Proceedings of the Fifteenth Workshop* on Innovative Use of NLP for Building Educational Applications, pages 163–170, Seattle, WA, USA → Online. Association for Computational Linguistics. Krzysztof Pajak and Adam Gonczarek. 2021. Grammatical error correction with denoising autoencoder. International Journal of Advanced Computer Science and Applications, 12(8). Sheena Panthaplackel, Miltiadis Allamanis, and Marc Brockschmidt. 2021. Copy that! editing sequences by copying spans. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 13622–13630. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Sascha Rothe, Jonathan Mallinson, Eric Malmi, Sebastian Krause, and Aliaksei Severyn. 2021. A simple recipe for multilingual grammatical error correction. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 702–707, Online. Association for Computational Linguistics. Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using AMR. *Transactions of the Association for Computational Linguistics*, 7:19–31. Zhaohong Wan and Xiaojun Wan. 2021. A syntaxguided grammatical error correction model with dependency tree correction. *arXiv preprint* arXiv:2111.03294. Xiuyu Wu and Yunfang Wu. 2022. From spelling to grammar: A new framework for chinese grammatical error correction. *arXiv preprint arXiv:2211.01625*. Weiwen Xu, Huihui Zhang, Deng Cai, and Wai Lam. 2021. Dynamic semantic graph construction and reasoning for explainable multi-hop science question answering. *arXiv preprint arXiv:2105.11776*. Yue Zhang, Bo Zhang, Zhenghua Li, Zuyi Bao, Chen Li, and Min Zhang. 2022. 
SynGEC: Syntax-enhanced grammatical error correction with a tailored GECoriented parser. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 2518–2531, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156–165, Minneapolis, Minnesota. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? "Limitations". ✓ A2. Did you discuss any potential risks of your work? "1 Introduction", "2 Related works". ✓ A3. Do the abstract and introduction summarize the paper's main claims? "Abstract", "1 Introduction". ✓ A4. Have you used AI writing assistants when working on this paper? We used Grammarly to correct the grammar of the full paper. ## B ✓ **Did You Use Or Create Scientific Artifacts?** "4 Experiments". ✓ B1. Did you cite the creators of artifacts you used? "4 Experiments". ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? "4 Experiments". ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? "4 Experiments", "5 Results and Analysis". ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? "4 Experiments". ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? "4 Experiments". ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. "4 Experiments". ## C ✓ **Did You Run Computational Experiments?** "4 Experiments". ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? "4 Experiments". The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? "4 Experiments". ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? "5 Results and Analysis". ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? "4 Experiments". D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
xu-etal-2023-prediction
Prediction and Calibration: Complex Reasoning over Knowledge Graph with Bi-directional Directed Acyclic Graph Neural Network
https://aclanthology.org/2023.findings-acl.450
Answering complex logical queries is a challenging task for knowledge graph (KG) reasoning. Recently, query embedding (QE) has been proposed to encode queries and entities into the same vector space, and obtain answers based on numerical computation. However, such models obtain the node representations of a query only based on its predecessor nodes, which ignore the information contained in successor nodes. In this paper, we proposed a Bi-directional Directed Acyclic Graph neural network (BiDAG) that splits the reasoning process into prediction and calibration. The joint probability of all nodes is considered by applying a graph neural network (GNN) to the query graph in the calibration process. By the prediction in the first layer and the calibration in deep layers of GNN, BiDAG can outperform previous QE based methods on FB15k, FB15k-237, and NELL995.
# Prediction And Calibration: Complex Reasoning Over Knowledge Graph With Bi-Directional Directed Acyclic Graph Neural Network Yao Xu1,2, Shizhu He1,2, Li Cai3, Kang Liu1,2, Jun Zhao1,2 1 The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China 2 School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China 3 Meituan, Beijing, China {yao.xu, shizhu.he, kliu, jzhao}@nlpr.ia.ac.cn, caili03@meituan.com ## Abstract Answering complex logical queries is a challenging task for knowledge graph (KG) reasoning. Recently, query embedding (QE) has been proposed to encode queries and entities into the same vector space, and obtain answers based on numerical computation. However, such models obtain the node representations of a query only based on its predecessor nodes, which ignore the information contained in successor nodes. In this paper, we proposed a Bi-directional Directed Acyclic Graph neural network (BiDAG) that splits the reasoning process into prediction and calibration. The joint probability of all nodes is considered by applying a graph neural network (GNN) to the query graph in the calibration process. By the prediction in the first layer and the calibration in deep layers of GNN, BiDAG can outperform previous QE based methods on FB15k, FB15k-237, and NELL995. ## 1 Introduction Knowledge Graphs (KGs) organize world knowledge as interlinked triples which describe entities and their relationships (Ji et al., 2020). Compared with link prediction (Rossi et al., 2021), answering logical queries (i.e., complex query answering, CQA (Wang et al., 2021), as shown in Figure 1 (A)) is a more challenging task because it needs to perform first-order logic (FOL) operators such as conjunction (∧), disjunction (∨), and negation (¬). Recently, Query Embedding (QE) models (Hamilton et al., 2018; Ren et al., 2020) have been proposed to jointly encode logical queries and entities into the same vector space, and then retrieve answers (entities) based on the similarity scores. Although QE models can obtain answers in linear time and implicitly reason over incomplete KGs by iteratively predicting the representation of intermediate and target nodes, such models obtain the representation of the current node only based on its ![0_image_0.png](0_image_0.png) Figure 1: An Example and its corresponding computation graph of CQA. predecessor nodes, which causes (1) The joint probability of all nodes in the query graph is ignored. Take the example in Figure 1, the probability distribution of node V 1 will be more concentrated in Japan and *China* after considering node A1. (2) The information contained in successor nodes is ignored. As shown in Figure 1, the type of node V 1 can only be *country* after considering the successor relation *nationality*.) To address the above drawbacks, in this paper, we propose a novel QE based method called Bidirectional Directed Acyclic Graph neural network (BiDAG), which splits the reasoning process into the following two processes: (1) **Prediction** is used to obtain the initial representation of nodes by aggregating the information of predecessor nodes, which is similar to previous QE models. (2) **Calibration**. In this process, the original unidirectional query graph is extended to a bidirectional graph, then we apply GNN to the bidirectional graph. 
In this way, BiDAG can take the joint probability into account, as each node is continuously calibrated by information of its predecessor and successor nodes. Our contributions can be summarized as follows: (1) We propose a framework that predicts first and then calibrates in CQA, which enables the model to take the joint probability of all nodes into account. (2) We conducted experiments on three standard benchmarks, and show that calibration can improve model performance significantly. The source codes and data can be found on https://github.com/ YaooXu/BiDAG. ## 2 Related Work Modeling entity and query representations and logical operators are critical points of QE models. GQE (Hamilton et al., 2018) answers the conjunctive queries by representing queries and entities as points in Euclidean space. To represent queries with a large set of answer entities, Query2Box (Ren et al., 2020) utilized hyper-rectangles to encode queries. By converting union queries into Disjunctive Normal Form (DNF) (Davey and Priestley, 2002), Query2Box can handle arbitrary existential positive first-order (EPFO) queries (i.e., queries that include any set of ∧, ∨, ∃). To further support the negation operator (¬), BetaE (Ren and Leskovec, 2020) was proposed to support a full set of operations in FOL by encoding entities and queries into Beta distributions. MLPMix (Amayuelas et al., 2022) utilized MLP-mixer (Tolstikhin et al., 2021) to model logical operators. By encoding each query into multiple points in the vector space, Query2Particles (Bai et al., 2022) can retrieve a set of diverse answers from the embedding space. In this paper, we not only predict the intermediate and target node representations but also constantly calibrate them by modeling the joint probability of all nodes in the query graph. ## 3 Preliminary In this section, we formally describe the task of complex query answering over KGs. We denote a KG as G = (V, R), where v ∈ V represents an entity, and each r ∈ R represents a binary function as r : *V × V → {*0, 1} which indicates whether a directed relationship r exists between a pair of entities. First-order logic queries The complex queries in KGs are described in logic form with first-order logic (FOL) operators such as existential quantification (∃), conjunction (∧), disjunction (∨), and negation (¬). A complex query q consists of a set of anchor entities Va ⊆ V, some existential quantified variables V1*, ...V*k, and a single target variable V?. The disjunctive normal form (DNF) of a FOL ![1_image_0.png](1_image_0.png) query q is defined as follows: $$q[V_{7}]=V_{7}:\,\exists\,V_{1},...,V_{k}\,:(e_{11}\wedge...\wedge e_{1n_{1}})\vee...$$ $$\forall(e_{m1}\wedge...\wedge e_{mn_{m}})$$ where each eij represents a literal containing anchor node or variables, i.e., eij = r(va, V ′) or r(*V, V* ′), where va ∈ Va, V ∈ {V1, ...Vk}, V ′ ∈ {V?, V1*, ...V*k}. The goal of CQA is finding the answer set S = {v|v ∈ V, q[v] = 1}. Computation Graph Each logical query can convert to a corresponding computation graph in the form of directed acyclic graph (DAG, as shown in Figure 1 (B)), where each node corresponds to an entity, and each edge corresponds to a logical operation. The logical operations are defined as follows. (1) **Relation projection**: Given a set of entities S ⊆ V and a relation r ∈ R, the relation projection will return entities ∪v∈SPr(v) related to v ∈ S via r, where Pr(v) = {v′ ∈ V : r(*v, v*′) = 1}. 
(2) **Intersection/union**: Given sets of entities $\{S_1, ..., S_n\}$, compute their intersection $\cap_{i=1}^{n} S_i$ or union $\cup_{i=1}^{n} S_i$. It should be noted that, in QE models, all these operations are executed in the embedding space. Thus, we can obtain the target node representation by iteratively computing node representations following the neural logic operators in the DAG.

## 4 Bi-Directional Directed Acyclic Graph Neural Network

The key idea of BiDAG is to use information from predecessor nodes to obtain the current node representation and then calibrate that representation with global information, as shown in Figure 2. Specifically, BiDAG includes two modules: 1) a representation prediction module; 2) a **representation calibration** module. In the view of GNN, BiDAG can be regarded as a stack of one prediction module (the first layer) and multiple calibration modules (the deep layers).

## 4.1 Representation Prediction

In this module, we define the neural logic operations. We obtain the representation of each node by applying logical operations to the representations of its predecessor nodes.

**Projection** Given a node embedding h and an edge embedding r, the projection operator P outputs a new node embedding h′ = P(h, r). Compared with the geometric projection operator and the multi-layer perceptron (MLP) used in previous works (Hamilton et al., 2018; Ren and Leskovec, 2020), we use a gating mechanism to dynamically adjust the transformation of each node embedding under the specific relation, implemented with Gated Recurrent Units (GRU) (Cho et al., 2014): h′ = GRU(r, h), where r, h, and h′ are treated as the input, past state, and updated state/output of a GRU.

**Intersection** We model the intersection of a set of query embeddings {q1, ..., qn} as their weighted sum, which can be regarded as performing set intersection in the embedding space. We implement it with an attention mechanism:

$$\mathbf{q}_{inter}=\sum_{i}\alpha_{i}\cdot\mathbf{q}_{i},\ \ \alpha_{i}=\frac{\exp(\mathrm{MLP}(\mathbf{q}_{i}))}{\sum_{j}\exp(\mathrm{MLP}(\mathbf{q}_{j}))}\quad(2)$$

where $\mathbf{q}_{inter}$ is the intersection of these query embeddings, $\alpha_i$ is the weight of query embedding $\mathbf{q}_i$, and MLP is a multi-layer perceptron that takes $\mathbf{q}_i$ as input and outputs a single attention scalar.

**Union** Following Ren, Hu, and Leskovec (2020), we handle queries with union operators by transforming them into the equivalent Disjunctive Normal Form (DNF). By doing so, the original query is transformed into the union of n conjunctive queries {q^1, ..., q^n}, none of which contains a union operator. We can then apply the existing operators to obtain the embeddings of these conjunctive queries as {q^1, ..., q^n}. The distance between a query q and an answer entity e is defined as:

$$d(q,e)=\min(\{\mathrm{sim}(\mathbf{q}^{1},\mathbf{e}),...,\mathrm{sim}(\mathbf{q}^{n},\mathbf{e})\})\quad(3)$$

where {q^1, ..., q^n} are the embeddings of these conjunctive queries, $\mathbf{e}$ is the embedding of entity e, and sim is a similarity function such as the cosine function.

## 4.2 Representation Calibration

In this module, the representation of each node is calibrated continuously using context information contained in its predecessor and successor nodes, which addresses the drawback of ignoring the joint probability of all nodes.
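Before turning to the calibration mechanism, the following sketch illustrates the prediction-layer operators defined above: the GRU-based projection h′ = GRU(r, h), the attention-based intersection of Eq. (2), and a DNF-style query-entity distance in the spirit of Eq. (3). It is an illustrative sketch rather than the released BiDAG implementation; the attention-MLP architecture and the exact similarity-to-distance conversion are assumptions (the embedding size of 400 follows Appendix B).

```python
# Illustrative sketch of the prediction-layer operators (not the released BiDAG code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PredictionOperators(nn.Module):
    def __init__(self, dim: int = 400):
        super().__init__()
        self.gru = nn.GRUCell(input_size=dim, hidden_size=dim)  # projection: h' = GRU(r, h)
        self.att_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))  # assumed MLP

    def project(self, h: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        # Relation embedding r is the GRU input; node embedding h is the past state.
        return self.gru(r, h)

    def intersect(self, qs: torch.Tensor) -> torch.Tensor:
        # qs: (n, dim) embeddings of the branches to intersect; weighted sum as in Eq. (2).
        alpha = F.softmax(self.att_mlp(qs), dim=0)  # (n, 1) attention weights
        return (alpha * qs).sum(dim=0)              # (dim,) intersection embedding


def dnf_distance(conj_queries: torch.Tensor, e: torch.Tensor) -> torch.Tensor:
    # Distance of an entity to the closest conjunctive branch (one possible reading of Eq. (3)),
    # here using cosine similarity converted to a distance.
    sims = F.cosine_similarity(conj_queries, e.unsqueeze(0), dim=-1)  # (n,)
    return (1.0 - sims).min()


ops = PredictionOperators(dim=400)
h_anchor, r = torch.randn(1, 400), torch.randn(1, 400)
v1 = ops.project(h_anchor, r)                                   # one projection step
q_inter = ops.intersect(torch.cat([v1, torch.randn(1, 400)], dim=0))
d = dnf_distance(torch.stack([q_inter, v1.squeeze(0)]), torch.randn(400))
```

The calibration layers described next take the node embeddings produced by these operators as their layer-0 input and repeatedly refine them with messages from both predecessors and successors.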
Context information aggregating is completed by multi-head attention mechanism (Vaswani et al., 2017) in GNN, which is first introduced by GAT (Velickovic et al., 2018). Compared to the attention mechanism used in GAT which uses a shared linear transformation for all nodes. We make the following improvements: (1) We extend the graph attention mechanisms to handle directed relational graphs like KGs; (2) We introduce three weight matrices Q ∈ Rd×d, K,V ∈ Rd×2das query, key, and value matrix to enable the model to capture the higher-level information among neighbor nodes. (3) To enable the model to choose what to remain and update, we use GRU to update node representation in calibration, which is first used by (Li et al., 2017). (4) To avoid the calibrated representation being too different from the original representation, we adopt residual connection (He et al., 2015) to make adjustments to the original representation at each step. The representation for node j at (t + 1)-th calibration defined formally as follows (for simplicity, we only consider the single-head self-attention): $$\mathbf{h}_{j}^{t+1}=\mathbf{h}_{j}^{t}+GRU(\sum_{i\in\mathcal{N}(j)}\alpha_{i,j}\mathbf{V}([\mathbf{h}_{i}^{t}\,\|\,\mathbf{e}_{i,j}]),\mathbf{h}_{j}^{t}),\tag{4}$$ $$\alpha_{i,j}=\frac{\exp(\text{LeakyReLU}(w_{i,j}))}{\sum_{k\in\mathcal{N}(j)}\exp{(\text{LeakyReLU}(w_{i,k}))}},$$ (5) $$w_{i,j}=\frac{(\mathbf{Q}\mathbf{h}_{i}^{t})^{T}(\mathbf{K}[\mathbf{h}_{j}^{t}\,\|\,\mathbf{e}_{i,j}])}{\sqrt{d}}.\tag{6}$$ where ∥ represents the concatenation operation, h t j is the representation for node j after t-th calibration, ei,j is the representation of edge from node i to j, αi,j is the attention coefficients, N (j) is the neighbor nodes of node j. ## 4.3 Model Training Our objective is to minimize the distance between the query embedding and its answers while maximizing the distance between the query embedding and other random entities via negative sampling (Bordes et al., 2013), which we define as follows: $$L=-l o g\,\sigma(\gamma-d(q,e))-\sum_{j=1}^{k}\frac{1}{k}l o g\,\sigma(d(q,e_{j})-\gamma)\,\,\,(7)$$ FB15k-237 FB15k NELL ![3_image_0.png](3_image_0.png) Model 1p 2p 3p 2i 3i pi ip 2u up avg avg avg GQE 35.0 7.2 5.3 23.3 34.6 16.5 10.7 8.2 5.7 16.3 28.0 18.6 Q2B 40.6 9.4 6.8 29.5 42.3 21.2 12.6 11.3 7.6 20.1 38.0 22.9 BetaE 39.0 10.9 10.0 28.8 42.5 22.4 12.6 12.4 9.9 20.9 41.6 24.6 Q2P 39.1 11.4 10.1 32.3 47.7 24.0 14.3 8.7 9.1 21.9 46.8 25.5 MLPMix 42.7 11.5 9.9 33.5 46.8 **25.4** 14.0 14.0 9.2 22.9 43.4 27.4 BiDAG (w/o res) 43.4 **12.3** 10.1 34.9 47.7 22.8 14.3 14.4 10.2 23.3 46.9 28.4 BiDAG (w/ res) **43.7** 12.0 10.2 **35.0 48.8** 24.8 14.9 14.5 10.2 23.8 48.3 **28.9** where ej represents a random negative sample, γ represents the margin, d(*q, e*) represents the distance between query q and entity e. ## 5 Experiment 5.1 Experimental Setup Datasets and Evaluation Protocol We conduct experiments on three public KGs: FB15k (Bordes et al., 2013), FB15K-237 (Toutanova and Chen, 2015), and NELL995 (Xiong et al., 2017). For a fair comparison, we adopt the logical queries generated by Ren and Leskovec (2020) in model training and testing. In this paper, similar to Ren, Hu, and Leskovec (2020), we consider nine query types for evaluation. For these nine query types, we utilize the same evaluation protocol as Query2Box (Ren et al., 2020). Details about these datasets and query types can be found in Appendix A. 
Comparison with Baselines First, we compare BiDAG with GQE, Q2B, BetaE, Q2P, and MLPMix on the EPFO queries (containing only ∧, ∃, and ∨). The results are reported in Table 1. More details can be found in Appendix B. From the table, we can find that: (1) BiDAG demonstrates an average relative improvement in Mean Reciprocal Rank (MRR) of 3.2%, 3.9%, and 5.4% over previous QE based models on the FB15k, FB15k-237, and NELL995 datasets, respectively. (2) Residual connection can improve model performance consistently on all datasets, which means residual connection is essential in the calibration process. Even with the naive strategy that represents queries as point vectors like GQE, our BiDAG achieves a significant performance gain compared with all baselines. Furthermore, BiDAG also outperforms well on conjunctive queries (2i/3i). In our opinion, the main reason is that the target node has more processor nodes which will provide more information for calibrating. All these results demonstrate that calibration is helpful in complex query answering. Ablation Study for BiDAG To better demonstrate the effectiveness of bi-directional calibration (BC), we conduct further ablations studies by adopting different settings on FB15k. The experimental results are demonstrated in Table 2. From the table, we can find that compared to BiDAG-0BC (model without calibration), calibration can improve performance significantly. Besides, the significant improvement on multi-hop queries (2p/3p) demonstrates that calibration can also effectively alleviate the error cascading. Further study the effect of calibration To further investigate how calibration affects the node representations in each layer, we record the relative change of the calibrated representations to the initial representations (layer-0 representations obtained by the prediction module), which is defined as follows: $$c^{t}={\frac{\|\mathbf{h}_{t g t}^{t}-\mathbf{h}_{t g t}^{0}\|_{2}}{\|\mathbf{h}_{t g t}^{0}\|_{2}}}\qquad\qquad(8)$$ Method 1p 2p 3p 2i 3i pi ip 2u up avg BiDAG-0BC 75.2 27.6 23.2 61.1 71.4 46.6 29.2 46.4 24.1 45.0 BiDAG-1BC 76.5 28.0 23.8 63.4 73.4 45.8 32.3 48.0 25.4 46.3 BiDAG-2BC 77.8 29.3 24.9 64.3 73.8 46.2 33.3 49.6 26.7 47.3 BiDAG-3BC 78.6 31.0 25.3 65.2 74.4 46.6 35.3 50.8 27.8 **48.3** where h 0 tgt is the initial representation for the target node, h ttgt is the representation for the target node after t-th calibration. The larger the c t value, the greater the difference between the t-th calibrated representation and the initial representation. As shown in Figure 3, it can be founded that: (1) Throughout the training process, the relative change of final representations (c 3, the green line) increases initially and then decreases. This observation suggests that at the early stages of training, the initial representation is insufficiently accurate, so calibration mechanism changes representations a lot to get correct answers. However, as training progresses, the initial representations become increasingly precise, resulting in a relatively diminished influence of calibration later on. (2) In the middle and late stages of training, The values of c 1(the blue line) and c 2(the orange line) rise slowly, while c 3remains stable. This observation implies that the first two calibration steps remain crucial even as the initial representations become increasingly accurate. ## 6 Conclusion In this paper, we propose BiDAG, a query embedding method for answering complex queries over incomplete KGs. 
BiDAG splits the reasoning process into prediction and calibration. In the calibration process, the joint probability of all nodes is considered by applying GNN to the query graph that is extended to bidirectional message passing. The extensive experiments on multiple open datasets demonstrate that BiDAG outperforms previous QE based models and the effect of calibration in CQA. ## Limitations There are three main limitations of our approach: (1) Our model cannot handle negation operation. Enabling BiDAG to support negation operation is a direction for future work. (2) The modeling for query representation and logical operators is too simple. Improving BiDAG by more ingenious modeling for query representation and logical operators is also a direction for future work. (3) The training process cannot be parallelized well, which is a common drawback of QE models, as QE models have to predict node representations one by one. ## Ethics Statement This paper proposes a method for complex query answering in knowledge graph reasoning, and the experiments are conducted on public available datasets. As a result, there is no data privacy concern. Meanwhile, this paper does not involve human annotations, and there are no related ethical concerns. ## 7 Acknowledgment This work was supported by the Strategic Priority Research Program of Chinese Academy of Sciences (No.XDA27020100) and the National Natural Science Foundation of China (No.U1936207, No.61922085, No.61976211). This research work was supported by the Youth Innovation Promotion Association CAS, Yunnan Provincial Major Science and Technology Special Plan Projects (No.202202AD080004). ## References Alfonso Amayuelas, Shuai Zhang, Xi Susie Rao, and Ce Zhang. 2022. Neural methods for logical reasoning over knowledge graphs. In *International Conference on Learning Representations*. Jiaxin Bai, Zihao Wang, Hongming Zhang, and Yangqiu Song. 2022. Query2Particles: Knowledge Graph Reasoning with Particle Embeddings. Technical report. Antoine Bordes, Nicolas Usunier, Alberto GarcíaDurán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787–2795. Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In *Proceedings* of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar. Association for Computational Linguistics. Brian A Davey and Hilary A Priestley. 2002. *Introduction to lattices and order*. Cambridge university press. William L. Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, and Jure Leskovec. 2018. Embedding logical queries on knowledge graphs. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 2030–2041. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep Residual Learning for Image Recognition. ArXiv:1512.03385 [cs]. Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S. Yu. 2020. A Survey on Knowledge Graphs: Representation, Acquisition and Applications. *ArXiv preprint*, abs/2002.00388. 
Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2017. Gated Graph Sequence Neural Networks. ArXiv:1511.05493 [cs, stat]. Hongyu Ren, Weihua Hu, and Jure Leskovec. 2020. Query2box: Reasoning over knowledge graphs in vector space using box embeddings. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Hongyu Ren and Jure Leskovec. 2020. Beta embeddings for multi-hop logical reasoning in knowledge graphs. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Andrea Rossi, Donatella Firmani, Antonio Matinata, Paolo Merialdo, and Denilson Barbosa. 2021. Knowledge graph embedding for link prediction: A comparative analysis. *ACM Trans. Knowl. Discov. Data*, 15:14:1–14:49. Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. 2021. Mlp-mixer: An all-mlp architecture for vision. Advances in Neural Information Processing Systems, 34:24261–24272. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In *Proceedings of the 3rd Workshop on* Continuous Vector Space Models and their Compositionality, pages 57–66, Beijing, China. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In *6th International* Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Zihao Wang, Hang Yin, and Yangqiu Song. 2021. Benchmarking the Combinatorial Generalizability of Complex Query Answering on Knowledge Graphs. ArXiv preprint, abs/2109.08925. Wenhan Xiong, Thien Hoang, and William Yang Wang. 2017. DeepPath: A reinforcement learning method for knowledge graph reasoning. In *Proceedings of* the 2017 Conference on Empirical Methods in Natural Language Processing, pages 564–573, Copenhagen, Denmark. Association for Computational Linguistics. ## A Data Details The nine query types are shown in Figure 4. Specifically, there are five query types (1p/2p/3p/2i/3i) in the training set and also evaluated in a supervised manner, and the remaining four query types (2u/up/pi/ip) are evaluated in a zero-shot manner. Given the query type, a sample is generated by random walking on the KG. Datasets statistics are shown in Table 3. ## B Implement Details To compare with baselines fairly, we set the same size of embedding vectors as 400. And we directly use the mean reciprocal rank (MRR) scores of these baselines reported by Ren and Leskovec (2020); Amayuelas, Zhang, Rao, and Zhang (2022); Bai, Wang, Zhang, and Song (2022). In the comparison experiment with baseline, we used BiDAG-3BC for FB15k and FB15k-237, BiDAG-2BC for NELL. We tune the hyperparameters of BiDAG on the validation set for each dataset by grid search. 
We consider the batch size from {512, 1024, 2048}, learning rate from {2e-4, 3e-4, | Training | Validation | Test | | | | |------------|----------------|--------|--------|--------|--------| | Dataset | 1p/2p/3p/2i/3i | 1p | others | 1p | others | | FB15k | 273,710 | 59,097 | 8,000 | 67,016 | 8,000 | | FB15k-237 | 149,689 | 20,101 | 5,000 | 22,812 | 5,000 | | NELL | 107,982 | 16,927 | 4,000 | 17,034 | 4,000 | Table 3: Number of training, validation, and test queries generated for different query types. ![6_image_0.png](6_image_0.png) 4e-4}. Our experiments are conducted on GTX 3090 with PyTorch 1.11, and the random seed is fixed for each experiment. ## C Full Experimental Results The full results of Comparison with Baselines are shown in Table 4. | FB15k | |----------------| | FB15k-237 NELL | Dataset Model 1p 2p 3p 2i 3i pi ip 2u up avg GQE 54.6 15.3 10.8 39.7 51.4 27.6 19.1 22.1 11.6 28.0 Q2B 68.0 21.0 14.2 55.1 66.5 39.4 26.1 35.1 16.7 38.0 BetaE 65.1 25.7 24.7 55.8 66.5 43.9 28.1 40.1 25.4 41.6 Q2P **82.6** 30.8 **25.5** 65.1 74.7 **49.5** 34.9 32.1 26.2 46.8 MLPMix 69.7 27.7 23.9 58.7 69.9 46.7 30.8 38.2 24.8 43.4 BiDAG (w/o res) 77.8 30.0 25.0 64.2 73.7 41.5 33.2 49.6 27.0 46.9 BiDAG (w/ res) 78.6 **31.0** 25.3 **65.2** 74.4 46.7 35.3 50.8 27.8 **48.3** GQE 35.0 7.2 5.3 23.3 34.6 16.5 10.7 8.2 5.7 16.3 Q2B 40.6 9.4 6.8 29.5 42.3 21.2 12.6 11.3 7.6 20.1 BetaE 39.0 10.9 10.0 28.8 42.5 22.4 12.6 12.4 9.9 20.9 Q2P 39.1 11.4 10.1 32.3 47.7 24.0 14.3 8.7 9.1 21.9 MLPMix 42.7 11.5 9.9 33.5 46.8 **25.4** 14.0 14.0 9.2 22.9 BiDAG (w/o res) 43.3 **12.3** 10.1 34.9 47.7 22.8 14.3 14.4 10.2 23.3 BiDAG (w/ res) **43.7** 12.0 10.2 **35.0 48.8** 24.9 14.9 14.5 10.2 **23.8** GQE 32.8 11.9 9.6 27.5 35.2 18.4 14.4 8.5 8.8 18.6 Q2B 42.2 14.0 11.2 33.3 44.5 22.4 16.8 11.3 10.3 22.9 BetaE 53.0 13.0 11.4 37.6 47.5 24.1 14.3 12.2 8.6 24.6 Q2P 56.5 15.2 12.5 35.8 48.7 22.6 16.1 11.1 10.4 25.5 MLPMix 55.4 16.2 13.9 39.5 51.0 25.7 18.3 14.7 11.2 27.4 BiDAG (w/o res) 58.7 17.2 14.3 42.1 52.9 25.0 18.2 15.8 11.5 28.4 BiDAG (w/ res) 59.0 17.5 14.5 42.3 53.0 26.7 18.9 16.1 11.8 **28.9** ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In the limitation section. ✓ A2. Did you discuss any potential risks of your work? In the Ethics Stateme. ✓ A3. Do the abstract and introduction summarize the paper's main claims? In the abstract. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** In The Experiment Section. ✓ B1. Did you cite the creators of artifacts you used? In the Experiment section. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? In the Experiment section. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. 
Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In the Appendix B. ## C ✓ **Did You Run Computational Experiments?** In The Experiment Section. ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In the Appendix A. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In the Experiment section. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In the Appendix A. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
chen-etal-2023-prompt
Prompt-Based Metric Learning for Few-Shot {NER}
https://aclanthology.org/2023.findings-acl.451
Few-shot named entity recognition (NER) targets generalizing to unseen labels and/or domains with few labeled examples. Existing metric learning methods compute token-level similarities between query and support sets, but are not able to fully incorporate label semantics into modeling. To address this issue, we propose a simple method to largely improve metric learning for NER: 1) multiple prompt schemas are designed to enhance label semantics; 2) we propose a novel architecture to effectively combine multiple prompt-based representations. Empirically, our method achieves new state-of-the-art (SOTA) results under 16 of the 18 considered settings, substantially outperforming the previous SOTA by an average of 9.12{\%} and a maximum of 34.51{\%} in relative gains of micro F1.
# Prompt-Based Metric Learning For Few-Shot Ner Yanru Chen1, Yanan Zheng1**, Zhilin Yang**123∗ 1Tsinghua University, 2Shanghai Artificial Intelligence Laboratory, 3Shanghai Qizhi Institute {achen.cyanr.qaq, zyanan93}@gmail.com zhiliny@tsinghua.edu.cn ## Abstract Few-shot named entity recognition (NER) targets generalizing to unseen labels and/or domains with few labeled examples. Existing metric learning methods compute token-level similarities between query and support sets, but are not able to fully incorporate label semantics into modeling. To address this issue, we propose a simple method to largely improve metric learning for NER: 1) multiple prompt schemas are designed to enhance label semantics; 2) we propose a novel architecture to effectively combine multiple prompt-based representations. Empirically, our method achieves new state-of-the-art (SOTA) results under 16 of the 18 considered settings, substantially outperforming the previous SOTA by an average of 9.12% and a maximum of 34.51% in relative gains of micro F1. Our code is available at https://github.com/AChen-qaq/ProML. ## 1 Introduction Named entity recognition (NER) is a key natural language understanding task that extracts and classifies named entities mentioned in unstructured texts into predefined categories. Few-shot NER targets generalizing to unseen categories by learning from few labeled examples. Recent advances for few-shot NER use metric learning methods which compute the token-level similarities between the query and the given support cases. Snell et al. (2017) proposed to use prototypical networks that learn prototypical representations for target classes. Later, this method was introduced to few-shot NER tasks (Fritzler et al., 2019; Hou et al., 2020). Yang and Katiyar (2020) proposed StructShot, which uses a pretrained language model as a feature extractor and performs viterbi decoding at inference. Das et al. (2022) proposed CONTaiNER based on contrastive learning. This approach optimizes an objective that ∗Corresponding author. characterizes the distance of Gaussian distributed embeddings under the metric learning framework. Despite the recent efforts, there remain a few critical challenges for few-shot NER. First of all, as mentioned above, metric learning computes tokenlevel similarities between the query and support sets. However, the architectures used for computing similarities in previous work are agnostic to the labels in the support set. This prevents the model from fully leveraging the label semantics of the support set to make correct predictions. Second, while prompts have been demonstrated to be able to reduce overfitting in few-shot learning (Schick and Schütze, 2020), due to a more complex sequence labeling nature of NER, the optimal design of prompts remains unclear for few-shot NER. In light of the above challenges, we explore a better architecture that allows using prompts to fully leverage the label semantics. We propose a simple method of Prompt-based Metric Learning (ProML) for few-shot NER, as shown in Figure 1. Specifically, we introduce mask-reducible prompts, which is a special class of prompts that can be easily reverted to the original input by using a mask. By performing a masked weighted average over the representations obtained from multiple prompts, our method accepts multiple choices of prompts as long as they are mask-reducible. These prompts improve label efficiency by inserting semantic annotations into the text inputs. 
As instantiations of this framework, we design an option prefix prompt to provide the model with the candidate label options, and a label-aware prompt to associate each entity with its entity type in the input. As shown in Figure 2, a single prompt provides useful information but has some shortcoming. However, with a weighted average, multiple prompts are combined, which fully leverages label information. In our experiments, we find that using multiple prompts with the masked weighted average is effective for few-shot NER. Empirically, our method achieves new state-of-the-art (SOTA) results under 16 of the 18 considered settings, substantially outperforming the previous SOTA by an average of 9.12% and a maximum of 34.51% in relative gains of micro F1. ## 2 Related Work Few-Shot NER. Few-shot NER targets generalizing to unseen categories by learning from few labeled examples. Noisy supervised methods (Huang et al., 2020) perform supervised pretraining over large-scale noisy web data such as WiNER (Ghaddar and Langlais, 2017). Self training methods (Wang et al., 2021) perform semisupervised training over a large amount of unlabelled data. Alternative to these data-enhancement approaches, metric learning based methods have been widely used for few-shot NER (Fritzler et al., 2019; Yang and Katiyar, 2020; Das et al., 2022). Recently, prompt-based methods (Ma et al., 2021; Cui et al., 2021; Lee et al., 2022) are proposed for few-shot NER as well. To introduce more finegrained entity types in few-shot NER, a large-scale human-annotated dataset Few-NERD (Ding et al., 2021) was proposed. Ma et al. (2022b); Wang et al. (2022) formulate NER task as a span matching problem and decompose it to several procedures. Ma et al. (2022b) decomposed the NER task into span detection and entity typing, and they separately train two models and finetune them on the test support set, achieving SOTA results on FewNERD (Ding et al., 2021). Different from the above related works, our approach is a general framework of using prompts for token-level metric learning problems. Meta Learning. The idea of meta learning was first introduced in few-shot classification tasks for computer vision, attempting to learn from a few examples of unseen classes. Since then metric-based methods have been proposed, such as matching networks (Vinyals et al., 2016) and Prototypical networks (Snell et al., 2017), which basically compute similarities according to the given support set, learn prototypical representations for target classes, respectively. It has been shown that these methods also enable few-shot learning for NLP tasks such as text classification (Bao et al., 2019; Geng et al., 2019), relation classification (Han et al., 2018), named entity recognition (Fritzler et al., 2019; Yang and Katiyar, 2020; Das et al., 2022), and machine translation (Gu et al., 2018). Our approach also falls into the category of metric-based meta learning and outperforms previous work on NER with an improved architecture. Label Semantics for NER. There have been some approaches that make use of label semantics (Ma et al., 2022a; Hou et al., 2020). Hou et al. (2020) propose a CRF framework with labelenhanced representations based on the architecture of Yoon et al. (2019). However, they mainly focus on slot tagging tasks while their performance on NER tasks is poor. Ma et al. (2022a) introduce label semantics by aligning token representations with label representations. Both of them only use label semantics for learning better label representations. 
In contrast, our approach incorporates label semantics into the inputs so that the model is able to jointly model the label information and the original text samples. This makes the similarity scores dependent on the support set labels and is particularly crucial for metric learning. Our experiments also verify the advantages of our approach compared to previous work using labels semantics. Prompt-Based Approaches for NER. With the emergence of prompt-based methods in NLP research, very recently, some prompt-based approaches for few-shot NER have been proposed (Cui et al., 2021; Lee et al., 2022; Ma et al., 2021). However, they use prompts to help with the label predictions based on classification heads instead of metric learning. Moreover, some of these methods require searching for templates (Cui et al., 2021), good examples (Lee et al., 2022), or labelaware pivot words (Ma et al., 2021), which makes the results highly dependent on the search quality. Different from these methods, our approach does not rely on a search process. More importantly, another key difference is that we employ prompting in the setting of metric learning. ## 3 Task Definition 3.1 Few-Shot Ner Named entity recognition (NER) is a sequence labeling task1. Formally, for a sentence x consisting of n tokens x = [x1, x2, · · · , xn], there is a corresponding ground-truth label sequence y = [y1, y2, · · · , yn] where each yiis an encoding of some label indicating the entity type for token xi. Then a collection of these (x, y) pairs form a 1There also exist other formulations such as span prediction or question answering. ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png) dataset D. After training on the training dataset DS, the model is required to predict labels for sentences from the test dataset DT . Different from the standard NER task, the fewshot NER setting consists of a meta training phase and a test phase. At the meta training phase, the model trains on a training dataset DS. At the test phase, for various test datasets {DT (j)}, with only few labeled samples, the model is required to perform quick adaptions. In this paper, we mainly focus on two evaluation protocols and two task formulations which will be explained as follows. ## 3.2 Evaluation Protocols Following Ding et al. (2021); Ma et al. (2022a), we summarize two evaluation protocols as follows. Episode Evaluation An episode, or a task, is defined as a pair of one support set and one query set (S, Q) each consisting of sentences downsampled from the test set. For an N-way K-shot downsampling scheme, there are N labels among the support set S where each label is associated with K examples. The query set Q shares the same label set with the support set. Based on the support set, the model is required to predict labels for the query set. To perform an episode evaluation, a collection of T episodes {(St, Qt)} T t=1 are prepared. The evaluation results are computed within each episode and are averaged over all T episodes. Low-resource Evaluation Different from the few-shot episode evaluation, low-resource evaluation aims to directly evaluate the model on the whole test set. For a test dataset DT with a label set CT , a support set S associated with the labels from CT is constructed by K-shot downsampling such that each label has K examples in S. Based on the support set S, the model is required to predict labels for the query set which is the rest of the test set DT . 
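Below is a minimal sketch of the K-shot downsampling used to build such support sets, under the assumption of sentence-level greedy sampling over IO-tagged data; the exact samplers of Yang and Katiyar (2020) and Ding et al. (2021) differ in details such as how over-coverage beyond K is bounded, so the function names and the stopping rule here are illustrative only.

```python
import random
from collections import Counter

def entity_spans(labels):
    """Collapse IO tags into (label, start, end) spans; "O" marks non-entities."""
    spans, start = [], None
    for i, tag in enumerate(list(labels) + ["O"]):
        if start is not None and tag != labels[start]:
            spans.append((labels[start], start, i))
            start = None
        if tag != "O" and start is None:
            start = i
    return spans

def greedy_k_shot_support(sentences, target_labels, k, seed=0):
    """Greedily add sentences until every label in `target_labels` is covered by
    at least K entity mentions. Because one sentence can contain several entity
    types, some labels may end up with more than K mentions, as in the usual
    greedy downsampling schemes."""
    rng = random.Random(seed)
    order = list(range(len(sentences)))
    rng.shuffle(order)

    counts, support = Counter(), []
    for idx in order:
        tokens, labels = sentences[idx]
        sent_counts = Counter(lab for lab, _, _ in entity_spans(labels)
                              if lab in target_labels)
        # Only take the sentence if it contributes to a label still below K.
        if any(counts[lab] < k for lab in sent_counts):
            support.append(sentences[idx])
            counts.update(sent_counts)
        if all(counts[lab] >= k for lab in target_labels):
            break
    return support

sents = [(["Obama", "was", "born", "in", "Hawaii"],
          ["person", "O", "O", "O", "location"])]
print(greedy_k_shot_support(sents, {"person", "location"}, k=1))
```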
To perform a low-resource evaluation, T different runs of support set sampling are run and averaged. ## 3.3 Task Formulation Following Yang and Katiyar (2020), we formulate few-shot NER tasks in the following two ways. Tag-Set Extension To mimic the scenario that new classes of entities emerge in some domain, Yang and Katiyar (2020) propose the tag-set extension formulation. Starting with a standard NER dataset (Dtrain, D*test*) with label set C, they split C into d parts, namely C1, C2, *· · ·* , Cd. Then for each label split Ci, a train set D (i) train is constructed from D*train* by masking the labels in Cito O (representing non-entities), and the corresponding test set D (i) test is constructed from D*test* by masking the labels in *C \ C*ito O. Domain Transfer Another task formulation is the domain transfer setting. Let DS be a training set of a standard NER task, and let {DT (i)} be the test sets of standard NER tasks but from a different domain. The training set DS is referred to as a source domain, and the test sets {DT (i)} constitute various target domains. In this setting, there may exist some overlapping entity classes between the source and target domains, but due to the domain gaps, it is still considered a few-shot setting. Note that the task formulation is independent of the evaluation protocol, and different combinations will be considered in our experiments. ## 4 Method 4.1 Prompt Schemas Motivated by existing prompt-based methods (Liu et al., 2021; Paolini et al., 2021) and the metric learning framework, our ProML provides label semantics by introducing prompts to metric learning models. We proposed a simple yet effective prompt class called the "mask-reducible prompts". Through this class of prompts, we can provide flexible prompts to the model which is consistent with metric learning methods that use token-level similarities as the metric. Starting with this schema, we will introduce two prompts that are used in ProML , the option-prefix prompt and the labelaware prompt. ## 4.2 Mask-Reducible Prompts Suppose the raw input sequence is x = [x1, x2, · · · , xl]. Let f*prompt* be a prompt function mapping x to the prompted result x′. We call this f*prompt* is a mask-reducible prompt function if for all x and its prompted result x′ = f*prompt*(x), there exists a mask m ∈ [0, 1]|x′|such that x′[m == 1] = x. Intuitively, this means there is only some insertions in the prompt construction so that we can revert x′ back to x through a simple masking operation. The corresponding prompt of f*prompt* is called a mask-reducible prompt. Given a length preserving sequence-to-sequence encoder Enc(x; θ), a sequence of input tokens x, and a mask-reducible prompt function f*prompt*, we first construct the prompted result x′ = f*prompt*(x), then pass the sequence x′through the encoder to get representations h′ = Enc(x′; θ). Since Enc(·; θ) is length preserving, the length of h′is the same as x′, and we can compute h = h′[m == 1] to get the representation for input tokens, where m is the desired mask that could reduce x′to x (i.e. x′[m == 1] = x). Through this process, the encoder receives the full prompts as its input while only the representations of raw input tokens are extracted. ## Prompt A: Option Prefix Prompts An Option prefix prompt takes the concatenation of all annotations as an option prefix to incorporate label semantics into modeling. 
Formally, for a given set of label options S = {s1, s2, *· · ·* , s|S|}, we construct a mask-reducible prompting function fA(x, S) associated with S using the template "s1, s2, *· · ·* , s|S|: x". An example is given in Figure 2, where option prefix prompts reduce the label space to avoid incorrectly classify non-entities. The option prefix prompts inform the main model of which labels to predict, which can be used to learn label-dependent representations for computing the similarities. Prompt B: Label-Aware Prompts A labelaware prompt appends the entity type to each entity occurrence in the input so that the model is aware of such information. While the aforementioned option prefix prompts incorporate global label information, the label-aware prompts introduce local information about each entity. Specifically, let fB(x, y) be the prompt function. Given a sequence of input tokens x and its ground-truth label sequence y, for each entity e that occurs in x, we obtain its corresponding label E from the sequence y, and replace e with an label-appended version "[e|E]" to construct the prompted result x′ = fB(x, y). Both the entity e and its label E are sequences of tokens. Because the label-aware prompt can be applied when the ground-truth label is available, in our few-shot learning setting, we do not apply this prompt to the query set. An example is given in Figure 2, where label-aware prompts provide full label information in prompted inputs. More details will be explained in the following descriptions of our model architecture. Note that it is possible to design other maskreducible prompts for NER, which will be naturally handled by our framework. In our study, we find these two prompts work well practically and use them as instantiations to demonstrate the effectiveness of our framework. ## 4.3 Model And Training The overall architecture of ProML is shown in Figure 1. Compared to the contrastive learning framework utilized by CONTaiNER (Das et al., 2022), our architecture uses a transformer backbone to encode different prompted inputs separately and employs a masked weighted average to obtain token representations, which will be elaborated as follows. These modifications significantly enhance the performance of our model when compared to the baseline method. At the meta training phase, we sample minibatches from the training set D*train*, where each mini-batch contains a few-shot episode (Strain, Q*train*). We obtain the label set associated with the support set S*train* and use a lookup dictionary to translate each label id to its natural language annotation. This leads to a set of label annotations S. Then for an input sequence x = [x1, x2, · · · , xl] and its label sequence y = [y1, y2, · · · , yl] from the support set S*train*, we collect the prompted results pA = fA(x, S), pB = fB(x, y) and the corresponding masks mA, mB. These prompted results are then passed through a pretrained language model PLM. The average of outputs from the last four hidden layers are computed as the intermediate representations ## Ha = Plm(Pa), Hb = Plm(Pb). We perform a masked weighted average to obtain token representations $$\mathbf{h}=\rho\mathbf{h_{A}}[\mathbf{m_{A}}==1]+(1-\rho)\mathbf{h_{B}}[\mathbf{m_{B}}==1],$$ where ρ ∈ (0, 1) is a hyperparameter. The token representations for the query set are computed similarly. However, during both training and testing, we only use the option-prefix prompt for the query set since the ground-truth label sequence will not be available at test time. 
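To make the two prompt schemas and the masked weighted average concrete, the following is a minimal word-level sketch rather than the authors' implementation: the real model operates on subword tokens from the BERT tokenizer (with masks derived from the tokenizer alignment), wraps whole entity mentions rather than individual tokens in the label-aware prompt, and uses the average of the PLM's last four hidden layers in place of the random stand-ins below.

```python
import torch

def option_prefix_prompt(tokens, label_options):
    """Prompt A: prepend the candidate label options as "s1 , s2 , ... : x".
    Returns the prompted tokens and a 0/1 mask with x'[mask == 1] == x."""
    prefix = []
    for i, s in enumerate(label_options):
        prefix.append(s)
        prefix.append("," if i < len(label_options) - 1 else ":")
    return prefix + list(tokens), [0] * len(prefix) + [1] * len(tokens)

def label_aware_prompt(tokens, labels):
    """Prompt B: mark each entity token e of type E as "[ e | E ]".
    Only the position of the original token is set to 1 in the mask, so the
    inserted brackets and label words are dropped when reducing the prompt."""
    prompted, mask = [], []
    for tok, lab in zip(tokens, labels):
        if lab == "O":
            prompted.append(tok)
            mask.append(1)
        else:
            prompted += ["[", tok, "|", lab, "]"]
            mask += [0, 1, 0, 0, 0]
    return prompted, mask

def masked_weighted_average(h_A, m_A, h_B, m_B, rho=0.7):
    """h = rho * h_A[m_A == 1] + (1 - rho) * h_B[m_B == 1]  (Section 4.3)."""
    m_A = torch.as_tensor(m_A, dtype=torch.bool)
    m_B = torch.as_tensor(m_B, dtype=torch.bool)
    return rho * h_A[m_A] + (1.0 - rho) * h_B[m_B]

tokens = ["Obama", "was", "born", "in", "Hawaii"]
labels = ["person", "O", "O", "O", "location"]
p_A, m_A = option_prefix_prompt(tokens, ["person", "location"])
p_B, m_B = label_aware_prompt(tokens, labels)
# Random stand-ins for the PLM outputs (in the paper: the average of the last
# four hidden layers of bert-base-uncased); hidden size 8 keeps the demo small.
h_A, h_B = torch.randn(len(p_A), 8), torch.randn(len(p_B), 8)
h = masked_weighted_average(h_A, m_A, h_B, m_B, rho=0.7)
assert h.shape == (len(tokens), 8)
```

In this sketch both views are combined only because the gold labels of the support set are known.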
As a result, we do not perform a weighted average for the query set. After obtaining the token representations, two projection layers fµ, fΣ are employed to produce two Gaussian embeddings, i.e., the mean and precision parameters of a d-dimensional Gaussian distribution N(µ,Σ) for each token in the query and support sets (Das et al., 2022). Given the Gaussian embeddings for samples in both the support and query sets, we compute the distance metrics. Similar to CONTaiNER (Das et al., 2022), for a token xi from the support set S*train* and a token x′j from the query set Q*train*, the distance between two tokens xi, x′j is defined as the Jenson-Shannon divergence (Fuglede and Topsøe, 2004) of their Gaussian embeddings, i.e., $$\begin{array}{l}{{d i s t(x_{i},x_{j}^{\prime})=D_{J S}(\mathcal{N}_{i},\mathcal{N}_{j}^{\prime})}}\\ {{\qquad=\frac{1}{2}(D_{K L}(\mathcal{N}_{(\mu_{i},\Sigma_{i})}||\mathcal{N}_{(\mu_{j}^{\prime},\Sigma_{j}^{\prime})})}}\\ {{\qquad\quad+D_{K L}(\mathcal{N}_{(\mu_{j}^{\prime},\Sigma_{j}^{\prime})}||\mathcal{N}_{(\mu_{i},\Sigma_{i})})),}}\end{array}$$ where DKL refers to the Kullback–Leibler divergence. The similarity between xi and x′j is then defined as s(xi, x′j ) = exp(−dist(xi, x′j )). Let Strain, Q*train* be collections of all tokens from sentences in Strain, Q*train*. For each q ∈ Q*train*, the associated loss function is computed as $$\ell(q)=-\log\frac{\sum_{p\in\mathcal{X}_{q}}s(q,p)/|\mathcal{X}_{q}|}{\sum_{p\in\overline{{{s}}}_{t r a i n}}s(q,p)},$$ $$\mathrm{\boldmath~\begin{array}{r c l c l}{{}}&{{}}&{{}}&{{}}\\ {{}}&{{}}&{{}}&{{}}\end{array}}\mathrm{\boldmath~by~}\mathrm{\boldmath~\begin{array}{r c l c l}{{}}&{{}}&{{}}&{{}}\\ {{}}&{{}}&{{}}&{{}}&{{}}\end{array}}=\mathrm{\boldmath~\begin{array}{r c l c l}{{}}&{{}}&{{}}&{{}}&{{}}\\ {{}}&{{}}&{{}}&{{}}&{{}}\end{array}}$$ where Xq is defined by Xq = {p ∈ Strain|*p, q* have the same labels}. The overall loss function within a mini-batch is the summation of token-level losses, L =1 |Q*train*| Pq∈Q*train* ℓ(q). ## 4.4 Nearest Neighbor Inference At test time, we compute the intermediate representations for tokens from the support and query sets just as we did during the meta training phase. Following CONTaiNER (Das et al., 2022), we no longer use the projection layers fµ, fΣ at test time but directly perform nearest neighbor inference using the token representations h. For each query token, according to the Euclidean distance in the representation space, we compute the distance to each entity type by the distance to the nearest tokens from the support set associated with that entity type and assign the nearest entity type to the query token. For the k shot setting where k > 1, we also use the average distance of the nearest k neighbors associated with each entity type as the distance to the entity types. | Method | Tag-Set Extension | Domain Transfer | Avg. 
| | | | | | |-------------------|---------------------|-------------------|-------------|------------------------|-----------------------------------|------------|------------|-------| | Onto-A | Onto-B | Onto-C | CoNLL | WNUT | I2B2 | GUM | | | | 1-shot | | | | | | | | | | ProtoBERT(⋆) | 19.3±3.9 | 22.7±8.9 | 18.9±7.9 | 49.9±8.6 | 17.4±4.9 | 13.4±3.0 | 17.8±3.5 | 22.77 | | NNShot(⋆) | 28.5±9.2 | 27.3±12.3 | 21.4±9.7 | 61.2±10.4 | 22.7±7.4 | 15.3±1.6 | 10.5±2.9 | 26.7 | | StructShot(⋆) | 30.5±12.3 | 28.8±11.2 | 20.8±9.9 | 62.4±10.5 | 24.2±8.0 | 21.4±3.8 | 7.8±2.1 | 27.99 | | CONTaiNER(⋆) | 32.2±5.3 | 30.9±11.6 | 32.9±12.7 | 57.8±5.5 | 24.2±7.24 | 16.4±3.19 | 17.9±2.28 | 30.33 | | ProtoBERT(†) | 8.39±2.16 | 17.12±4.04 | 8.4±1.94 | 53.09±9.89 | 21.17±4.71 | 15.85±4.89 | 11.91±3.01 | 19.42 | | NNShot(†) | 21.97±7.11 | 33.89±7.1 | 21.73±6.78 | 59.76±8.63 | 26.53±4.54 | 15.0±3.63 | 10.33±3.08 | 27.03 | | StructShot(†) | 24.02±6.24 | 36.42±8.22 | 22.70±6.65 | 60.84±7.62 | 29.16±4.88 | 18.34±2.70 | 11.17±2.18 | 28.95 | | CONTaiNER(†) | 31.63±11.74 | 51.33±8.97 | 39.97±3.81 | 57.89±16.79 26.67±8.65 | 18.96±3.97 | 12.07±1.53 | 34.07 | | | TransferBERT(†) | 7.44±5.97 | 8.97±4.94 | 7.34±3.42 | 47.09±11.02 11.83±5.07 | 35.25±4.21 | 8.97±2.56 | 18.13 | | | DualEncoder(†) | 0.83±0.62 | 2.86±1.70 | 2.55±1.37 | 54.63±3.43 | 36.03±2.02 | 14.63±3.10 | 11.87±0.76 | 17.63 | | EntLM(†) | 5.79±4.22 | 10.11±4.13 | 8.49±5.0 | 50.47±6.74 | 27.7±7.66 | 7.85±2.81 | 8.85±1.17 | 17.04 | | DemonstrateNER(†) | 0.98±0.83 | 2.02±2.1 | 4.02±3.23 | 16.12±7.33 | 20.38±8.02 | 13.29±4.73 | 3.24±1.34 | 8.58 | | ProML | 37.94±6.08 | 53.74±3.6 | 46.27±10.72 | 69.16±4.47 | 43.89±2.17 | 24.98±3.44 | 15.29±1.89 | 41.61 | | 5-shot | | | | | | | | | | ProtoBERT(⋆) | 30.5±3.5 | 38.7±5.6 | 41.1±3.3 | 61.3±9.1 | 22.8±4.5 | 17.9±1.8 | 19.5±3.4 | 33.11 | | NNShot(⋆) | 44.0±2.1 | 51.6±5.9 | 47.6±2.8 | 74.1±2.3 | 27.3±5.4 | 22.0±1.5 | 15.9±1.8 | 40.36 | | StructShot(⋆) | 47.5±4.0 | 53.0±7.9 | 48.7±2.7 | 74.8±2.4 | 30.4±6.5 | 30.3±2.1 | 13.3±1.3 | 42.57 | | CONTaiNER(⋆) | 51.2±5.9 | 55.9±6.2 | 61.5±2.7 | 72.8±2.0 | 27.7±2.2 | 24.1±1.9 | 24.4±2.2 | 45.37 | | ProtoBERT(†) | 25.81±3.0 | 31.49±4.6 | 32.08±2.12 | 65.76±5.34 | 32.81±8.78 35.05±12.25 25.02±2.66 | 35.43 | | | | NNShot(†) | 39.49±5.96 | 50.18±4.99 | 45.98±4.61 | 70.79±3.44 | 33.68±5.21 | 29.50±2.89 | 19.04±2.38 | 41.24 | | StructShot(†) | 35.68±6.17 | 51.30±4.61 | 47.85±4.74 | 71.23±3.62 | 35.36±2.99 | 27.08±3.17 | 19.67±2.45 | 41.17 | | CONTaiNER(†) | 45.62±6.58 | 67.70±2.80 | 59.84±2.62 | 75.48±2.80 | 35.83±5.51 | 30.14±3.35 | 16.19±0.68 | 47.26 | | TransferBERT(†) | 21.48±5.73 | 41.97±5.65 | 45.24±4.33 | 69.93±3.98 | 35.64±3.55 | 47.89±7.02 | 27.50±1.27 | 41.38 | | DualEncoder(†) | 7.61±2.50 | 16.41±1.22 | 26.37±7.25 | 67.05±3.69 | 36.82±1.09 | 23.27±2.26 | 24.55±1.12 | 28.87 | | EntLM(†) | 21.29±5.77 | 35.7±6.2 | 28.8±6.62 | 60.58±9.39 | 30.26±3.99 | 13.51±2.4 | 13.35±1.9 | 29.07 | | DemonstrateNER(†) | 49.25±10.34 | 63.02±4.64 | 61.07±8.08 | 73.13±4.01 | 43.85±2.56 | 36.36±4.58 | 18.01±2.81 | 49.24 | | ProML | 52.46±5.71 | 69.69±2.19 | 67.58±3.25 | 79.16±4.49 | 53.41±2.39 | 58.21±3.58 | 36.99±1.49 | 59.64 | ## 5 Experiments 5.1 Setup Datasets We conduct experiments on multiple datasets across two few-shot NER formulations, tag-set extension and domain transfer. Following Das et al. (2022); Yang and Katiyar (2020), we split OntoNotes 5.0 (Weischedel et al., 2013) into Onto-A, Onto-B, and Onto-C for the tag-set extension formulation. 
For the domain transfer formulation, we use OntoNotes 5.0 (Weischedel et al., 2013) as the source domain, CoNLL'03 (Sang and Meulder, 2003), WNUT'17 (Derczynski et al., 2017), I2B2'14 (Stubbs and Uzuner, 2015), and GUM (Zeldes, 2017) as target domains. We also take Few-NERD (Ding et al., 2021) as one of the tag-set extension tasks, which is a large-scale human-annotated dataset speciallly designed for few-shot NER. The datasets statistics are presented in Table 3. We adopt the IO tagging scheme, where a label "O" is assigned to non-entity tokens and an entity type label is assigned to entity tokens. We also transform the abbreviated label annotations into plain texts; e.g., [LOC] to [location]. Baselines Our baselines include metric learning based methods such as the prototypical networks ProtoBERT (Snell et al., 2017; Fritzler et al., 2019; Hou et al., 2020), a nearest neighbor based network NNShot and its viterbi decoding variant StructShot (Yang and Katiyar, 2020), and a contrastive learning method CONTaiNER (Das et al., 2022). We also include a classification head based method TransferBERT (Hou et al., 2020) based on a pretrained BERT (Devlin et al., 2019). Existing method that make use of label semantics, DualEncoder (Ma et al., 2022a) is also reproduced for comparison. Recent prompt-based methods EntLM (Ma et al., 2021) and DemonstrateNER (Lee et al., 2022) are also employed as the baselines as well. We also compare our model with the recently-introduced based meth- | Method | 1-shot | 5-shot | Avg. | | | |----------------------|-----------------------|-----------------------|-----------------------|-------|-------| | INTRA | INTER | INTRA | INTER | | | | ProtoBERT(⋆) | 20.76 | 38.83 | 42.54 | 58.79 | 40.23 | | NNShot(⋆) | 25.78 | 47.24 | 36.18 | 55.64 | 41.21 | | StructShot(⋆) | 30.21 | 51.88 | 38.00 | 57.32 | 44.35 | | CONTaiNER(⋆) | 40.43 | 53.70 | 55.95 | 61.83 | 52.98 | | ESD(⋆) | 36.08±1.60 59.29±1.25 | 52.14±1.50 69.06±0.80 | 54.14 | | | | DecomposedMetaNER(⋆) | 49.48±0.85 64.75±0.35 | 62.92±0.57 71.49±0.47 | 62.16 | | | | ProtoBERT(†) | 25.8±0.35 | 47.59±0.84 | 50.19±0.65 65.05±0.39 | 47.16 | | | NNShot(†) | 33.32±0.69 52.29±0.88 | 45.61±0.52 59.63±0.48 | 47.71 | | | | StructShot(†) | 34.51±0.68 | 53.1±0.92 | 46.88±0.48 60.45±0.51 | 48.74 | | | CONTaiNER(†) | 37.12±1.01 55.19±0.43 | 49.22±0.34 62.64±0.33 | 51.04 | | | | TransferBERT(†) | 22.43±1.49 38.26±2.36 | 48.95±1.23 | 62.2±1.36 | 42.96 | | | ProML | 58.08±0.75 | 68.76±0.4 | 68.95±0.36 75.11±0.52 | 67.73 | | Table 3: Statistics of Datasets | Dataset | Domain | # Class | # Sample | |-----------|-----------|-----------|------------| | Few-NERD | Wikipedia | 66 | 188K | | OntoNotes | General | 18 | 76K | | CoNLL'03 | News | 4 | 20K | | I2B2'14 | Medical | 23 | 140K | | WNUT'17 | Social | 6 | 5K | | GUM | Mixed | 11 | 3.5K | ods DecomposeMetaNER (Ma et al., 2022b) and ESD (Wang et al., 2022). 2 For a fair comparison, we use bert-base-uncased (Devlin et al., 2019) as the PLM encoder and adopted the same pre-trained encoder in all the reproducible experiments of the baseline methods. Evaluation Protocols Following Das et al. (2022); Yang and Katiyar (2020), we use the lowresource evaluation protocol for domain transfer tasks and for the tag-set extension tasks OntoA, Onto-B, and Onto-C. Since Few-NERD (Ding et al., 2021) is specifically designed for episode evaluation, all of our experiments on Few-NERD dataset are evaluated under episode evaluation protocol. 
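The episode evaluation just mentioned computes scores within each episode and then averages over episodes; a minimal sketch is given below, under the assumption of span-level micro F1 inside each episode (the exact scoring granularity follows Ding et al. (2021)).

```python
def episode_micro_f1(gold_spans, pred_spans):
    """Micro F1 for a single episode. `gold_spans` and `pred_spans` are lists
    (one entry per query sentence) of sets of (label, start, end) spans."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_spans, pred_spans):
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def episode_evaluation(episodes):
    """Average the per-episode F1 over all T test episodes."""
    scores = [episode_micro_f1(gold, pred) for gold, pred in episodes]
    return sum(scores) / len(scores)

gold = [[{("person", 0, 1)}, {("location", 4, 5)}]]  # one episode, two query sentences
pred = [[{("person", 0, 1)}, set()]]
print(episode_evaluation(list(zip(gold, pred))))     # 0.666...
```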
We follow the N-way K-shot downsampling setting proposed by Ding et al. (2021). For episode evaluation, we conduct 5 different runs of experiments, each of them contains 5000 test episodes. For low-resource evaluation, 10 different runs of support set sampling is performed. ## 5.2 Main Results The main results of low-resource evaluation and episode evaluation are shown in Tables 1 and 2 respectively. Training details are provided in Appendix A.1. Our method achieves new state-ofthe-art (SOTA) results under 16 out of the 18 considered settings. To compare with previous SOTA across different settings, we collect the relative improvement fractions from all settings and then compute an average and a maximum over these fractions. The result shows that ProML substantially outperforming the previous SOTA by an average of 9.12% and a maximum of 34.51% (from 28% to 37% on GUM 5-shot) in relative gains of micro F1. These outstanding results show that our method is effective for few-shot NER tasks. The generalization difficulties are affected by both the label space and the domain gap. For example, Onto-A, B, and C datasets share the same domain but are constructed to have disjoint label space. CoNLL is a subset of the OntoNotes dataset, so its performance is much better than other domains. Compared with the other baselines, the performances of prompt-based baselines decrease by a larger margin in the 1-shot settings since they heavily rely on finetuning on support sets. ## 5.3 Ablation Study And Analysis The ablation study results for prompts choices and averaging weights on all tag-set extension tasks are shown in Table 4, 5. We adopt the episode Setting Model Onto-A Onto-B Onto-C INTRA INTER plain, plain 42.1±1.03 62.87±0.52 50.58±0.98 53.08±0.85 65.66±0.08 A, A 47.04±1.01 65.42±0.62 55.77±1.19 66.19±0.72 73.9±0.34 B, plain 39.58±2.26 51.17±1.01 40.28±3.55 49.9±1.68 65.31±1.36 plain+A (ρ = 0.3), plain 40.43±1.64 62.41±1.3 49.51±2.78 56.4±1.02 68.15±0.42 plain+A (ρ = 0.5), plain 42.35±1.32 64.37±0.48 51.94±1.06 56.69±0.93 68.73±0.25 plain+A (ρ = 0.7), plain 42.75±2.18 64.52±0.57 53.07±1.79 55.33±1.34 68.37±0.26 plain+B (ρ = 0.3), plain 46.85±1.32 58.0±1.68 50.54±1.71 54.18±1.25 67.03±0.7 plain+B (ρ = 0.5), plain 52.34±0.31 62.07±2.15 55.9±0.5 57.75±0.32 68.22±0.25 plain+B (ρ = 0.7), plain 52.37±0.57 66.39±1.22 57.7±0.71 57.52±0.81 69.04±0.2 A+B (ρ = 0.3), A 52.76±0.82 59.34±1.49 55.52±0.89 66.95±0.82 73.51±0.3 A+B (ρ = 0.5), A 55.29±0.98 62.49±1.2 59.99±0.99 68.41±0.27 74.52±0.44 A+B (ρ = 0.7), A 55.76±1.06 67.09±0.49 62.57±0.47 68.95±0.36 75.11±**0.52** evaluation protocol due to its low variance. More ablations and the training curve, case study are placed in Appendix A.3, A.2, A.4, respectively. Option Prefix Prompts & Label-Aware Prompts According to Table 4, overall, by comparing the best variant of prompting methods to "plain", using prompting consistently outperforms the methods without prompting. The improvements are consistent with our motivation in the earlier sections. With the help of label semantic annotations, the model is able to leverage this information to better learn the representation of each token. In addition, the model does not need to spend much capacity memorizing and inferring the underlying entity types for input tokens, which is crucial in the few-shot setting where labels are scarce. 
The performance of variant "B, plain" is not good since only the support set leverages labelaware prompts so that there is a gap between the amounts of additional information from support to query. Thus there is a potential risk that the model only emphasizes these labels in support inputs while neglecting the semantics for tokens themselves, causing an overfitting problem. However, after introducing a weighted average, as shown in "plain+B, plain", the performance significantly improves. This observation suggests that the labelaware prompt is useful and the weighted average mitigates the overfitting by reducing the gaps between support and query. As we will show in the next section, combining the two prompts always leads to the best performance because the model is able to dynamically adapt to the two representations. Effect of Masked Weighted Average As reported before, a weighted average could reduce the gaps between computing representations for the support set and the query set and make use of the information provided by label-aware prompts. By adjusting the averaging weight ρ, we are able to balance the weights of the two representations for different data distributions. | 5-shot | |----------| We compared different averaging settings in 4. The option prefix only variant "A, A" performs better than "plain+A, plain" because the label option information is provided to both support and query. The performance of "plain+B, plain" and "A+B, A" improve as ρ increases, which is consistent with our motivation According to Table 4, with a properly selected averaging weight ρ, our ProML outperforms all baselines by a large margin among all tested datasets, which indicates that both prompts contribute to our final performance. Importantly, ρ = 0.7 tends to work well in most of the settings, which can be used as the default hyperparameter in our framework without tuning. Visualizing Embedding Space We visualize the token representations from support sets and query sets over several episodes from the test set of FewNERD INTRA, as Figure 3 shows. We observe that the token representations produced by ProML are concentrated in different clusters. In addition, we shall observe a clear decision boundary between different clusters. On the contrary, CONTaiNER seems to learn scattered, less separable features. Figure 3: TSNE visualization of token representations under the Few-NERD test set for CONTaiNER (on the left) ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) and ProML (on the right), where each color represents an entity type (grey for non-entities). We only keep a fraction of 20% among the non-entities to make the TSNE visualization clearer. ## 6 Conclusions We propose a novel prompt-based metric learning framework ProML for few-shot NER that leverages multiple prompts to guide the model with label semantics. ProML is a general framework consistent with any token-level metric learning method and can be easily plugged into previous methods. We test ProML under 18 settings and find it substantially outperforms previous SOTA results by an average of 9.12% and a maximum of 34.51% in relative gains of micro F1. We perform ablation studies to show that multiple prompt schemas benefit the generalization ability for our model. We demonstrate the visualization results for embedding space to unseen entities, showing that comparing with previous SOTA, ProML learns better representations. We also present case studies and perform some analysis. 
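Referring back to the embedding-space visualization above, the following is a minimal sketch of how such a t-SNE plot of token representations can be produced; the 20% subsampling of non-entities matches the figure caption, while the array names and plotting details are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_token_tsne(reps, labels, keep_o_fraction=0.2, seed=0):
    """reps: (num_tokens, hidden_size) numpy array of token representations.
    labels: entity-type string per token, with "O" for non-entities.
    Non-entity tokens are subsampled to `keep_o_fraction` for readability."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    keep = (labels != "O") | (rng.random(len(labels)) < keep_o_fraction)
    reps, labels = reps[keep], labels[keep]

    xy = TSNE(n_components=2, random_state=seed).fit_transform(reps)
    for lab in np.unique(labels):
        m = labels == lab
        style = {"c": "grey"} if lab == "O" else {}  # grey for non-entities
        plt.scatter(xy[m, 0], xy[m, 1], s=4, label=lab, **style)
    plt.legend(markerscale=3, fontsize=6)
    plt.savefig("tsne_tokens.png", dpi=200)

# Toy usage with random vectors standing in for support/query token representations.
reps = np.random.randn(300, 64)
labels = ["O"] * 200 + ["person"] * 50 + ["location"] * 50
plot_token_tsne(reps, labels)
```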
## 7 Limitations Although we discussed different task formulations and evaluation protocols, the few-shot settings are simulated by downsampling according to existing works, which is slightly different from the real scenario. ## References Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. 2019. Few-shot text classification with distributional signatures. arXiv preprint arXiv:1908.06039. Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021. Template-based named entity recognition using BART. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1835–1845, Online. Association for Computational Linguistics. Sarkar Snigdha Sarathi Das, Arzoo Katiyar, Rebecca Passonneau, and Rui Zhang. 2022. CONTaiNER: Few-shot named entity recognition via contrastive learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6338–6353, Dublin, Ireland. Association for Computational Linguistics. Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the WNUT2017 shared task on novel and emerging entity recognition. In Proceedings of the 3rd Workshop on Noisy User-generated Text, NUT@EMNLP 2017, Copenhagen, Denmark, September 7, 2017, pages 140–147. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Haitao Zheng, and Zhiyuan Liu. 2021. Few-NERD: A fewshot named entity recognition dataset. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3198–3213, Online. Association for Computational Linguistics. Alexander Fritzler, Varvara Logacheva, and Maksim Kretov. 2019. Few-shot classification in named entity recognition task. In SAC, pages 993–1000. ACM. Bent Fuglede and Flemming Topsøe. 2004. Jensenshannon divergence and hilbert space embedding. In Proceedings of the 2004 IEEE International Symposium on Information Theory, ISIT 2004, Chicago Downtown Marriott, Chicago, Illinois, USA, June 27 - July 2, 2004, page 31. IEEE. Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, and Jian Sun. 2019. Induction networks for few-shot text classification. arXiv preprint arXiv:1902.10482. Abbas Ghaddar and Philippe Langlais. 2017. Winer: A wikipedia annotated corpus for named entity recognition. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 413–422. Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho. 2018. Meta-learning for lowresource neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3622–3631, Brussels, Belgium. Association for Computational Linguistics. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. Fewrel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. arXiv preprint arXiv:1810.10147. 
Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, and Ting Liu. 2020. Fewshot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1381–1393, Online. Association for Computational Linguistics. Jiaxin Huang, Chunyuan Li, Krishan Subudhi, Damien Jose, Shobana Balakrishnan, Weizhu Chen, Baolin Peng, Jianfeng Gao, and Jiawei Han. 2020. Few-shot named entity recognition: A comprehensive study. arXiv preprint arXiv:2012.14978. Dong-Ho Lee, Akshen Kadakia, Kangmin Tan, Mahak Agarwal, Xinyu Feng, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, and Xiang Ren. 2022. Good examples make a faster learner: Simple demonstration-based learning for low-resource NER. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2687–2700, Dublin, Ireland. Association for Computational Linguistics. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. CoRR, abs/2107.13586. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Jie Ma, Miguel Ballesteros, Srikanth Doss, Rishita Anubhai, Sunil Mallya, Yaser Al-Onaizan, and Dan Roth. 2022a. Label semantics for few shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1956– 1971, Dublin, Ireland. Association for Computational Linguistics. Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Qi Zhang, and Xuanjing Huang. 2021. Templatefree prompt tuning for few-shot ner. arXiv preprint arXiv:2109.13532. Tingting Ma, Huiqiang Jiang, Qianhui Wu, Tiejun Zhao, and Chin-Yew Lin. 2022b. Decomposed metalearning for few-shot named entity recognition. In FINDINGS. Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2021. Structured prediction as translation between augmented natural languages. arXiv preprint arXiv:2101.05779. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning, CoNLL 2003, Held in cooperation with HLT-NAACL 2003, Edmonton, Canada, May 31 - June 1, 2003, pages 142–147. ACL. Timo Schick and Hinrich Schütze. 2020. It's not just size that matters: Small language models are also few-shot learners. CoRR, abs/2009.07118. Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning. In NIPS, pages 4077–4087. Amber Stubbs and Özlem Uzuner. 2015. Annotating longitudinal clinical narratives for deidentification: The 2014 i2b2/uthealth corpus. J. Biomed. Informatics, 58:S20–S29. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. Advances in neural information processing systems, 29. Peiyi Wang, Runxin Xu, Tianyu Liu, Qingyu Zhou, Yunbo Cao, Baobao Chang, and Zhifang Sui. 2022. An enhanced span-based decomposition method for few-shot sequence labeling. ArXiv, abs/2109.13023. 
Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, and Ahmed Hassan Awadallah. 2021. Meta self-training for fewshot neural sequence labeling. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1737–1747. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia, PA, 23. Yi Yang and Arzoo Katiyar. 2020. Simple and effective few-shot named entity recognition with structured nearest neighbor learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6365–6375, Online. Association for Computational Linguistics. Sung Whan Yoon, Jun Seo, and Jaekyun Moon. 2019. Tapnet: Neural network augmented with task-adaptive projection for few-shot learning. In International Conference on Machine Learning, pages 7115–7123. PMLR. Amir Zeldes. 2017. The GUM corpus: creating multilayer resources in the classroom. Lang. Resour. Evaluation, 51(3):581–612. ## A Appendix ![10_Image_0.Png](10_Image_0.Png) A.1 Training Details We use AdamW (Loshchilov and Hutter, 2019) for optimization and the learning rate is set to 3 × 10−5, linearly warming up during first 10% of all 104training iterations. We use bert-baseuncased (Devlin et al., 2019) as the PLM encoder. The weight decay is set to 0.01 for all parameters of the model except the biases and layer norm layers. The value of hyperparameter ρ is chosen from {0.1, 0.3, 0.5, 0.7, 0.9} and is set to 0.7 by default (which is good enough for almost all cases). For fair comparison, we use the same Gaussian embedding dimension d = 128 as CONTaiNER (Das et al., 2022). A single experiment run takes about 1 hour on a single RTX3090. ## A.2 Training Curve Our architecture of using multiple prompts also mitigates overfitting. We conduct two experiments on Few-NERD to prove this empirically. Figure 4 demonstrates the training curves for CONTaiNER (Das et al., 2022) and our model. From the curves we can see that the trends of performances over training set are similar while the performance of CONTaiNER on dev set stops increasing much earlier than ProML . Compared with CONTaiNER, our model gets much better in the later epochs. This shows that ProML suffers less from overfitting in the few-shot setting. ## A.3 Ablations Ablation Table for both 1-shot and 5-shot Due to page limit, we leave ablation for 1-shot to the appendix. The full version is in Table 5. Ablation for Replacing Labels with Noises & Removing Separators We made an experiment to replace labels with random strings (both in train and test, same entity type shares same label) to show the effect of label semantics. According to Table 6, the results from "ProML noise-label" are significantly worse than our ProML, but still comparable with the previous SOTA on Few-NERD dataset. This shows that the semantics of the label really help and label-aware prompts can provide useful information even if the labels are noisy. We also made an abbreviation for the selection of separator. In the experiment "ProML no-sep" from Table 6 where all separators were removed, the ## A.4 Case Study We present several randomly-selected cases from ProML and CONTaiNER using the test-set results of WNUT 1-shot domain transfer task. The results are in Table 7. 
We can see that ProML gives better predictions than CONTaiNER (Das et al., 2022) for most cases. Specifically, CONTaiNER often misses entities or incorrectly classifies non-entities. Table 5: Ablation Study for ProML (1-shot and 5-shot). The tuple indicates which prompts are used in the support set and query set. The variant **A, A** refers to using the option prefix prompt only in both the support set and query set. **plain+A (**ρ = 0.5), plain refers to that the original inputs and option prefix prompts are used for the support set with an averaging weight ρ = 0.5, while the query set only use origin inputs. **A+B, A** is our ProML method. All results in this table are produced by the episode evaluation protocol. Setting Model Onto-A Onto-B Onto-C INTRA INTER plain, plain 27.4±0.93 49.91±1.22 32.51±0.98 37.17±0.98 54.11±0.72 A, A 30.99±0.91 52.57±0.82 37.44±1.3 51.0±0.74 65.86±0.56 B, plain 25.8±0.89 34.76±3.3 24.02±1.1 37.99±2.55 58.98±1.57 plain+A (ρ = 0.3), plain 29.79±1.79 50.43±1.04 33.51±1.86 43.35±1.0 59.95±0.35 plain+A (ρ = 0.5), plain 30.81±1.41 50.51±0.83 34.8±1.05 43.25±0.54 60.06±0.49 plain+A (ρ = 0.7), plain 28.32±1.37 50.79±0.87 34.27±0.92 41.42±0.55 59.1±0.5 plain+B (ρ = 0.3), plain 31.03±0.91 40.39±1.5 31.67±1.8 45.16±0.39 62.27±0.63 plain+B (ρ = 0.5), plain 33.58±0.44 45.11±0.85 36.25±0.93 45.1±0.41 62.67±0.78 plain+B (ρ = 0.7), plain 33.42±0.46 49.44±0.96 38.67±0.61 43.07±0.44 61.09±0.5 A+B (ρ = 0.3), A 33.43±1.42 42.07±1.49 35.26±1.1 57.16±1.52 68.04±0.82 A+B (ρ = 0.5), A 33.31±0.57 42.94±2.1 39.27±0.52 58.08±**0.75** 68.43±0.6 A+B (ρ = 0.7), A 35.58±0.4 50.53±1.03 42.12±**0.84** 57.19±0.91 68.76±0.4 plain, plain 42.1±1.03 62.87±0.52 50.58±0.98 53.08±0.85 65.66±0.08 A, A 47.04±1.01 65.42±0.62 55.77±1.19 66.19±0.72 73.9±0.34 B, plain 39.58±2.26 51.17±1.01 40.28±3.55 49.9±1.68 65.31±1.36 plain+A (ρ = 0.3), plain 40.43±1.64 62.41±1.3 49.51±2.78 56.4±1.02 68.15±0.42 plain+A (ρ = 0.5), plain 42.35±1.32 64.37±0.48 51.94±1.06 56.69±0.93 68.73±0.25 plain+A (ρ = 0.7), plain 42.75±2.18 64.52±0.57 53.07±1.79 55.33±1.34 68.37±0.26 plain+B (ρ = 0.3), plain 46.85±1.32 58.0±1.68 50.54±1.71 54.18±1.25 67.03±0.7 plain+B (ρ = 0.5), plain 52.34±0.31 62.07±2.15 55.9±0.5 57.75±0.32 68.22±0.25 plain+B (ρ = 0.7), plain 52.37±0.57 66.39±1.22 57.7±0.71 57.52±0.81 69.04±0.2 A+B (ρ = 0.3), A 52.76±0.82 59.34±1.49 55.52±0.89 66.95±0.82 73.51±0.3 A+B (ρ = 0.5), A 55.29±0.98 62.49±1.2 59.99±0.99 68.41±0.27 74.52±0.44 A+B (ρ = 0.7), A 55.76±1.06 67.09±0.49 62.57±0.47 68.95±0.36 75.11±**0.52** | 1-shot 5-shot | |-----------------| Table 6: Ablations for removing separators in prompts and replacing labels with random noises. All methods are evaluated in episode evaluation protocol for Few-NERD dataset. Table 7: Case study: An illustration of some cases from the WNUT test set. There are 6 entities: person (PER), location (LOC), product (PRO), creative work (CW), miscellaneous (MIS), group (GRO). Here blue color represents correct predictions, while red color represents mistakes. wow emma*P ER* and kaite*P ER* is so very cute and so funny i wish im*P ER* ryan*P ER* these trap came from taiwanLOC . these trap came from taiwanLOC . these trap came from taiwanLOC . great video ! good comparisons between the ipad*P RO* and the ipad*P RO* pro*P RO* ! great video ! good comparisons between the ipad and the ipad pro*P RO* ! thanks for colors superheroes kids videos ! ) like learnCW colorsCW andCW numbersCW ! ) thanks for colorsCOR superheroes kids videos ! ) like learn colors and numbers ! 
) i pronounce it nye-on cat i pronounce it nye-on cat i pronounce it nye-on*P RO* cat*P RO* | Method | 1-shot | 5-shot | Avg. | | | |-------------------|-----------------------|-----------------------|-----------------------|-----------|-------| | INTRA | INTER | INTRA | INTER | | | | ProML | 58.08±0.75 | 68.76±0.4 | 68.95±0.36 75.11±0.52 | 67.73 | | | ProML no-sep | 55.66±0.75 68.03±0.27 | 67.82±0.17 74.82±0.32 | 66.58 | | | | ProML noise-label | 51.99±0.84 | 65.8±0.69 | 62.09±0.44 | 72.5±0.43 | 63.10 | | GroundTruth | ProML | CONTaiNER | | | | |----------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------|-------------|----|------|-------------| | wow emmaP ER and kaiteP ER is so very cute and so funny i wish im ryanP ER great video ! good comparisons between the ipadP RO and the ipadP RO proP RO ! | wow emmaP ER and kaiteP ER is so very cute and so funny i wish im ryanP ER | | | | | | thanks for colors superheroes kids videos ! ) like learnCW colorsCW andCW numbersCW ! ) | great | video | ! | good | comparisons | | between the ipadP RO | and theP RO | | | | | | ipadP RO proP RO ! thanks for colorsCW superheroesCW kids videos ! ) like learnCW colorsCW andCW numbersCW ! ) | | | | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. 
Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
wu-etal-2023-openpi
OpenPI-C: A Better Benchmark and Stronger Baseline for Open-Vocabulary State Tracking
https://aclanthology.org/2023.findings-acl.452
Open-vocabulary state tracking is a more practical version of state tracking that aims to track state changes of entities throughout a process without restricting the state space and entity space. OpenPI (Tandon et al., 2020) is to date the only dataset annotated for open-vocabulary state tracking. However, we identify issues with the dataset quality and evaluation metric. For the dataset, we categorize 3 types of problems on the procedure level, step level and state change level respectively, and build a clean dataset OpenPI-C using multiple rounds of human judgment. For the evaluation metric, we propose a cluster-based metric to fix the original metric's preference for repetition. Model-wise, we enhance the seq2seq generation baseline by reinstating two key properties for state tracking: temporal dependency and entity awareness. The state of the world after an action is inherently dependent on the previous state. We model this dependency through a dynamic memory bank and allow the model to attend to the memory slots during decoding. On the other hand, the state of the world is naturally a union of the states of involved entities. Since the entities are unknown in the open-vocabulary setting, we propose a two-stage model that refines the state change prediction conditioned on entities predicted from the first stage. Empirical results show the effectiveness of our proposed model, especially on the cleaned dataset and the cluster-based metric. The code and data are released at https://github.com/shirley-wu/openpi-c
# Openpi-C: A Better Benchmark And Stronger Baseline For Open-Vocabulary State Tracking Xueqing Wu∗, Sha Li∗**, Heng Ji** University of Illinois Urbana-Champaign {xueqing8,shal2,hengji}@illinois.edu ## Abstract Open-vocabulary state tracking is a more practical version of state tracking that aims to track state changes of entities throughout a process without restricting the state space and entity space. OpenPI (Tandon et al., 2020) is to date the only dataset annotated for open-vocabulary state tracking. However, we identify issues with the dataset quality and evaluation metric. For the dataset, we categorize 3 types of problems on the procedure level, step level and state change level respectively, and build a clean dataset OpenPI-C using multiple rounds of human judgment. For the evaluation metric, we propose a cluster-based metric to fix the original metric's preference for repetition. Model-wise, we enhance the seq2seq generation baseline by reinstating two key properties for state tracking: temporal dependency and entity awareness. The state of the world after an action is inherently dependent on the previous state. We model this dependency through a dynamic memory bank and allow the model to attend to the memory slots during decoding. On the other hand, the state of the world is naturally a union of the states of involved entities. Since the entities are unknown in the open-vocabulary setting, we propose a two-stage model that refines the state change prediction conditioned on entities predicted from the first stage. Empirical results show the effectiveness of our proposed model especially on the cluster-based metric. The code and data are released at https:// github.com/shirley-wu/openpi-c ## 1 Introduction State tracking is the task of predicting the states of the world after an action is performed. Most existing work operate under a simplified **closevocabulary** setting, assuming the state space and involved entities are known (Dalvi et al., 2018; ∗ Equal contribution Bosselut et al., 2018), which limits their applicability. The more practical **open-vocabulary** setting assumes both the entities and the state space are unknown. The OpenPI dataset (Tandon et al., 2020) is, to our knowledge, the first and only dataset for this task. However, we find a series of issues concerning data quality and evaluation, which may hinder progress in this line of research. We identify three types of issues with the dataset: non-procedural documents, out-of-order steps, and ambiguous state changes. In particular, ∼32% of the state changes cannot be reliably inferred from the input, which we find encourages model hallucination. We filter out problematic data points and build a cleaner dataset via crowdsourcing. For evaluation, the greedy matching strategy employed by Tandon et al. (2020) allows matching multiple predicted state changes to a single gold state change, inadvertently inflating the score when the model produces repetitive outputs. We propose a **cluster-based metric** that automatically merges repetitive stage changes and enforces 1-to-1 assignment between clusters. We propose two enhancements to the seq2seq generation model proposed for this task in Tandon et al. (2020). To capture the dependency between world states of consecutive time steps, we introduce an **entity memory** to preserve information about the world state for all previous steps. When predicting the state changes for subsequent actions, the model can access the state information of previous time steps. 
Additionally, while close-vocabulary setting usually provides a list of involved entities to track, such a list is inaccessible in open-vocabulary setting. This requires the model to jointly identify involved entities and predict their state changes. To make the problem more tractable and help model learning, we propose an **entity-conditioned prediction step** where predictions are conditioned on each single entity extracted from the predictions of the first stage. ![1_image_0.png](1_image_0.png) Our contributions can be summarized as follows: (1) we present a clean dataset OpenPI-C for openvocabulary state tracking which fixes the data quality issues in the original OpenPI dataset; (2) we design a clustering-based metric for state tracking evaluation that mitigates the original metric's preference for repetition; (3) we model temporal dependency and entity awareness by enhancing the generation model for open-vocabulary state tracking with a dynamic memory module and two-stage prediction. ## 2 Related Work Most existing work on entity state tracking (Weston et al., 2016; Dalvi et al., 2018; Bosselut et al., 2018) is closed-vocabulary, assuming that the number of possible states and involved entities is limited and known. Under this setting, state tracking can be modeled as a tagging problem (Gupta and Durrett, 2019; Amini et al., 2020; Huang et al., 2021) which is not applicable for the open-vocabulary case. Tandon and Chatterjee (2022) proposed OpenPI dataset for the more practical open-vocabulary setting. They formulate the task as a generation problem to handle the open vocabulary challenge. The design of an external memory component has already been applied to close-vocabulary state tracking (Bosselut et al., 2018; Yagcioglu et al., 2018; Gupta and Durrett, 2019). However, they rely on known entities and only track a limited set of attributes. In this work, we use a dynamic memory that can handle emerging entities with open-vocabulary attributes. ## 3 Task And Dataset The OpenPI dataset (Tandon et al., 2020) is, to our knowledge, the first and only dataset for openvocabulary state tracking. The texts are collected from WikiHow and the state changes are manually annotated. Dataset Issues We identify 3 types of quality issues in the OpenPI dataset. For input, we find that ∼15% input texts are not procedure texts because the steps do show any temporal continuity (shown in Figure 2a). In valid procedure text inputs, ∼7.4% steps are invalid steps in the context of the procedure texts (shown in Figure 2b). They either do not explicitly describe an executable action, or do not follow the temporal order when combined with other steps. For output, ∼32% state changes cannot be reliably inferred from the input (shown in Figure 2c). Such data will encourage the trained model to generate hallucination. To address these issues and improve data quality, we build a cleaned dataset named OpenPIC through three-stage human cleaning: (1) filtering out non-procedure input texts, (2) filtering out invalid steps, and (3) filtering out unreliable state changes. In the three stages, we assign each data point with 3/3/2 annotators respectively and achieve 69.4%/84.9%/71.0% agreement (defined as the ratio of data points where all annotators agree with each other). To verify the annotation quality, we manually annotate 50 instances for each stage. 90%/92%/84% of the crowd-sourcing annotations match our manual annotations for the three stages respectively. 
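A minimal sketch of how these two annotation-quality statistics can be computed is given below; how the crowd judgments are aggregated before comparison with the manual annotations is not detailed above, so treating each data point as carrying a single crowd label is an assumption.

```python
def full_agreement_ratio(annotations):
    """`annotations`: one list of judgments per data point, e.g. [[1, 1, 1], [1, 0, 1]].
    Agreement is the fraction of data points on which all annotators gave the
    same judgment."""
    agree = sum(1 for judgments in annotations if len(set(judgments)) == 1)
    return agree / len(annotations)

def expert_match_rate(crowd_labels, expert_labels):
    """Fraction of data points whose crowd label matches the manual annotation."""
    return sum(c == e for c, e in zip(crowd_labels, expert_labels)) / len(crowd_labels)

print(full_agreement_ratio([[1, 1, 1], [1, 0, 1], [0, 0, 0]]))  # 2/3
print(expert_match_rate([1, 0, 1, 1], [1, 0, 0, 1]))            # 3/4
```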
The statistics of the original OpenPI dataset and our OpenPI-C dataset are presented in Table 1. Detailed annotation settings and filtering criteria are in Appendix B. Though our dataset has fewer data samples, as shown in Figure 2, the removed data samples are mostly of low quality. As shown in Figure 4, including such samples in the dataset encourages hallucination and negatively impacts model performance. ![2_image_2.png](2_image_2.png) ![2_image_1.png](2_image_1.png) ![2_image_0.png](2_image_0.png) ![2_image_3.png](2_image_3.png) | OpenPI | OpenPI-C | | | | | | |-------------------|------------|------|-------|-------|------|------| | Train | Dev | Test | Train | Dev | Test | | | # procedure texts | 644 | 55 | 111 | 539 | 50 | 74 | | # steps | 3216 | 274 | 560 | 2403 | 219 | 345 | | # state changes | 23.9k | 1.7k | 4.2k | 13.8k | 1.2k | 2.0k | Evaluation Issues In Tandon et al. (2020), each predicted state is matched to the ground truth state with the highest similarity. As a result, when the model generates near-duplicate state changes, it will artificially boost the model's score. We propose a **cluster-based metric** to address this issue. We cluster the predicted set and the gold-standard set respectively based on Sentence-BERT (Reimers and Gurevych, 2019a) embedding similarity. After obtaining the predicted and gold-standard clusters, we assign a gold-standard cluster for each predicted cluster through maximal matching which enforces one-to-one mapping. Eventually, we use the assignment to calculate precision, recall and F1 scores. ## 4 Method Generation Baseline As shown in Figure 1, the input to the model is the concatenation of the goal, steps, and a prompt "*Now, what happens?*". In Tandon et al. (2020), each state change will be represented as a templated sequence for generation. For example, *(potato, shape, whole, cut in half)* will be converted to *"shape of potato was whole* before and cut in half afterwards". Entity Memory To capture the temporal dependency across steps, we maintain a variable-size memory bank to store historical state changes. For each entity-attribute pair (*e, a*) that appears in the prediction, we allocate a memory slot after it first appears in the predicted state changes. Suppose it first appears at step k0, then we initialize its memory m at the next step mk0+1 = h k0. Here, h k0 represents the hidden states for (*e, a*) at step k0. In the subsequent steps, we update the memory every time the attribute a of entity e changes. Formally, at step *k, k > k*0, if (*e, a*) changes, then mk+1 =mk + h k/2; otherwise, mk+1 = mk. To compute h k, we take the text expressing its state change from the generated sequence at step k and compress their decoder-side hidden states h1*, . . . ,* hn into h k via attention: $$\alpha_{i}=\operatorname{softmax}_{i}\left(\mathbf{W}^{k-k_{0}}\mathbf{h}_{i}\right),\mathbf{h}^{k}=\sum_{i=1}^{n}\alpha_{i}\mathbf{h}_{i}\quad(1)$$ where Wk−k0is a learnable parameter for the (k − k0)-th step after (*e, a*) appears. To reduce the number of parameters, we share the same Wk−k0 among all *k, k* − k0 > 0. That is, we use W0to initialize the memory when (*e, a*) first appears, and use another parameter W>0to update the memory. We incorporate the memory through the decoder side cross-attention. At step k, the keys and values for the cross-attention module include two parts: the encoder-side hidden states h enc 1*. . .* h enc n (n refers to the number of tokens encoded by the encoder) and the memory vectors mk 1 . . . 
mkM (M refers to the number of created memory slots). We project them into key and value matrices *K, V* with different parameters: $$\begin{array}{c}\{K,V\}=[{\bf W}_{\{K,V\}}^{e n c},{\bf h}_{1}^{e n c},\ldots,{\bf W}_{\{K,V\}}^{e n c},{\bf h}_{n}^{e n c},\\ {\bf W}_{\{K,V\}}^{m}{\bf m}_{1}^{k},\ldots,{\bf W}_{\{K,V\}}^{m}{\bf m}_{M}^{k}]^{\top},\end{array}$$ and feed them into the cross-attention module. In this way, the model can adaptively select between input information and historical state change information stored in the memory. Entity-Conditioned Prediction A challenge for this open-vocabulary task is the lack of access to the entities involved. Compared to directly modeling all state changes p ( Y | x, g ) given the steps x and goal g , we can decompose this problem into first predicting entities, and then modeling the state | full | |--------------------| | in bottle | | cloudy | | shaken | | dissolved | | infused with oils. | change of each entity separately p ( Y e | x, g, e ). Conditioning on the entity simplifies the task and eases model training. We reuse the baseline model and replace the natural language prompt with " Now, what happens to e? ". During inference, we extract all the entities in the prediction and perform entity-conditioned prediction for each entity e . Eventually we merge the N sets of state changes as the final output. Table 2: Main results on OpenPI-C (in %). EMem denotes Entity Memory and ECond denotes Entity- Conditioned prediction. ![3_image_1.png](3_image_1.png) ## 5 Experiments Our experiments are based on pre-trained BART (Lewis et al., 2020). 1 We add another baseline 1 Our proposed techniques can be applied on any encoderdecoder model. Among the base models that we have experithat that concatenates all previous state changes to the input (denoted as "BART + concat states"). Following Tandon et al. (2020), we also use GPT-2 (Radford et al., 2019) as baseline. ![3_image_0.png](3_image_0.png) Figure 4: Outputs of our model (BART+EMem+ECond) trained on OpenPI and OpenPI-C respectively. The model trained on OpenPI produces more hallucination (highlighted in red). The main results are in Table 2 . Overall, our proposed two techniques improve performance on most metrics especially on the cluster-based metrics. Compared to our proposed entity memory (EMem), "BART + concat states" takes the same information (historical steps and historical state changes) as input but significantly decreases the performance compared to the baseline. This is due to the historical state changes being too long and distracting the model. As in Figure 3 , entityconditioned prediction (ECond) is able to produce more accurate outputs based on the same set of entities. We observe that performance gains brought by entity-conditioned prediction are more significant on cluster-based F1 metrics, because the baseline model produces longer and more repetitive outputs (average number of output state changes per step is 7.71 compared to 6.76 of BART+ECond). 
As a result, the original F1 gives the baseline too much | F1 original | F1 cluster-based | | | | |-----------------------------------------|--------------------|------------------|------------|-------| | Exact BLEU ROUGE Exact BLEU ROUGE | | | | | | GPT-2 | 3.92 20.81 39.73 | 5.72 20.31 | 3.40 | | | BART | 4.88 23.35 41.88 | 7.10 22.72 | 35.44 | | | +concat states | 4.73 21.96 | 40.38 | 6.69 20.61 | 32.88 | | BART+EMem | 5.27 24.06 42.71 | 7.65 23.40 35.79 | | | | +Cond | 5.70 23.81 | 42.14 | 8.27 23.56 | 35.80 | | +EMem+ECond 5.65 23.73 42.15 8.26 22.96 | 35.34 | | | | credit. To analyze the effect of dataset cleaning, we compare the outputs of models trained on the original dataset and cleaned dataset. As in Figure 4 , the cleaned dataset encourages the model to stick to the input text and produce less hallucination. To | empty | |------------| | in dropper | | clear | | still | | cnde | | pull | ![3_image_2.png](3_image_2.png) quantify this effect, we manually examined 50 processes randomly sampled from the test set. Of the mented with, we found BART to work the best and hence our | BART baseline outputs: | | |--------------------------|-------------| | spray bottle | state | | ol | location | | water | clarity | | BART + ECond outputs: | | | spray bottle | state | | Col | composition | | water | composition | experiments are based on BART. | Model trained on OpenPl: | | | |----------------------------|----------|-----------| | spoon | wetness | dry | | spoon | location | in drawer | | bowl | volume | empty | | bowl | weight | lighter | | Model trained on OpenPI-C: | | | | water | location | in kettle | | spoon | state | dry | | spoon | wetness | dry | | wet | |----------| | in water | | full | | heavier | | in sink | | wet | | wetness | 50 processes we examined, each process consists of multiple steps, and each step has multiple output state changes. We did a binary classification on each output state change to classify whether it contains hallucinations or not. Overall, the model trained on OpenPI produced 749 hallucinated state changes while the model trained on OpenPI-C produced 393 (47.53% less). ## 6 Conclusion And Future Work In this paper we study the open vocabulary state tracking problem. We build upon the generation formulation introduced by Tandon et al. (2020) and propose two techniques: (1) *entity memory* that models the temporal dependency by storing world states from previous steps, and (2) *entityconditioned prediction* that simplifies the task by predicting state changes conditioned on each single entity. We conduct human annotation to address data quality issues in the existing OpenPI dataset and thus propose a cleaned version of OpenPI dataset. We propose an improved cluster-based metric to overcome the original metric's preference towards repetition. For future work, we consider using external resources such as ConceptNet (Amigó et al., 2009) to assist entity prediction. ## 7 Limitations The scope of this work is limited by the available data. The OpenPI dataset (Tandon et al., 2020) is derived from WikiHow 2, and focuses on everyday scenarios and contains English only. We would like to see resources that span more domains (e.g. scientific domains) and more languages. ## 8 Ethical Considerations Our work does not involve the creation of new datasets. However, we would like to point out that the existing dataset OpenPI is based on WikiHow, which is primary crowdsourced (with partial expert review). 
Thus some of the content is influenced by the cultural and educational background of the annotators. In our human cleaning, we recruit annotators from United States and Canada regions only, which may also bring cultural bias to the content. In particular, some procedures are related to healthcare and neither the procedure nor the model output should be regarded as medical advice. 2https://www.wikihow.com/Main-Page ## Acknowledgement This research is based upon work supported by U.S. DARPA DARPA KAIROS Program No. FA875019-2-1004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. ## References Enrique Amigó, Julio Gonzalo, Javier Artiles, and Felisa Verdejo. 2009. A comparison of extrinsic clustering evaluation metrics based on formal constraints. *Inf.* Retr., 12(4):461–486. Aida Amini, Antoine Bosselut, Bhavana Dalvi, Yejin Choi, and Hannaneh Hajishirzi. 2020. Procedural reading comprehension with attribute-aware context flow. In *Automated Knowledge Base Construction*. Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, and Yejin Choi. 2018. Simulating action dynamics with neural process networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Bhavana Dalvi, Lifu Huang, Niket Tandon, Wen-tau Yih, and Peter Clark. 2018. Tracking state changes in procedural text: a challenge dataset and models for process paragraph comprehension. In *Proceedings* of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1595–1604, New Orleans, Louisiana. Association for Computational Linguistics. Aditya Gupta and Greg Durrett. 2019. Tracking discrete and continuous entity state for process understanding. In *Proceedings of the Third Workshop on Structured* Prediction for NLP, pages 7–12, Minneapolis, Minnesota. Association for Computational Linguistics. Hao Huang, Xiubo Geng, Jian Pei, Guodong Long, and Daxin Jiang. 2021. Reasoning over entity-actionlocation graph for procedural text understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5100– 5109, Online. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Nils Reimers and Iryna Gurevych. 2019a. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. 
In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2019b. Sentencebert: Sentence embeddings using siamese bertnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Kushagri Tandon and Niladri Chatterjee. 2022. Team LRL_NC at SemEval-2022 task 4: Binary and multi-label classification of PCL using fine-tuned transformer-based models. In *Proceedings of the* 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 421–431, Seattle, United States. Association for Computational Linguistics. Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, and Eduard Hovy. 2020. A dataset for tracking entities in open domain procedural text. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 6408–6417, Online. Association for Computational Linguistics. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2016. Towards ai-complete question answering: A set of prerequisite toy tasks. *ICLR*. Semih Yagcioglu, Aykut Erdem, Erkut Erdem, and Nazli Ikizler-Cinbis. 2018. Recipeqa: A challenge dataset for multimodal comprehension of cooking recipes. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018*, pages 1358–1368. Association for Computational Linguistics. ## A Clustering Algorithm For output clustering, we use stsb-distilroberta-base-v2 model provided by sentence-transformers package3 to obtain sentence embeddings. We use cosine 3https://github.com/UKPLab/ sentence-transformers similarity to compute similarity scores. The detailed algorithm is in Algorithm 1. We set the threshold th as 0.7. To evaluate the performance, we manually cluster the outputs for 20 processes (containing 85 steps) and use the annotated clusters as gold clusters to evaluate our algorithm. We calculate BCubed metrics (Amigó et al., 2009) and our algorithm achieves 88.00% precision, 88.68% recall, and 87.39% F1. Algorithm 1: The clustering algorithm. The input set y is the gold or the predicted set of state changes. Each output cluster Ck is a subset of y and all output clusters C form a partition of y. Input: input set y = {y1*, . . . , y*n}; similarity scorer S(·, ·); threshold th Output: clusters C = {C1*, . . . , C*K} 1 C ← [] 2 for i ← 1 to n do 3 new_cluster ← *true* 4 for k ← 1 to |C| do 5 if ∀y ∈ Cj , S(yi, y) *> th* **then** /* Assign yito cluster Ck */ 6 Ck.add(yi) 7 new_cluster ← *f alse* 8 **break** $\frac{1}{4}$ 4. 9 if new_cluster **then** ![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png) 10 C.append({yi}) ## B Data Details OpenPI dataset is released by Dalvi et al. (2018).4 It is an English-only dataset crawled from WikiHow and annotated via crowd-sourcing. As mentioned before, we conduct human annotation to filter out low-quality data. The annotation study is reviewed by an ethics review board and determined to be a not human subjects research. The annotation is conducted on MTurk platform. To ensure that the annotators are native English speakers, we recruit annotators from the United States and Canada. 
We have informed the annotators in the annotation instructions that we are collecting data for research purpose. The annotation includes 4The original dataset and their baseline and evaluation code are released at https://github.com/allenai/ openpi-dataset. We notice that some data has wrong template ("a of e were v before and v ′afterwards" instead of "was"), which will influence model training and evaluation. Thus, we apply a preprocessing step to fix these errors. three stages: Stage 1: filter out non-procedure texts. Each annotator is presented with an input text and asked to judge whether it is a procedure text or not. Each input text is annotated by three annotators; the reward for annotating each input text is $0.03. We remove input texts that are considered as non-procedure texts by most annotators (i.e., at least two annotators). 15% procedure texts are removed at this stage. Stage 2: filter out invalid steps. Each annotator is presented with an procedure text. For each step in the process, the annotator is asked to judge whether it is a valid step. Each input text is annotated by three annotators; the reward for annotating each input text is $0.2. We then remove steps that are considered as invalid steps by most annotators (i.e., at least two annotators). 7.4% steps are removed at this stage. Stage 3: filter out low-quality state changes. Each annotator is presented with an input procedure text and a state change caused by one of the steps. The annotator is asked to decide whether the state change is certain, *uncertain* and *impossible*. Each state change is annotated by two annotators; the reward for annotating each state change is $0.05. To ensure data quality, we remove state changes that receive at least one uncertain or *impossible* rating from the two annotators, which empirically yield the best results. 32% state changes are removed at this stage. Screenshots of the annotation interface are shown in Figure 5. Eventually, we manually examine the data and conducted rule-based filtering according to the following heuristics. We first remove steps with no state changes, and then remove procedure texts with < 3 steps. ## C Experiment Details We use GPT2-medium and BART-large models for the experiments. The number of parameters for GPT-2 baseline, BART baseline, BART+EMem and BART+ECond models are 355M, 406M, 444M and 406M respectively. Each experiment is run on one Telsa P100 GPU and takes about 4 hours. In training, we use the exact training hyperparameters as Dalvi et al. (2018), i.e., the learning rate of 5×10−5, the batch size of 8, and 30 epochs. In decoding, we use beam search with beam size of 4. The decoding strategy is searched from top-p sampling (0.5 ≤ p ≤ 0.9), top-k sampling (5 ≤ k ≤ 10) and beam search (beam= 4). The best decoding strategy is found by manual tuning on the original OpenPI dataset. Results are in Table 4. We show that using beam search significantly boost the performance over top-p or top-k sampling for all systems. We also show in Figure 6 that length penalty can be used to control the number of outputs, and thus to balance between precision and recall. Compared to Dalvi et al. (2018), our reimplemented GPT-2 baseline is different in that: (1) we include the process goal g in the input, and (2) we use beam search with beam size of 4 instead of top-p sampling. We also run the experiments on the original OpenPI dataset and compare with the results of Dalvi et al. (2018). Results are shown in Table 3. 
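To make the decoding search above concrete, the snippet below sketches how the three searched strategies map onto Hugging Face `generate()` arguments for a BART-style model. It is an illustration of the searched configurations, not the authors' released script; the checkpoint name and the input prompt are placeholders.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")

inputs = tokenizer("Cut the potato in half. Now, what happens?", return_tensors="pt")

decoding_configs = {
    "beam_4":    dict(num_beams=4, do_sample=False, length_penalty=1.0),  # setting used in the paper
    "top_p_0.5": dict(do_sample=True, top_p=0.5, top_k=0),                # lower end of the searched p range
    "top_k_10":  dict(do_sample=True, top_k=10),                          # upper end of the searched k range
}

for name, cfg in decoding_configs.items():
    output_ids = model.generate(**inputs, max_length=64, **cfg)
    print(name, tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The `length_penalty` argument in the beam-search configuration presumably corresponds to the length penalty varied in Figure 6 to control the number of generated state changes and thus trade precision against recall.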
## D Scientific Artifacts Scientific artifacts we use in this work include: (1) OpenPI dataset (Tandon et al., 2020) and their baseline and evaluation code released under the MIT License. The dataset is collected from WikiHow and focuses on every-day scenarios and contains English only. Our use is consistent with the resource's intended use, which is to facilitate research on open-vocabulary state tracking tasks. (2) Three pre-trained models: GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2020) provided by transformers5and SentenceBERT (Reimers and Gurevych, 2019b) provided by sentence-transformers, all licensed under the Apache License 2.0. We use the models for research which is consistent with their intended use. Our code and data are released under the MIT license, which is compatible with the artifacts utilized in our research. ![7_image_0.png](7_image_0.png) | F1 original | F1 cluster-based | | | | | | |-----------------------------|--------------------|-------|-------|------|-------|-------| | Exact | BLEU | ROUGE | Exact | BLEU | ROUGE | | | GPT-2 (Tandon et al., 2020) | 4.3 | 16.1 | 32.4 | - | - | - | | GPT-2 | 5.35 | 19.57 | 36.26 | 6.16 | 18.24 | 29.95 | | BART | 5.51 | 23.19 | 40.45 | 7.40 | 21.44 | 33.22 | | +concat states | 4.65 | 19.58 | 36.82 | 6.22 | 18.20 | 29.99 | | BART+EMem | 6.15 | 23.63 | 40.60 | 7.81 | 21.69 | 33.30 | | +ECond | 6.88 | 23.50 | 40.22 | 9.16 | 22.29 | 33.15 | | +EMem+ECond | 7.38 | 23.71 | 40.33 | 9.69 | 22.38 | 33.02 | Table 3: Results (in %) on the original OpenPI dataset. EMem denotes Entity Memory and ECond denotes Entity-Conditioned prediction. | Top-p sampling | Top-k sampling | | | | | |------------------|------------------|-------|-------|--------|-------| | p=0.9 | p=0.5 | k=10 | k=5 | beam=4 | | | GPT-2 | 16.78 | 17.38 | 16.49 | 17.29 | 18.24 | | BART | 19.94 | 20.31 | 19.96 | 19.45 | 21.44 | | Ours | 19.64 | 21.65 | 20.68 | 20.45 | 22.38 | Beam search p=0.9 p=0.5 k=10 k=5 beam=4 Table 4: Results (in %) of GPT-2 baseline, BART baseline, and our proposed method with different decoding ![7_image_1.png](7_image_1.png) strategies on the original OpenPI dataset. We report clustering-based F1 with BLEU. Among all settings, beam search achieves the best performance.
ma-etal-2023-run
I run as fast as a rabbit, can you? A Multilingual Simile Dialogue Dataset
https://aclanthology.org/2023.findings-acl.453
A simile is a figure of speech that compares two different things (called the tenor and the vehicle) via shared properties. The tenor and the vehicle are usually connected with comparator words such as "like" or "as". The simile phenomena are unique and complex in a real-life dialogue scene where the tenor and the vehicle can be verbal phrases or sentences, mentioned by different speakers, exist in different sentences, or occur in reversed order. However, the current simile research usually focuses on similes in a triplet tuple (tenor, property, vehicle) or a single sentence where the tenor and vehicle are usually entities or noun phrases, which could not reflect complex simile phenomena in real scenarios. In this paper, we propose a novel and high-quality multilingual simile dialogue (MSD) dataset to facilitate the study of complex simile phenomena. The MSD is the largest manually annotated simile data (~21K) and it contains both English and Chinese data. Meanwhile, the MSD data can also be used on dialogue tasks to test the ability of dialogue systems when using similes. We design 3 simile tasks (recognition, interpretation, and generation) and 2 dialogue tasks (retrieval and generation) with MSD. For each task, we provide experimental results from strong pre-trained or state-of-the-art models. The experiments demonstrate the challenge of MSD and we will release the data/code on GitHub.
## I Run As Fast As A Rabbit, Can You? A Multilingual Simile Dialogue Dataset Longxuan Ma1and **Weinan Zhang**1∗and **Shuhan Zhou**1,2 and **Churui Sun**3and **Changxin Ke**3and **Ting Liu**1 1 Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology 1 lxma,wnzhang,shzhou,tliu@ir.hit.edu.cn 2 School of Information Science, Beijing Language and Culture University 3 School of Computer Science, Harbin Institute of Technology 3 sunchurui@hit.edu.cn, cxke@stu.hit.edu.cn ## Abstract A simile is a figure of speech that compares two different things (called the tenor and the vehicle) via shared properties. The tenor and the vehicle are usually connected with comparator words such as "like" or "as". The simile phenomena are unique and complex in a real-life dialogue scene where the tenor and the vehicle can be verbal phrases or sentences, mentioned by different speakers, exist in different sentences, or occur in reversed order. However, the current simile research usually focuses on similes in a triplet tuple (tenor, property, vehicle) or a single sentence where the tenor and vehicle are usually entities or noun phrases, which could not reflect complex simile phenomena in real scenarios. In this paper, we propose a novel and high-quality multilingual simile dialogue (MSD) dataset to facilitate the study of complex simile phenomena. The MSD is the largest manually annotated simile data (∼20K) and it contains both English and Chinese data. Meanwhile, the MSD data can also be used on dialogue tasks to test the ability of dialogue systems when using similes. We design 3 simile tasks (recognition, interpretation, and generation) and 2 dialogue tasks (retrieval and generation) with MSD. For each task, we provide experimental results from strong pretrained or state-of-the-art models. The experiments demonstrate the challenge of MSD and we will release the data/code on GitHub. ## 1 Introduction Simile plays an important role in human language to make utterances more vivid, interesting, and graspable (Zhang et al., 2021; He et al., 2022) and is an increasingly studied phenomenon in computational linguistics (Song et al., 2021; He et al., 2022). A simile is a figure of speech that compares two things from different categories (called the tenor and the vehicle) via shared properties (Paul, 1970). A tenor and a vehicle are usually connected with ∗*Corresponding author | Examples | Simile | | |--------------------------------------------|--------------------------------------------|-----| | 1 | The boy runs as fast as a rabbit. | Yes | | 2 | The girl looks like her mother. | No | | A: Look that fireman over the street. | | | | 3 | B: Wow, he is so strong. | Yes | | A: I agree, strong as a bull. | | | | 4 | A: Like a monster, right? | Yes | | B: Yes, that man is really rude. | | | | 5 | A: Arguing with parents is not wise. | Yes | | B: It is like throwing an egg at a rock. | | | | 6 | A: He walks into the crowd and disappears. | Yes | | B: It is like a fish swims into the ocean. | | | Table 1: Examples to illustrate simile. The underline font represents **tenors**. The italic font means *vehicles*. A and B are different Speakers. comparator words such as "like" or "as". For example, in the first example of Table 1, the tenor is "The boy", the vehicle is "a rabbit", the event is "run", the comparator is "as ... as" and the shared property is "fast". 
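To keep this terminology straight in what follows, the snippet below shows one possible way to represent these simile components programmatically. The field names are ours for illustration only and do not correspond to the released MSD schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SimileInstance:
    """An annotated simile, using the components defined above."""
    tenor: str                       # the thing being described, e.g. "The boy"
    vehicle: str                     # the thing it is compared to, e.g. "a rabbit"
    comparator: str                  # the connective, e.g. "as ... as" or "like"
    shared_property: Optional[str]   # e.g. "fast"; may be left implicit in dialogue
    event: Optional[str] = None      # e.g. "runs"

example = SimileInstance(tenor="The boy", vehicle="a rabbit",
                         comparator="as ... as", shared_property="fast", event="runs")
print(example)
```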
The current simile research usually focuses on the simile in a triplet (tenor, shared property, vehicle) (Song et al., 2021) or a single sentence (Bizzoni and Lappin, 2018; Liu et al., 2018; Li et al., 2022). For example, the simile recognition (Birke and Sarkar, 2006; Liu et al., 2018) task is judging whether a sentence contains a simile, such as distinguishing which of the first and second examples in Table 1 contains a simile. However, a simile in a triplet or a single sentence is not enough to reflect the complex simile phenomena in the real scenario. In this paper, we study similes in reallife dialogue where a tenor and a vehicle can be mentioned by different speakers, exist in different sentences, or occur in reversed order. The third example in Table 1 shows a simile dialogue where the tenor "That fireman" and the vehicle "a bull" are in different utterances. The fourth example in Table 1 shows a simile where the tenor and the vehicle are mentioned by different speakers and the vehicle occurs before the tenor. What is more, different from previous research where the tenor and vehicle are usually single entities (Song et al., 2021) or simple nominal phrases (Bizzoni and Lappin, 2018), the tenor and vehicle in a real-life dialogue may be a verbal phrase or a long sentence. A verbal phrase can function as the subject or object of a verb, such as the fifth example in Table 1. A sentence is a set of words expressing a statement, a question, or an order, usually containing a subject and a verb. The sixth example in Table 1 shows sentences as the tenor and vehicle. The verbal phrase and sentences can convey richer content and emotions, making the real-life dialogue more vivid and interesting. Studying these complex simile phenomena in a dialogue scenario needs to consider both the dialogue context and the various forms of the tenor and vehicle, and will lead the simile research to a brand new level. However, similes in real-life dialogue scenarios have not been studied by previous research so there are no public benchmarks available nowadays. To facilitate the simile study, we release a humanannotated, high-quality simile dialogue dataset, which contains both English and Chinese data. The complex simile phenomena in real-life dialogue scenarios not only bring more difficulties to traditional simile tasks such as recognition, interpretation (Su et al., 2016), and generation (Li et al., 2022) but also raise challenges for dialogue research, e.g. generation and retrieval tasks. To address the simile dialogue tasks, dialogue models need to understand the simile relations between entities/phrases/sentences. Our contributions are: - To the best of our knowledge, we are the first to study the simile phenomenon in dialogue and propose a high-quality multi-lingual simile dialogue (MSD) dataset to assist both the simile and dialogue research. - There are 5 tasks with the proposed MSD dataset. For simile research, we design the dialogue simile recognition/interpretation/generation tasks. For dialogue research, we design the response retrieval and generation tasks. - We verify how strong pre-trained models and the state-of-the-art simile models perform on the 5 tasks we designed. Experimental results reveal that simile in dialogue is a difficult task and requires further study. Our code and data will be released on GitHub1. | Metaphor Category | Example | |---------------------|-----------------------------------------------------------| | Noun phrase | The nurse is an angel. 
| | Adjective | These words are cold. The soldier had a warm heart. | | Verbal | The process was killed. They plant the seeds of change. | | Adverb-Verb | He speak fluidly. | | Verbal phrase | Taking care of pets is like raising children. | | Sentence | I rushed to the terminal like a cheetah chasing its prey. | ## 2 Related Work 2.1 Simile And Metaphor The simile is a kind of metaphor that is frequently used in human languages to make utterances more vivid and graspable (Niculae and DanescuNiculescu-Mizil, 2014) and expresses human sentiments (Li et al., 2012; Mohammad et al., 2016). Previous researchers defined different metaphor categories. We present examples for these categories in the first four lines of Table 2. For example, Bizzoni and Lappin (2018) categorized metaphor into Noun phrases, Adjectives, Verbs, and Multi-word; Li et al. (2022) categorized metaphor into Nominal, Verbal (Subject-Verb-Object), Adjective-Noun, and Adverb-Verb. Previous work usually denoted the Noun phrase metaphor as a simile (Li et al., 2022; He et al., 2022; Chen et al., 2022). *Following previous work, we also categorize Noun phrase* metaphor as a simile. Meanwhile, we extend the tenor and vehicle to verbal phrases and sentences according to the simile phenomena in dialogue. The examples of verbal phrases and sentences in simile are shown in the last two lines of Table 2. ## 2.2 Tasks In Metaphor/Simile The tasks in metaphor are also suitable for simile, such as recognition (Birke and Sarkar, 2006; Liu et al., 2018), interpretation (Su et al., 2016), and generation (Li et al., 2022). The recognition task is also called identification (Steen, 2010; Li et al., 2022) or detection (Tsvetkov et al., 2014; Mohler et al., 2016), which aims to identify whether a given phrase or sentence contains a metaphor/simile. The interpretation is also called explanation (Liu et al., 2018) which usually assigns an appropriate inter- | Dataset | Lan. | Form | Task | Size | Man. | | | |------------------------------------------------------------|--------|----------|--------|--------|--------|---------|-------------------| | CM | Ch | sentence | I | 85 | Yes | | | | SRC | Ch | sentence | R | 11,337 | Yes | | | | CMC | Ch | sentence | G | 11,581 | Yes | | | | MCP | En | sentence | I | 1,633 | Yes | | | | SLS | En | sentence | G | 87K | No | | | | WPS | Ch | sentence | G | 5M | No | | | | Ours | Ch/En | Dialogue | R/I/G | 19,565 | Yes | Dataset | Dialogue examples | | Original | Coarse | Fine | Final | | | | | | LCCC | 12M | 20K | 4K | 1,214 | | | | | PchatbotW | 139M | 1M | 82K | 12,830 | | | | | Reddit-dialogue | 15M | 71K | 32K | 8,510 | | | | | Table 4: Statistics of the dialogue datasets we collected. | | | | | | | | pretation to a metaphorical expression (Bizzoni and Lappin, 2018) or infers the shared properties of the tenor and the vehicle (Song et al., 2021; He et al., 2022; Chen et al., 2022). The generation task also has different forms. For example, when giving an input tenor, it can generate a simile sentence conditioned on the input tenor (Li et al., 2022); when giving both the tenor and the shared property in simile, it can generate the vehicle (Song et al., 2021; Chen et al., 2022); when providing a literal sentence, it can generate a metaphoric sentence which paraphrases that input (Chakrabarty et al., 2020; Stowe et al., 2021), or generating a specific simile according to the location where the simile interpolation should happen (Zhang et al., 2021). 
In this paper, we also define recognition, interpretation, and generation tasks. However, different from previous work that only focused on similes in a triplet tuple or a sentence, we investigate a more challenging scenario where the simile happens in a multi-turn dialogue. ## 2.3 Survey Of Simile Datasets Table 3 shows the comparison between our MSD dataset with the existing simile datasets. Su et al. (2016) constructed a small Chinese Metaphor (CM) data with 85 nominal and 35 verbal metaphors for the interpretation task. Liu et al. (2018) introduced Simile Recognition in Chinese (SRC) data containing sentences with a special comparator 像 (like). The Chinese Nominal Metaphor Corpus (CMC) (Li et al., 2022) data merges other Chinese metaphor datasets (Liu et al., 2018) for simile generation. He et al. (2022) proposed a simile property probing task and constructed Multi-choice Probing (MCP) datasets. Chakrabarty et al. (2020) collected Reddit comments containing similes and then autoconstructed a parallel simile corpus with a pretrained model powered by commonsense knowledge (Bosselut et al., 2019). However, their Selflabeled Similes (SLS) dataset is limited to a "like a" pattern which appears only at the end of a sentence. Zhang et al. (2021) introduced the Writing Polishment with Similes (WPS) dataset where models need to locate the simile position in a sentence and then generate a simile in that position. The SLS and WPS are much larger than other existing data but they are not manually annotated. Our MSD data is extracted from more than 166M dialogue data (shown in Table *4). It is the first multi-lingual simile* dialogue data and the largest **manually annotated** simile data *so far. What's more, benefiting from* the strict annotation schedule, the MSD contains necessary simile components so that it can be used for simile recognition/interpretation/generation simultaneously. ## 3 Multilingual Simile Dialogue Dataset In this section, we introduce the collection, annotation, and statistics of our MSD data. ## 3.1 Data Collection Since we aim to extract the simile in a real-life dialogue, we adopt the existing open-domain dialogue corpus collected from social platforms such as Reddit.com and Weibo.com. For English similes, we use the 3 turns version Reddit Dialogue dataset (Dziri et al., 2018) which contains more than 15 million dialogues. For Chinese similes, we use two datasets: PchatbotW and LCCC. The PchatbotW (Qian et al., 2021) is the largest dialogue data we can find and contains 139 million 2 turns dialogues from Weibo. The LCCC (Wang et al., 2020) is also from Weibo and contains 12 million 2 or 3 turns dialogues. We treat the last utterance in a dialogue as a response and the utterances in front of the response as a dialogue context. We extract dialogues from these large-scale datasets with a rigorous data collection pipeline, which is built based on a set of rules we will introduce in this section. Notice that we do not make any changes to the original dialogue data and only extract those dialogues with comparators in the response. In the first step, we select the dialogue examples where the responses contain comparators such as 比喻 类比类比 推理 ![3_image_0.png](3_image_0.png) 问题 改进 Figure 1: The data collection and annotation process. 知识稀疏条件下的知识筛选 "像...一样"/"like"/"as...as"2. We only select dialogue examples with context lengths between 15 and 30 words so that the dialogue context is both informative and not too long for the annotators to read. 
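A minimal sketch of this first-step filter is given below, assuming each dialogue is a (context, response) pair of plain strings. The comparator patterns are simplified for illustration; the actual pipeline also matches the Chinese pattern 像...一样 and further surface variants.

```python
import re

# Simplified English comparator patterns; the real pipeline also matches the
# Chinese pattern "像...一样" and other surface variants.
COMPARATOR_PATTERNS = [
    re.compile(r"\blike\b", re.IGNORECASE),
    re.compile(r"\bas\b.+\bas\b", re.IGNORECASE),
]

def keep_dialogue(context, response, min_ctx_words=15, max_ctx_words=30):
    """First-step filter: the response must contain a comparator and the
    context length must fall in the annotator-friendly range."""
    n_words = len(context.split())
    if not (min_ctx_words <= n_words <= max_ctx_words):
        return False
    return any(pattern.search(response) for pattern in COMPARATOR_PATTERNS)

context = ("We are late for the bus again, you always walk so slowly "
           "when we leave the house in the morning.")
response = "I run as fast as a rabbit, can you?"
print(keep_dialogue(context, response))  # True
```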
These examples are denoted by the coarse version of the simile dialogue data and the statistics are shown in Table 4. In the second step, we use machine translation3 to ensure that a sentence contains a comparator. We only reserve the dialogue examples that still contain comparator when they are translated into another language. For example, an English simile candidate sentence "I run as fast as a rabbit" contains a comparator "as...as". When translating it into Chinese, this sentence is "我跑得像兔子一样快" and still contains a comparator "像"(like). After the machine translation checking, we got *the fine* version of the simile dialogue candidates. The fine version needs further improvement since the candidate tenor/vehicle connected by the comparator is not always a simile. For example, the sentence "The Poodle is as tall as a Corgi" is not a simile since the sentence compares the height of two different kinds of dogs. So we conduct a third step to further remove examples that are not similes. In the third step, we adopt a semantic dependency tool4to locate the candidate tenor/vehicle, then we compute the similarity between them to retain the examples with low similarity so that the remaining candidate tenor/vehicle are from different categories. The similarity is computed with dense representations of the candidate tenor/vehicle from BERT (Devlin et al., 2019). After the above pipeline, we obtain *the final version* of simile dialogue data for annotation. The statistics of the fine/final version we obtained are also shown in Table 4. 非 结 构 化 文 本 增 强 的 开 放 域 对 话 知识筛选和知识注入的融合 知识筛选和知识注入不一致现象 知识增强从语义到语用级别的扩展 Unstructured Text Enhanced Open-Domain Dialogue System: A Systematic Survey. ACM TOIS (2022)(一作) A Compare Aggregate Transformer for Understanding Document-grounded Dialogue. EMNLP (Findings) 2020(一作) Exploiting Side-information for Understanding Document-grounded Dialogue 拟10月份投稿 **TOIS**(一作) 对话冗余信息影响知识筛选准确率 ``` Knowledge-centric Response Selection for Document-grounded Dialogue. EMNLP 2022. Review scores: 4/1.5/3.5/3(一作) Exploiting Dialogue Act for Document-grounded Dialogue. EMNLP 2022. Review scores: 4/3/2(一作) ``` 知识筛选和注入缺少显式信号的指导 知识筛选和知识注入不一致现象 ``` 知识利用的 可解释性 从语义级别到语用级别应用 ``` ## 3.2 Data Annotation 研究问题 研究内容 基于对话冗余信息过滤的知识筛选 (第2章) 基于显式信号指导的知识筛选和注入 (第3章) 语义级别 基于知识为中心视角的对话建模 (第4章) 基于知识类比能力的对话建模 (第5章) 语用级别 We recruited 7 students majoring in English for annotating the English data and recruit other 6 welleducated native speakers (graduate students) for annotating the Chinese data. We randomly select 100 examples in the final version, finding that the vehicle candidates we extracted have an acceptable accuracy (above 80%). However, the accuracy of the tenor candidate is not good (below 60%). Hence, we provided annotators with "dialogue context", "response", "comparator", and "vehicle candidate" for each dialogue. We use the annotation tool proposed by Yang et al. (2018) to simplify the operation so that the annotators can use a mouse and a few shortcuts on the keyboard to annotate. There are some difficulties when annotating similes in the dialogue scenario apart from the fact that the tenor may exist in different sentences or occur after the vehicle. For example, the tenor may not exist in the dialogue even if the response is a simile. We ask the annotators to delete these examples. There are other situations that a dialogue that contains commonly used phrases or slang that makes the dialogue seem like a simile but not. For instance, "make like a tree" is not a simile but slang means "leave". 
Besides, English words usually have different meanings. For example, according to the Oxford Dictionary, the word "body" means "the whole physical structure of a human or an animal" as well as "a group of people who work or act together, often for an official purpose". So the sentence "This association is like the body that represents its members." is not a simile. Furthermore, there are many abbreviations used on social platforms such as FTW (for the win) and OP (original poster). These difficult linguistic phenomena require the annotators to have a good understanding of the dialogue context so that they could determine whether a response contains a simile. We conduct preliminary training for the recruited annotators so that they are aware of the professional 研究问题 面临挑战 研究内容 基于对话意图理解的知识筛选 (第2章) ``` 基于对话动作指导的知识筛选和注入 (第3章) 知识稀疏性 ``` 基于知识为中心视角的对话建模 (第4章) 基于知识类比能力的对话建模 (第5章) | Category | Ch | En | |-------------------------------|-------|-------| | Simile | 5,515 | 3,576 | | Literal | 5,904 | 4,570 | | Tenor in context | 32.8% | 48.9% | | Tenor in response | 67.2% | 51.1% | | Vehicle before Tenor | 5.7% | 0.9% | | Tenor before Vehicle | 94.3% | 99.1% | | Ave. context words in simile | 20.76 | 22.22 | | Ave. response words in simile | 18.86 | 17.83 | standards. We ask the annotators to first check whether the response in this dialogue example contains a simile. The example will be annotated "Literal" if the response is not a simile. Otherwise, they should check whether the vehicle candidate in the response is correct. They need to annotate the correct vehicle (can be word/phrase/sentence) if the candidate is not accurate. If the candidate vehicle is correct, they can annotate the tenor (can be word/phrase/sentence) if it exists. We present the annotation schedule in Figure 1. Our annotation schedule ensures that the tenor and vehicle are in the data. Quality Evaluation. During the annotation, each time we send a small "*.txt" file containing hundreds of dialogue examples to the annotators and conduct a random sampling test after they return the annotated data5. The annotator who returns a low-quality file will be asked to check their annotation again before we send the next file. The whole annotation takes 35 days, and each dialogue is annotated by 3 annotators. When determining the final result, the majority will be adopted when there is a disagreement among the three annotators6. The overall inter-rater agreement measured by Fliess' Kappa is 0.61, indicating a substantial agreement among the annotators. ## 3.3 Data Statistics After the annotation, we get a total of 19,565 (8,146 English and 11,419 Chinese) dialogues. The MSD has multiple comparators for both English and Chinese data. In MSD English data, the "like" mode is around 52.4% and the "as" mode is around 47.6%. In MSD Chinese data, "像...一样" accounts for the 5During annotation, we randomly selected 5% of the examples from one annotated file and checked if the annotator made accurate annotations for these random examples. The annotators were preliminary trained so that they were expected to make as few errors as possible. We expected no more than 1 error per 20 examples in the random sampling test. Otherwise, the file will be sent back for revision. 6There are a few cases where the three annotators disagree with each other, we decide these cases by ourselves. 
| Model | Precision | Recall | F1 | |--------------------|-------------|----------|--------| | MSD-En | | | | | ChatGLM(zero-shot) | 0.4793 | 0.8441 | 0.6114 | | BERT(fine-tuned) | 0.7154 | 0.6759 | 0.6951 | | MSD-Ch | | | | | ChatGLM(zero-shot) | 0.4992 | 0.8772 | 0.6363 | | BERT(fine-tuned) | 0.7754 | 0.7519 | 0.7635 | Table 6: Simile recognition results. most7. The proportion of each comparator is similar in simile and literal data. Table 5 shows some of the statistics of the MSD data. Please refer to the data link for more details. ## 4 Tasks And Results In this section, we introduce the 5 tasks defined with our MSD dataset. Including the definition of the task, the baselines, evaluation metrics, experimental results, and analysis. The implementation details are shown in the Appendix A. ## 4.1 Simile Recognition Task Following previous work (Liu et al., 2018; Li et al., 2022), we define simile recognition as a binary classification task where the model needs to distinguish whether an input sequence contains a simile. The input is a multi-turn dialogue and the output is True (simile) or False (literal). We use two baselines: 1) BERT is widely used and proven to be effective in classification tasks. We randomly split our MSD-En/Ch data into train/validation/test (8:1:1) sets and use the train set to fine-tune BERT. We use the output vector of the first input token <cls> of BERT to calculate the classification score for the input dialogue (see Appendix A); 2) a large language model (ChatGLM8). The input to ChatGLM is a concatenation of three parts: the definition of simile "A simile is a figure of speech that compares two different things via their shared properties."; a requirement "answer yes or no to this question: is the following dialogue example contains a simile?"; a simile dialogue examples such as in Table 1. Then we calculate the results according to the prediction of the baselines. Following previous work (Liu et al., 2018), we use Precision/Recall/F1 to measure the results. Table 6 shows the simile recognition results. We can see that BERT(fine-tuned) performs much better on Precision and F1 than ChatGLM on both MSD-En and MSD-Ch9. It is reasonable since the BERT models are fine-tuned on our training set. On the other hand, the ChatGLM is much better on Recall with a zero-shot setting. Overall, the classification results on both BERT and ChatGLM still have a lot of room to improve. Using syntactic structure information to locate simile components may help this task. ## 4.2 Simile Interpretation/Generation Tasks Following the previous simile interpretation task (Song et al., 2021; He et al., 2022) and simile generation task (Song et al., 2021), we define Simile Interpretation/Generation (SI/SG) as a Multi-choice task with the "as...as" mode in our MSD-En10 data (we test with 450 examples) since the shared property naturally exists in the comparator. For **interpretation task**, we have a simile dialogue where the shared property between two "as"s is removed and replaced with a blank. The model needs to select a property from 4 choices (one correct answer and three distractors) for the blank. We construct the distractors with ConceptNet (Speer et al., 2017). In particular, we first use the tenor and some relations to find the related concept to the tenor and then use the HasProperty relation to find the distractors. Notice that for the examples where the tenor is a phrase of a sentence we could not find in ConceptNet, we use keywords (e.g. 
the subject of the sentence, the noun in the phrase) as the tenor to search ConceptNet. Similar to the simile interpretation task, we remove the vehicle in a simile dialogue and leave a blank for the **simile generation task**. The model needs to select a proper vehicle for this blank from 4 candidates (one correct answer and three distractors). We also construct the distractors with ConceptNet. We use the vehicle and certain relations in the ConceptNet to find the related concepts to the vehicle as the distractors. Notice that for the examples where the vehicle is a phrase or sentence that we could not find in ConceptNet, we use the vehicles from other dialogues in MSD dataset as 9For Chinese, we use https://huggingface.co/bert-basechinese 10We did not conduct simile interpretation/generation on MSD-Ch in this paper since we did not annotate the shared property in Chinese data and we leave it for future work. | Model | Interpretation | Generation | |------------|------------------|--------------| | BERT-large | 0.5603 | 0.2967 | | BERT-Probe | 0.5804 | 0.3375 | | BERT-ANT | 0.4621 | 0.3337 | ![5_image_0.png](5_image_0.png) Table 7: Simile interpretation and generation results (Hit@1) on MSD-En. ## The Distractors. To ensure the distractors are true-negative, we randomly select 50 dialogue examples and manually check the quality of the distractors. We find that 92% of the distractors are well selected and the rest 8% are not as ideal as we expected but can still serve as distractors. More details about using ConceptNets are shown in Appendix C. The first baseline is a BERT-large model which takes the whole dialogue with the shared property or the vehicle masked and predicts the masked words. The second baseline is the BERT-Probe (He et al., 2022) that fine-tunes BERT with the simile interpretation task. To compare both SI and SG tasks with this baseline. We further finetune the BERT-Probe model with the SG task using the data proposed by He et al. (2022). The third baseline is BERT-ANT (Chen et al., 2022) which is trained with masked word prediction with metaphor data and can solve the Simile Interpretation and Generation tasks in a unified framework of simile triple completion. For example, when giving tenor=fireman and vehicle=bull, BERT-ANT can generate a list of words including the shared property like "strong" or "brave". All baselines are based on a BERT-large-uncased model. Since there are multiple masked words in our SI/SG experiments. We encode the predicted words and the candidates into dense vectors with a sentence-transformer (huggingface.co/sentencetransformers/all-MiniLM-L6-v2). Then we compute the cosine similarity between the predicted words and each of the candidates. The candidate with the highest similarity is chosen as the answer. We use Hit@1 to measure the accuracy. Table 7 shows the results of simile interpretation/generation tasks. We can see that BERT-Probe performs better than BERT-large in this task, showing that a model pre-trained on simile data can better align the simile components in an input sequence and predict the missing component, even though the training data is much different from our proposed data. The BERT-ANT performs similarly to the other two models on SG tasks but not as well at SI. It is because the training data of BERTANT is more of a metaphor data rather than simile data, a large portion of the metaphor data does not have shared properties. 
Hence, BERT-ANT is more powerful in connecting tenor and vehicle but is less powerful when predicting shared properties. Overall, the results on both simile interpretations/generations still have a lot of room to improve. How to exploit the semantic information in context to help these tasks requires further study. ## 4.3 Response Retrieval Task Following previous work in retrieval (Guo et al., 2016), we define Response Retrieval as a ranking task. The input is a multi-turn dialogue context and multiple response candidates (including the correct one) and the model needs to rank all the candidates so that the correct one has the highest score. In particular, for each "dialogue context" in MSD simile data (both English and Chinese), we randomly select 19 responses from other dialogue as the negative examples. We use BERT-base for our baseline in response retrieval since it is widely used and proven to be effective in retrieval tasks. We concatenate dialogue context and each of the response candidates as the input sequence to the pre-trained model. Then we use the output of the first input token <cls> to compute the score for the input sequence as in Appendix A. Finally, the response candidate with the highest score will be chosen as the answer. We first randomly split the Reddit dialogue data into train/validation/test (14.99M/5K/5K) sets. Then we used the BERT model to train an English dialogue retrieval model with this train/validation data. The model is denoted by BERT(Reddit). We choose a checkpoint with the best performance on the validation set. Then we use this checkpoint to compare its performance on both the Reddit Test set and the MSD-En set. Similarly, we combine LCCC and PchatbotW and randomly select 12M/5K/5K from the combined data as train/validation/test sets and train a Chinese dialogue retrieval model. The trained BERT11 model is denoted by BERT(Ch) and used to do the comparison of the performance on the LCCC+PchatbotW 11https://huggingface.co/bert-base-chinese ![6_image_0.png](6_image_0.png) Table 8: Response retrieval results. Test set and the MSD-Ch set. We measure the accuracy of the retrieval with Recall@1/2/5. Table 8 shows the results of the response retrieval task. The performance of BERT(Reddit) and BERT(LCCC) on MSD is lower than their performance on Reddit and LCCC+PchatbotW Test sets, respectively. The results show that the data distribution in MSD is different from the data used to extract it and selecting a simile response is much harder than selecting a proper response. The low Recall results show that the dialogue retrieval task on MSD simile data needs further study. This requires a model that judges not only the relevance between context and response but also the plausibility of similes. ## 4.4 Response Generation Task The traditional response generation task uses dialogue context as input and outputs the response of the context. In this section, we also introduce a new generation task that completes the response sentence behind the comparator. Taking the fifth simile dialogue "Arguing with parents is not wise. It is like throwing an egg at a rock." as an example, we give the model "Arguing with parents is not wise. It is like" as input and ask the model to generate the rest "throwing an egg at a rock.". This is different from the Writing Polishment with Similes Zhang et al. (2021) task since our task is a dialogue scene. The model needs to understand the difference between different speakers and complete the simile sentence. 
We use the simile data in MSD for the generation experiments. We conduct comparative experiments on the Reddit-dialogue Test set and the LCCC+PchatbotW Test set we used in the response retrieval task to show the difference between datasets. For the traditional response generation task, we use the DialoGPT (Zhang et al., 2020) and GODEL | Model | PPL | BLEU(1/2/3/4)(%) | ROUGE(1/2/L)(%) | METEOR(%) | Distinct(1/2)(%) | |---------------------------------------------|--------|----------------------------|----------------------|-------------|--------------------| | Reddit-dialogue Test set (En) | | | | | | | DialoGPT | 236.74 | 0.01 / 0.00 / 0.00 / 0.00 | 2.05 / 0.00 /1.79 | 1.24 | 6.67 / 23.84 | | GODEL | 3.70 | 0.53 / 0.02 / 0.00 / 0.00 | 2.80 / 0.00 / 1.98 | 2.41 | 6.54 / 36.01 | | MSD-En (simile data) | | | | | | | DialoGPT | 329.55 | 11.29 / 3.58 / 1.45 / 0.70 | 7.53 / 0.57 / 6.39 | 8.48 | 8.39 / 28.16 | | GODEL | 6.10 | 17.10 / 5.99 / 2.61 / 1.37 | 10.91 / 0.87 / 8.94 | 11.78 | 7.00 / 23.37 | | MSD-En (simile data) on Response Completion | | | | | | | DialoGPT | - | 17.29 / 8.50 / 5.24 / 3.35 | 23.71 / 5.13 / 23.04 | 12.85 | 14.64 / 43.51 | | LCCC+PchatbotW Test set (Ch) | | | | | | | CDialGPT(Ch) | 102.00 | 3.01 / 0.64 / 0.16 / 0.05 | 5.42 / 0.21 / 4.77 | 2.24 | 11.10 / 40.41 | | GPT-2(Ch) | 129.28 | 5.20 / 1.50 / 0.59 / 0.26 | 7.09 / 0.87 / 6.14 | 3.04 | 23.23 / 66.14 | | MSD-Ch (simile data) | | | | | | | CDialGPT(Ch) | 113.75 | 3.07 / 0.72 / 0.26 / 0.09 | 5.46 / 0.24 / 4.85 | 2.30 | 11.36 / 40.58 | | GPT-2(Ch) | 101.24 | 5.89 / 1.11 / 0.27 / 0.10 | 6.35 / 0.19 / 5.47 | 2.98 | 12.15 / 48.18 | | T5-base(Ch) | 118.60 | 7.61 / 2.57 / 1.40 / 0.94 | 8.66 / 0.94 / 7.66 | 4.25 | 22.15 / 66.59 | | BART-large(Ch) | 44.28 | 10.16 / 3.34 / 1.64 / 1.00 | 11.13 / 1.09 / 8.82 | 6.56 | 15.26 / 51.91 | Table 9: Dialogue generation and completion results. (Peng et al., 2022) for English data; use T5-base12, BART-large13 (Lewis et al., 2020), GPT-214 (Radford et al., 2019), and CDialGPT15 (Wang et al., 2020) for Chinese data. We choose these baselines since 1) they are widely used and proven to be effective in dialogue generation tasks. For example, GODEL (Grounded Open Dialogue Language Model) is pre-trained for dialogue and is initiated from T5 (Raffel et al., 2020). CDialGPT and BARTlarge are pre-trained with LCCC-large; 2) the different size models can provide more insight into the experiments. For our proposed response generation (completion) task, we conduct the experiment on English data with DialoGPT. We use the following automatic evaluation metrics employed by dialogue research. Perplexity (PPL), BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Lavie and Agarwal, 2007), and Distinct (Li et al., 2016). PPL measures the probability of the model predicting the real response. BLEU measures the n-gram overlap between the generated response and the reference one. ROUGE is based on the calculation of the recall rate of the common sub-sequence of generating response and the real one. METEOR further considers the alignment between the generated and the real responses to improve BLEU. Distinct measures the diversity of responses by calculating the proportion of distinct n-grams in the total number of n-grams. Higher BLEU/ROUGE/METEOR/Distinct means better performance. The PPL is provided for comparing models with the same vocabularies, and the results are also useful for future research. 
12huggingface.co/shibing624/prompt-t5-base-chinese 13huggingface.co/HIT-TMG/dialogue-bart-large-chinese 14huggingface.co/shibing624/gpt2-dialogbot-base-chinese 15huggingface.co/thu-coai/CDial-GPT_LCCC-large Table 9 shows the generation and completion results. On most metrics of English data, DialoGPT and GODEL perform better on MSD-En than on Reddit-dialogue. CDialoGPT and GPT-2 have comparable performance on the LCCC+PchatbotW Test set and MSD-Ch. This is different from the response retrieval tasks where the MSD data is more difficult than the original data used to extract MSD. The reason may be the dialogue context in MSD provides more information than the context in the original data, so the generation models could leverage the rich context information to construct an informative response. Experiments also verify that larger models (GODEL/T5/BART) have a better performance. However, even the performance of the best baseline can still be improved. We analyze the generation results. Although there are some interesting cases, most of the results are not similes. It means the simile dialogue generation task requires a specific model design to capture the simile relations in context. We provide a case study in Appendix D. For the response completion task, when giving the comparator, DialoGPT has a big performance gain. It proves that the simile generation can benefit from the guide. Please refer to our code/data link for more experimental results about this simile dialogue completion task. ## 5 Conclusion We propose manually annotated multilingual simile dialogue (MSD) data for both simile and dialogue research. We design 3 simile tasks (recognition, interpretation, and generation) and 2 dialogue tasks (retrieval and generation) with MSD. Experiments with strong baselines show the challenge of each task. Future works include but are not limited to 1) Dataset enlargement (e.g., more annotated examples with more kinds of comparators); 2) Model designing (e.g., models with a specific structure to address the proposed tasks); 3) New task designing (e.g., detecting tenor in the coarse/fine data). We encourage using the MSD in future simile and dialogue research. ## Limitations Due to time constraints, we were unable to implement some unreleased models as baselines for the proposed tasks. We did not conduct simile interpretation/generation on MSD-Ch in this paper since we could not automatically annotate the shared property in Chinese data like the "as...as" mode in English. We are currently working on this annotation and plan to release the Chinese simile interpretation/generation results on the data link. The coarse/fine version data we introduced in this paper can still be used for enlarging the MSD data. We will study to utilize them for more simile data and richer language phenomena. ## Ethics Statement We provide and emphasize some details of our work to address potential ethical concerns. First, all the data sources used in the data collection process are publicly available. We did not make any changes to the data sources and only extracted dialogue examples from these data. We carried out strict quality control during the extraction and annotation process. We made sure that there are no sensitive words even though the original data sources have already conducted this kind of checking. However, using our data to train or fine-tune a pre-trained generation model may still generate semantic errors or unpleasant similes or responses. 
One reason is that simile is a difficult task that compares two different things, mistakes could happen even when humans use similes. The other reason is that the knowledge stored in the original parameters of the pre-trained models may dominate the generation. We protect the privacy rights of annotators and paid 0.55 Chinese Yuan for annotating each dialogue data. The income of each annotator was above 100 Chinese Yuan per hour (On January 20, 2023, 100 yuan can be converted into 14.73 dollars). ## Acknowledgements This paper is supported by the Science and Technology Innovation 2030 Major Project of China (No. 2021ZD0113302) and the National Natural Science Foundation of China (No. 62076081, No. 61772153, and No. 61936010). ## References Julia Birke and Anoop Sarkar. 2006. A clustering approach for nearly unsupervised recognition of nonliteral language. In *EACL 2006, 11st Conference of* the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference, April 3-7, 2006, Trento, Italy. The Association for Computer Linguistics. Yuri Bizzoni and Shalom Lappin. 2018. Predicting human metaphor paraphrase judgments with deep neural networks. In Proceedings of the Workshop on Figurative Language Processing, Fig-Lang@NAACLHLT 2018, New Orleans, Louisiana, 6 June 2018, pages 45–55. Association for Computational Linguistics. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Çelikyilmaz, and Yejin Choi. 2019. COMET: commonsense transformers for automatic knowledge graph construction. In *Proceedings* of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4762–4779. Association for Computational Linguistics. Tuhin Chakrabarty, Smaranda Muresan, and Nanyun Peng. 2020. Generating similes effortlessly like a pro: A style transfer approach for simile generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6455– 6469. Association for Computational Linguistics. Weijie Chen, Yongzhu Chang, Rongsheng Zhang, Jiashu Pu, Guandan Chen, Le Zhang, Yadong Xi, Yijiang Chen, and Chang Su. 2022. Probing simile knowledge from pre-trained language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5875–5887. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Nouha Dziri, Ehsan Kamalloo, Kory W. Mathewson, and Osmar R. Zaïane. 2018. Augmenting neural response generation with context-aware topical attention. *CoRR*, abs/1811.01063. Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In *Proceedings of the 25th ACM* International Conference on Information and Knowledge Management, CIKM 2016, Indianapolis, IN, USA, October 24-28, 2016, pages 55–64. ACM. Qianyu He, Sijie Cheng, Zhixu Li, Rui Xie, and Yanghua Xiao. 2022. 
Can pre-trained language models interpret similes as smart as human? In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7875–7887. Association for Computational Linguistics. Alon Lavie and Abhaya Agarwal. 2007. METEOR: an automatic metric for MT evaluation with high levels of correlation with human judgments. In WMT@ACL, pages 228–231. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics. Bin Li, Haibo Kuang, Yingjie Zhang, Jiajun Chen, and Xuri Tang. 2012. Using similes to extract basic sentiments across languages. In Web Information Systems and Mining - International Conference, WISM 2012, Chengdu, China, October 26-28, 2012. Proceedings, volume 7529 of *Lecture Notes in Computer Science*, pages 536–542. Springer. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In HLT-NAACL, pages 110–119. The Association for Computational Linguistics. Yucheng Li, Chenghua Lin, and Frank Guerin. 2022. Cm-gen: A neural framework for chinese metaphor generation with explicit context modelling. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 6468–6479. International Committee on Computational Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Lizhen Liu, Xiao Hu, Wei Song, Ruiji Fu, Ting Liu, and Guoping Hu. 2018. Neural multitask learning for simile recognition. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1543–1553. Association for Computational Linguistics. Saif M. Mohammad, Ekaterina Shutova, and Peter D. Turney. 2016. Metaphor as a medium for emotion: An empirical study. In *Proceedings of the Fifth Joint* Conference on Lexical and Computational Semantics, *SEM@ACL 2016, Berlin, Germany, 11-12 August 2016. The *SEM 2016 Organizing Committee. Michael Mohler, Mary Brunson, Bryan Rink, and Marc T. Tomlinson. 2016. Introducing the LCC metaphor datasets. In Proceedings of the Tenth International Conference on Language Resources and Evaluation LREC 2016, Portorož, Slovenia, May 2328, 2016. European Language Resources Association (ELRA). Vlad Niculae and Cristian Danescu-Niculescu-Mizil. 2014. Brighter than gold: Figurative language in user generated comparisons. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 2008–2018. ACL. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311–318. ACL. Anthony M Paul. 1970. Figurative language. In *Philosophy & Rhetoric*, page 225–248. 
Baolin Peng, Michel Galley, Pengcheng He, Chris Brockett, Lars Liden, Elnaz Nouri, Zhou Yu, Bill Dolan, and Jianfeng Gao. 2022. GODEL: largescale pre-training for goal-directed dialog. *CoRR*, abs/2206.11309. Hongjin Qian, Xiaohe Li, Hanxun Zhong, Yu Guo, Yueyuan Ma, Yutao Zhu, Zhanliang Liu, Zhicheng Dou, and Ji-Rong Wen. 2021. Pchatbot: A largescale dataset for personalized chatbot. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 2470– 2477. ACM. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Wei Song, Jingjin Guo, Ruiji Fu, Ting Liu, and Lizhen Liu. 2021. A knowledge graph embedding approach for metaphor processing. IEEE ACM Trans. Audio Speech Lang. Process., 29:406–420. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In *Proceedings of the Thirty-First* AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 4444–4451. AAAI Press. Gerard Steen. 2010. A method for linguistic metaphor identification: From mip to mipvu. volume 14. John Benjamins Publishing. Kevin Stowe, Nils Beck, and Iryna Gurevych. 2021. Exploring metaphoric paraphrase generation. In *Proceedings of the 25th Conference on Computational* Natural Language Learning, CoNLL 2021, Online, November 10-11, 2021, pages 323–336. Association for Computational Linguistics. Chang Su, Jia Tian, and Yijiang Chen. 2016. Latent semantic similarity based interpretation of chinese metaphors. *Eng. Appl. Artif. Intell.*, 48:188–203. Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 2227, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 248–258. The Association for Computer Linguistics. Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, and Minlie Huang. 2020. A large-scale chinese short-text conversation dataset. In *Natural Language Processing and Chinese Computing - 9th CCF International Conference, NLPCC* 2020, Zhengzhou, China, October 14-18, 2020, Proceedings, Part I, volume 12430 of *Lecture Notes in* Computer Science, pages 91–103. Springer. Jie Yang, Yue Zhang, Linwei Li, and Xingxuan Li. 2018. YEDDA: A lightweight collaborative text span annotation tool. In Proceedings of ACL 2018, Melbourne, Australia, July 15-20, 2018, System Demonstrations, pages 31–36. Association for Computational Linguistics. Jiayi Zhang, Zhi Cui, Xiaoqiang Xia, Yalong Guo, Yanran Li, Chen Wei, and Jianwei Cui. 2021. Writing polishment with simile: Task, dataset and A neural approach. In *Thirty-Fifth AAAI Conference* on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14383–14392. AAAI Press. 
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT : Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 270–278. Association for Computational Linguistics. ## A Implementation Appendix The implementations of the pre-trained models in this paper are all based on the public Pytorch implementation 16. The hyper-parameters follow the default settings. We did not truncate any of the dialogue because the dialogue length in MSD data is much smaller than the maximum input length of the pre-trained models. We use a single Tesla v100s GPU with 32gb memory to conduct experiments, the batch size is 8 for all experiments. Checkpoints are chosen with the best performance on the corresponding validation set. In simile recognition and dialogue retrieval tasks, the first input position of the model G is a special token "<cls>", and the corresponding output vector Ecls is fed into a nonlinear layer to compute the final score of the input sequence: $${\mathcal{G}}(i p p u t){\mathrm{=}}\,\sigma(W_{2}\cdot\mu(W_{1}\cdot E_{c l s}+b_{1})+b_{2}),\,\,\,\,(1)$$ where W1,2 and b1,2 are training parameters; σ/µ is the sigmoid/tanh function, respectively. When training the simile recognition model, the loss is cross-entropy between predicted labels yi and ground-truth label y¯i: $${\mathcal{L}}_{s i m i l e}=-{\frac{1}{N}}\sum_{i=1}^{N}({\bar{y}}_{i}l o g P(y_{i}))\qquad(2)$$ Where N is the number of simile examples. When training the dialogue retrieval model, the loss is calculated as follows: $${\mathcal{L}}_{d r}{=}\,{\overset{N}{\underset{i=1}{\sum}}}\log(\frac{e^{{\mathcal{G}}({\mathbf{C}}_{i},{\mathbf{R}}_{i}^{+})}}{e^{{\mathcal{G}}({\mathbf{C}}_{i},{\mathbf{R}}_{i}^{+})}{+}\sum_{j=1}^{\alpha}e^{{\mathcal{G}}({\mathbf{C}}_{i},{\mathbf{R}}_{j}^{-})}}),\tag{3}$$ where C is the dialogue context, R is the response, and α is a hyper-parameter meaning the number of different negative samples for a positive one. We set α = 9 in our training. 16https://github.com/huggingface/transformers | Comparators | Proportion (%) | |---------------|------------------| | 像...一样 | 49.5 | | 跟...一样 | 34.8 | | 跟...似的 | 11.6 | | 像...似的 | 2.7 | | 像 | 0.3 | | 仿佛 | 0.3 | | 简直是 | 0.3 | | 如...般 | 0.2 | | 像...般 | 0.1 | | 如...一样 | 0.1 | | 仿佛...一样 | 0.1 | Table 10: Comparators in the Chinese MSD data. Relation: *Definition* RelatedTo: *The most general relation. There is some* positive relationship between A and B, but ConceptNet can't determine what that relationship is based on the data. Symmetric. learn <-> erudition Causes: *A and B are events, and it is typical for A to* cause B. exercise -> sweat Desires: *A is a conscious entity that typically wants* B. Many assertions of this type use the appropriate language's word for "person" as A. person -> love DistinctFrom: *A and B are distinct member of a set;* something that is A is not B. Symmetric. red <-> blue; August <-> September SymbolOf: *A symbolically represents B. red -> fervor* MannerOf: A is a specific way to do B. Similar to "IsA", but for verbs. auction -> sale LocatedNear: A and B are typically found near each other. Symmetric. chair <-> table CausesDesire: *A makes someone want B. having no food* -> go to a store MadeOf: *A is made of B. 
bottle -> plastic* Table 11: Relations in ConceptNet we used to find distractors. "<->" means Symmetric relation for A and B. "->" means Asymmetric relation that A entails B. ## B Statistic Appendix In Table 10, we present all the comparators and their proportions in MSD-Chinese. ## C Conceptnet Appendix We use ConceptNet to construct the distractors in simile interpretation and generation tasks. ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges (Speer et al., 2017). Two concepts (A and B) are connected with relations such as "IsA" or "PartOf". In the *simile interpretation* task, we need to find three distractors for the shared property of tenor and vehicle. We use the "Antonym" relation to extract the antonym of the property as the first distractor. We adopt the nine relations in Table 11 to find the related concepts to the tenor and then use the "HasProperty" relation to find the properties of these related concepts. Finally, we randomly choose two of the nine properties as the distractors. For example, if the (tenor, property, vehicle) is (fireman, strong, bull). We first have "weak" as the Antonym and the first distractor of "strong". Then we find that "fireman" is related to (RelateTo) "fire" and "fire" has a property (HasProperty) "hot". So "hot" is the second distractor for "strong". We can get up to nine distractors and choose two of them along with the Antonym as the final three distractors. Notice that for the examples where the tenor is a phrase or sentence that we could not find in ConceptNet, we use keywords (e.g. the subject of the sentence, the noun in the phrase) as the tenor to do the search in ConceptNet. In the *simile generation* task, we also use the relations in Table 11 to find the related concepts to the vehicle as the distractors. Different from the interpretation task, for the examples where the vehicle is a phrase or sentence that we could not find in ConceptNet, we use the vehicles from other dialogues in MSD dataset as the distractors. ## English Example In Msd Dialogue context: You'll never love someone exactly the same as her, because there will never be another person exactly like her. But, that doesn't mean you won't ever be as in love with someone else. Ground truth response: It will just be a different kind of love, as unique to that person as a fingerprint, yet just as strong and deep as before, or maybe even stronger and deeper. Response from DialoGPT: I know, I know. I just don't want to be like her. Completion from DialoGPT (additionally input "it will just be a different kind of love, as unique to that person as"): a gift which will never be forgotten. Response from GODEL: I'm not sure if this is true, but I'm pretty sure I've been in love with someone exactly the same as her. Chinese example in MSD Dialogue context: 鲜艳的月季花,伴你快乐的生活。 (Bright Chinese rose flowers accompany your happy life.) Ground truth response: 愿我们的生活像花一样美好! (May our life be as beautiful as flowers!) Response from CDialGPT: 谢谢*(Thanks!)* Response from GPT-2: 谢谢姐姐*(Thanks. Sister!)* Response from T5-base: 花攒锦簇*(The flowers gather* in abundance.) Response from BART-large: 月季花开的时候,我们 的心情也会像花一样美丽!(When the Chinese rose opens, our mood will be as beautiful as the flower!) Table 12: Case study of simile response generation task. ## D Generation Cases Appendix As we introduced in the simile response generation section, most of the generated results are not similes since the baselines are not designed for this task. 
In Table 12, we provide two cases to illustrate the dialogues in MSD and the generation results from different models. In the first English example, both DialoGPT and GODEL generate fluent responses that contain the comparator "like" or "as". However, both models fail to produce a simile response like the ground-truth one. The Chinese example is extracted from the LCCC data; BART-large performs the best and gives an informative response with a simile in it. GPT-2 gives a generic response and T5-base gives an informative one, while CDialGPT also gives a generic response even though it is trained with the LCCC dataset. The two cases in Table 12 further verify that simile dialogue generation is challenging. In the response completion task, however, once the comparator is added to the input, DialoGPT outputs a simile and makes the dialogue more vivid and interesting.
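To make the completion setup concrete, the following is a minimal sketch of feeding a dialogue context plus a partial response ending in the comparator to DialoGPT, so the model only has to complete the vehicle of the simile. The checkpoint name, turn formatting, and decoding settings are assumptions for illustration and stand in for the DialoGPT model trained on MSD-En in our experiments.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed off-the-shelf checkpoint; our experiments use DialoGPT adapted to MSD-En.
name = "microsoft/DialoGPT-medium"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

context = ("You'll never love someone exactly the same as her, because there will "
           "never be another person exactly like her.")
partial = "It will just be a different kind of love, as unique to that person as"

# Response completion: context and the partial response (up to the comparator)
# are concatenated, and the model generates only the continuation.
prompt = context + tokenizer.eos_token + partial
input_ids = tokenizer.encode(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=30,
                            pad_token_id=tokenizer.eos_token_id)
completion = tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(completion)
```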
chen-yang-2023-controllable
Controllable Conversation Generation with Conversation Structures via Diffusion Models
https://aclanthology.org/2023.findings-acl.454
Generating coherent conversation is an important and challenging long text generation task, as it has various applications such as daily entertainment, children's education, or building conversational AI to facilitate human-computer interaction. However, current generation models often fail to effectively utilize rich linguistic and world knowledge to generate conversations the way humans do. In this work, we introduce a novel conversation generation framework to effectively incorporate human knowledge and conversation structures with both controllability and interpretability for better conversation generation. Specifically, we first generate prototype conversations from short descriptions. We then gradually and strategically incorporate different levels of conversation structures, including action triples, dialogue acts, and discourse relations, via diffusion models to directly edit the prototype conversations. We demonstrate the effectiveness of our framework through experiments on two datasets by comparing our method with state-of-the-art baseline models.
# Controllable Conversation Generation With Conversation Structures Via Diffusion Models Jiaao Chen Georgia Institute of Technology jchen896@gatech.edu Diyi Yang Stanford University diyiy@cs.stanford.edu ## Abstract Generating coherent conversation is an important and challenging long text generation task, as it has various applications such as daily entertainment, education, or building conversational AI to facilitate human-computer interaction. However, current generation models often fail to effectively utilize rich linguistic and world knowledge to generate conversations just like humans. In this work, we introduce a novel conversation generation framework to effectively incorporate human knowledge and conversation structures with both controllability and interpretability for better conversation generation. Specifically, we first generate the prototype conversations from short descriptions. We then gradually and strategically incorporate different levels of conversation structures including the action triples, *dialogue acts*, and discourse relations via diffusion models to directly edit the prototype conversations. We demonstrate the effectiveness of our framework through experiments on two datasets by comparing our method with the state-of-the-art baseline models1. ## 1 Introduction Generating long-form and coherent text is an important step in many natural language generation (NLG) applications (Guan et al., 2022). While recent research has shown impressive progress in generating short texts, it is still challenging for generation models to write coherent long text which requires comprehensively incorporating linguistic and world knowledge (Charniak, 1972). Our work takes a closer look at long conversation generation (Gunasekara et al., 2021), one of the most challenging long text generation tasks. The task is to generate an entire coherent conversation from a given short description, i.e., a summary, of it. Conversation generation has various applications from daily 1The code is available at https://github.com/ SALT-NLP/Conversation_Generation_Diffusion entertainment, and story generation, to customer services. However, real human/human conversation logs are scarce; crowdsourcing conversational data is time-consuming, costly, and hard to ensure data quality (Gunasekara et al., 2021). Thus, better conversation generation models would allow us to generate massive natural conversational data more automatically and efficiently, which further helps build better conversational AI systems. Even though there are a growing number of studies that focused on long text generation such as story generation (Guan et al., 2022; Yang et al., 2022; Fan et al., 2018; Li et al., 2022a) using large pre-trained models (Fan et al., 2018; Yang et al., 2022), event planning (Guan et al., 2022; Fan et al., 2018; Li et al., 2022a) and recursive revision (Yang et al., 2022), directly applying them to generate long conversation may not work well due to the inherent different structures between stories and conversations. For instance, previous long text generation usually focused on generating stories that talk about one single topic with five sentences to one paragraph. They are shorter compared to conversations, which usually cover multiple topics between different speakers (over ten turns) (Feng et al., 2020). Furthermore, there are diverse discourse relations between different speakers (Chen and Yang, 2021b), making it even more challenging to generate long and coherent conversations. 
While there is a line of work on dialogue generation, it mainly concentrates on generating the next utterance autoregressively based on the given context (Ji et al., 2021; Liu et al., 2020; Saha et al., 2022; Zhang et al., 2020; Ramakrishnan et al., 2022) with sequence-to-sequence models. Such methods usually neglect the conversation structures (Adewumi et al., 2022), and thus might easily lose focus and fail to produce long and coherent conversations after several rounds of generation (Gunasekara et al., 2021). Moreover, the earlier generated utterances cannot be further edited to adapt to the later generated utterances. It is also unclear whether and how these sequence-to-sequence models are "gradually planning" to produce the long conversations. Therefore, how to design controllable methods tailored to the structures in conversations for generating long and coherent conversations becomes especially important. To this end, our work introduces a Controllable Conversation Generation Framework with Diffusion Models (**Diffuse-CG**, shown in Figure 1) to incorporate different conversational structures in a non-autoregressive manner, inspired by recent advances in deep generative models (Li et al., 2022b; Gong et al., 2022; He et al., 2022). Specifically, we first generate a prototype conversation using a pre-trained sequence-to-sequence model based on the input description. Then we leverage diffusion models to gradually enrich the prototype conversation with conversation structures. The diffusion process allows more flexible conversation generation by not being limited to a fixed left-to-right generation order; it also allows the model to gradually incorporate different levels of conversation structures to control the granularities, including *action triples* to add more specific topics and events (Gee, 2014; Chen and Yang, 2021b), *dialogue acts* to make the utterances more human-like (Allen and Core, 1997; Sacks et al., 1978; Chen and Yang, 2021a), and *discourse relations* to generate longer conversations with better coherency (Kirschner et al., 2012; Stone et al., 2013; Asher et al., 2016a). To make the diffusion process more adapted to conversation generation and more stable, we further improve the general diffusion model (Li et al., 2022b) with *linguistic-informed noise*, where we perturb the prototype conversation in the forward process by soft-masking action words, soft-masking utterances, and shuffling discourse relations, rather than adding pure Gaussian noise (Li et al., 2022b). Experiments on two conversation datasets, SAMSum (Gliwa et al., 2019) and DialogSum (Chen et al., 2021), demonstrate the effectiveness of our framework; moreover, by visualizing the intermediate generated conversations, we show that Diffuse-CG achieves better interpretability for understanding how the model is structuring and generating long-form conversations.

## 2 Related Work

Long Text Generation Long-form text generation has been a longstanding challenge in natural language generation, where models need to generate long, coherent and open-ended narratives (Guan et al., 2022; Yang et al., 2022; Fan et al., 2018; Li et al., 2022a; Guan et al., 2021).
Recent studies have shown impressive success in generating more coherent stories through adopting hierarchical model structures (Li et al., 2015), leveraging large pre-trained models (Fan et al., 2018; Yang et al., 2022), planing first and then generating framework (Shao et al., 2019; Tan et al., 2020; Goldfarb-Tarrant et al., 2020; Li et al., 2022a) and incorporating external knowledge (Guan et al., 2022; Fan et al., 2018; Xu et al., 2020). However, previous studies mainly focus on generating singlespeaker stories and neglect one important form of long text—conversations. Such methods cannot be directly applied to generate multi-speaker conversations because of the complex linguistic structures in conversations such as back-and-forth interactions (Feng et al., 2020; Chen and Yang, 2021b). Our work fills this gap by utilizing conversation structures to generate coherent conversations. Dialogue Response Generation Numerous studies have been conducted on generating short responses conditioned on previous context (Ji et al., 2021; Liu et al., 2020; Saha et al., 2022; Zhang et al., 2020; Ramakrishnan et al., 2022) such as adding user's persona (Wolf et al., 2019), paraphrasing template responses (Lippe et al., 2020) and using example guidance (Gupta et al., 2021; Cai et al., 2020). While achieving state-of-the-art performances, they suffer from generating the entire conversation because they can only generate one utterance at a time and easily lose focus when generating multiple rounds of utterances or the entire conversation (Gunasekara et al., 2021). This is largely due to the fact that former errors cannot be corrected when generating utterance by utterance autoregressively, and the lack of awareness towards rich conversation structures like long-distance relations in conversations (Stone et al., 2013; Asher et al., 2016a). To this end, we design a controllable and interpretable conversation generation framework that makes use of rich structures to generate the entire conversation in a non-autoregressive way. Diffusion Model Diffusion models (SohlDickstein et al., 2015; Ho et al., 2020; Song et al., 2021) are recently-introduced state-of-the-art non-autoregressive generative models and have shown substantial success for visual modalities (Ramesh et al., 2022; Rombach et al., 2022). They are generally more interpretable and controllable as they gradually denoise random vectors to desired output via multiple intermediate steps (He et al., 2022; Austin et al., 2021). However, it is still difficult to apply diffusion models to textual data, because the input space in text is discrete and text is generally more complex in structures. Although there are a few exceptions to model language generation with diffusion process (Li et al., 2022b; Gong et al., 2022; He et al., 2022; Austin et al., 2021; Hoogeboom et al., 2021) where continuous and discrete space is bridged through embedding and rounding (Li et al., 2022b; Gong et al., 2022; Dieleman et al., 2022), such approaches often utilize Gaussian noise in the forward process, which usually fails to leverage the linguistic structure in text to noise the input textual data and makes the diffusion models unstable and costly (He et al., 2022). Building upon these prior works, we utilize diffusion models for interpretable and controllable conversation generation and design a novel *linguistic-informed noise* for adapting diffusion models to generate textual conversations. 
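The "embedding and rounding" bridge mentioned above can be summarized in a few lines: tokens are mapped to continuous embeddings, corrupted with Gaussian noise in the forward process (formalized in the next section), and mapped back to discrete tokens by a nearest-neighbor rounding step. Below is a minimal sketch with illustrative shapes, random embeddings, and an sqrt-style schedule in the spirit of Li et al. (2022b); none of these constants are the exact values used in our experiments.

```python
import torch

def gaussian_forward(x0, t, alpha_bar):
    """Closed-form corruption q(x_t | x_0) in embedding space:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, I)."""
    eps = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps

def round_to_tokens(x, embedding):
    """'Rounding' step: map each continuous vector back to its nearest word embedding."""
    dists = torch.cdist(x, embedding)   # (seq_len, vocab)
    return dists.argmin(dim=-1)         # discrete token ids

# toy usage with an sqrt-style schedule over T steps (clamped to stay non-negative)
T, vocab, d, seq_len, s = 500, 50265, 768, 32, 1e-4
alpha_bar = (1.0 - torch.sqrt(torch.arange(T, dtype=torch.float32) / T + s)).clamp(min=0.0)
emb = torch.randn(vocab, d)                       # stand-in for a learned embedding table
x0 = emb[torch.randint(0, vocab, (seq_len,))]     # an embedded "sentence"
x_t = gaussian_forward(x0, t=250, alpha_bar=alpha_bar)
ids = round_to_tokens(x_t, emb)
```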
## 3 Background: Diffusion Models Diffusion models are the recent state-of-the-art deep generative models via iterative denoising the latent variables (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021). Basically, corruption (usually Gaussian noise) is added to the input data distribution gradually during a forward process. Then a diffusion model is trained through learning to recover the corrupted distribution to the original input data distribution step by step. A small amount of information that is perturbed during the corresponding forward process is reconstructed in every diffusion step. There is usually a forward noising process and a diffusion denoising process in a diffusion model. For a given sampled input data, x0 ∼ q(x0), a Markov chain of latent variables {x1, · · ·, xT } are generated in the forward noising process (q (xt| xt−1)) by progressively adding a small amount of Gaussian noise to perturb the input data: $$q\left(x_{t}\mid x_{t-1}\right)={\mathcal{N}}\left(x_{t};{\sqrt{1-\beta_{t}}}x_{t-1},\beta_{t}I\right)$$ where {βt ∈ (0, 1)} T t=1 is a noise schedule controlling the amount of added noise in every step. Through the forward process, xT becomes an isotropic Gaussian distribution. Note that there are no trainable parameters in the forward process. Then a reversed diffusion process, which is learned by a parameterized model (p(xt−1|xt)), is learned to denoise xT to the original data x0: $$p_{\theta}\left(x_{t-1}\mid x_{t},t\right)={\mathcal{N}}\left(x_{t-1};\mu_{\theta}\left(x_{t},t\right),\Sigma_{\theta}\left(x_{t},t\right)\right),$$ where µθ(.) and Σθ(.) are the learned models. The diffusion model is trained to maximize the marginal likelihood of log pθ(x0). And Ho et al. expand and reweight the objectives to obtain a meansquared error (L2) loss: $${\mathcal{L}}_{\mathrm{d}}\left(x_{0}\right)=\sum_{t=1}^{T}\mathbb{E}_{q\left(x_{t}|x_{0}\right)}\left\|\mu_{\theta}\left(x_{t},t\right)-{\hat{\mu}}\left(x_{t},x_{0}\right)\right\|^{2}$$ where µˆ is the mean of the posterior q(xt−1|x0, xt), and µθ is the predicted mean of pθ(xt−1|xt), which is predicted by the parameterized neural models. ## 4 Our Approach This section introduces our controllable conversation generation model to generate natural and coherent conversations, as shown in Figure 1. Basically, we first utilize a sequence-to-sequence model to generate a prototype version of the conversation based on the given short description (Section 4.1). We then gradually incorporate the conversation structure guidance to edit the prototype conversation in order from lower levels to higher levels (action triples, dialogue acts, and discourse relations) through diffusion models (Section 4.2). ## 4.1 Prototype Conversation Generation We first train a sequence-to-sequence model f(F(.)) to generate the prototype conversation Cp based on the given conversation summary s, Cp = f(F(s)), where F(.) is an encoder-decoder network and f(.) is a feed-forward network to map the hidden representations to actual words. We initialize f(F(.)) with a pre-trained encoderdecoder model, i.e., BART-base (Lewis et al., 2020). f(F(.)) is learned using the ground truth summary-conversation pairs, (*s, C*g) through minimizing the cross entropy L = −Plog P(Cg|s). Once the prototype conversation generation model is learned, we utilize F(.) to generate the hidden representations X0 = {w0*, ..., w*l} of the prototype conversation C with l words: X0 = {w0*, ..., w*l} = F(s). 
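A minimal sketch of this prototype generation step with a HuggingFace BART-base checkpoint is shown below. The checkpoint name, example summary, and decoding settings are illustrative only; in practice f(F(.)) is first fine-tuned on the summary-conversation pairs as described above.

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

# "facebook/bart-base" is only the initialization; a model fine-tuned on
# summary -> conversation pairs would be loaded here in practice.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

summary = "Amanda baked cookies and will bring Jerry some tomorrow."  # hypothetical input s
enc = tokenizer(summary, return_tensors="pt")

# 1) Decode a prototype conversation C_p = f(F(s)) with beam search.
with torch.no_grad():
    proto_ids = model.generate(**enc, num_beams=4, max_length=128)

# 2) Re-run the encoder-decoder on the generated tokens to obtain the decoder
#    hidden states X0 (one d-dimensional vector per generated word), which serve
#    as the initial latent variable for the diffusion stage.
with torch.no_grad():
    outputs = model.model(input_ids=enc["input_ids"],
                          attention_mask=enc["attention_mask"],
                          decoder_input_ids=proto_ids)
X0 = outputs.last_hidden_state           # shape (1, l, d); d = 768 for BART-base
prototype = tokenizer.decode(proto_ids[0], skip_special_tokens=True)
```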
Note that X0 ∈ Rl×dis a matrix used as the initiate latent variable in Section 4.2, where l is the number of words in the conversation and d is the dimension of the hidden representation. ## 4.2 Editing With Diffusion Models With the hidden representation, X0, of the prototype conversation, we then introduce our diffusion model that gradually edits the prototype conversation to form the desired long conversation. Specifically, we first add linguistic noise to X0 to get the noisy intermediate latent variables X1:T in the forward process (Section 4.2.2), and then gradually denoise XT to Xˆ0 with different levels of conversation structure information in the diffusion process (Section 4.2.3). Last, we generate the long conversation Cl with the denoised Xˆ0: Cl = f(Xˆ0). ## 4.2.1 Structures In Conversations This part introduces the three types of widely-used structures with different granularity in conversations utilized in our work2: the action triples, dialogue acts, and discourse relations. The **action** triples are the "WHO-DOING-WHAT" triplets (e.g., "Sam-Asking for-Betty's number") in conversations that express specific socially situated identities and activities (Chen and Yang, 2021b). The dialogue acts describe the functions and roles of every utterance in one conversation. For example, natural conversations might often have interruption utterances with dialogue acts like *acknowledgment*, backchannel, *response acknowledgment* and etc. (Allen and Core, 1997; Sacks et al., 1978). **Discourse relations** describe the relations between different utterances in one conversation (Asher et al., 2016b). For example, two utterances may be related to each other with the *Question Answer Pair*. 4.2.2 Forward Process We first add noise to prototype conversation X0 = {w0*, ..., w*l} to generate the noisy intermediate latent variables X1:T in the forward process: Xt+1 = q(Xt). To make the diffusion process more stable and efficient, the added noise needs to corrupt the prototype conversation and gives the later diffusion process appropriate flexibility to generate conversations, while avoiding removing all the prior knowledge in X0. Thus we design and apply different types of linguistic-informed noises to perturb the structured information in conversation. Here we introduce three types of noise strategies based on the conversation structures into the forward process: Soft-Masking Action Words For soft-masking action words, we only add noise to the action words wiin the prototype conversation in order to perturb the action information. These action words are the words that appear in the action triples extracted from the prototype conversation using OpenIE 3 (Angeli et al., 2015; Chen and Yang, 2021b). At step t, we add a small amount of Gaussian noise to the action words wiin the prototype conversation: $$q_{a}(w_{i,t+1}|w_{i,t})=N(w_{i,t+1};\sqrt{(1-\beta_{t})}w_{i,t},\beta_{t}I)\tag{1}$$ where βtis the amount of noise added at step t. Soft-Masking Utterances For soft-masking utterances, we only add noise to all the words wiin one utterance u in the prototype conversation so that the dialogue acts of the utterance are perturbed. The utterance to mask is consistent for all the steps for one prototype conversation, while we randomly reselect the utterance to mask in different epochs. 
At step t, we add a small amount of Gaussian noise to all the words wiin the utterance u: $$q_{u}(w_{i,t+1}|w_{i,t})=N(w_{i,t+1};\sqrt{(1-\beta_{t})}w_{i,t},\beta_{t}I)\tag{2}$$ Shuffling Discourse Relations We further randomly switch the positions of two random utterances ui and uj in the conversation to perturb the discourse relations in the prototype conversation. At step t, we randomly shuffle X0: $$q_{r}(X_{t+1}|X_{t})={\mathrm{Shuffle}}(X_{t})$$ In practice, we apply these three types of noises at the same time at every diffusion step t to model q(Xt+1|Xt). Note that the forward process does not contain any trainable parameters. ## 4.2.3 Diffusion Process After corrupting the hidden representations of the prototype conversation X0 to latent variables X1:T , we then gradually denoise XT to Xˆ0 through diffusion steps, Xˆt−1 = p(Xˆt|θ), where θ is the learned parameter to model the state transition. In practice, the transition is modeled by transformers. After every diffusion step t ∈ (0, T], we minimize the cross entropy between the predicted conversation from Xˆt−1 and the ground truth conversation Cg: $${\mathcal{L}}_{t}=C E(f({\hat{X}}_{t-1}),C_{g};\theta),t\in(0,T]\quad(4)$$ To generate desired conversation in a more controlled way, we incorporate three levels of conversation-structured information introduced in Section 4.2.1 to control the generation and we describe each of them in detail below. Action Triples By incorporating action triples information, the conversation could include more details with diverse desired actions/events from the token-level. During training, we first extract such action triples A = {a0*, ..., a*m} from the ground truth conversation Cg using OpenIE, where aiis a "(WHO, DOING, WHAT)" triple. We then represent every triple ai ∈ A with the average of the output embeddings from the above F(.). In order to encourage the generated conversation to describe the given actions triples, after every diffusion step Xˆt−1 = p(Xˆt|θ), t ∈ (ta, T], we also minimize the sum of cosine distances between the average of every token's representation in Xˆt−1 and every action triple's representation: $$\mathcal{L}_{t}^{a}=\sum_{i}||\text{avg}(\hat{X}_{t-1}),F(a_{i})||_{\text{cos}},t\in(t_{a},T]\tag{5}$$ $\eqref{eq:walpha}$. Dialogue Acts Editing the generated conversation with the desired dialogue acts information could encourage the generated conversation to be more diverse and more like human from the utterance-level (Allen and Core, 1997; Sacks et al., 1978). During training, we first extract the dialogue acts D = {d0*, ..., d*m} in every ground truth conversation Cg with a learned linear dialogue acts classifier 4, where diis a one-hot vector that indicates the dialogue act for i-th utterance. We sum them up to represent the dialogue acts distribution in the ground truth conversation, ˆd =Pi di. In order to encourage the generated conversation to include utterances with the given dialogue acts, we force the generated conversation to have the same dialogue acts distribution with the ground truth conversation. Specifically, after every diffusion step, Xˆt−1 = p(Xˆt|θ), t ∈ (td, ta], we first predict the dialogue acts Dt−1 = {d t−1 0*, ..., d*t−1 n } for every utterance in Xˆt−1 with the learned classifier, where d t−1 iis the predicted vector that includes the probabilities of the i-th utterance is classified as different dialogue acts. We sum the predictions ˆdt−1 =Pi d t−1 i, where the j-th element in ˆdt−1 denotes the total number j-type utterance in the conversation. 
We then minimize the L2 distance between the ground-truth distribution and the predicted distribution from the generated conversation: $${\mathcal{L}}_{t}^{d}=||{\hat{d}},{\hat{d}}_{t-1}||_{2},t\in(t_{d},t_{a}]$$ $\mathbf{M}$ Discourse Relations Controlling the generated conversation with the discourse relation information would encourage the utterances in it to be more related, leading to a more coherent conversation from a *conversation level*. During training, we first pre-train a discourse parsing model on a humanannotated multiparty dialogue corpus (Asher et al., 2016b) following (Shi and Huang, 2018). 5. Via 4We use the hidden representations from the above F(.) as inputs and we achieve the accuracy with 81.6% on Switchboard corpus, which is comparable to the state-of-the-art results (Raheja and Tetreault, 2019). 5We treat the hidden representations from F(.) as the input. We achieve 0.781 F1 score on link predictions and 0.575 F1 | Dataset | # Turns | |Conv| | |Sum| | |-----------|-----------|----------|---------| | SAMSum | 10.8 | 129.6 | 23.4 | | DialogSum | 9.8 | 131.0 | 23.6 | this parser, we extract the discourse relation matrix M ∈ Rm×m×kfrom the ground truth conversation, where m is the number of utterances and k is the total number of different discourse relations. We sum the matrix in the first two dimensions to represent the discourse relation distribution in the ground truth conversation: rˆ =Pi Pj M*i,j,k*, where the l-th element in rˆ means the total number of l-th discourse relation in the conversation. We regularize the generated conversation to have the same discourse relation distribution with the ground truth conversation. After every diffusion step, Xˆt−1 = p(Xˆt|θ), t ∈ (0, td], we first predict the discourse relation matrix Mt−1 ∈ Rn×n×k with the pre-trained parser. We also sum Mtin the first two dimensions rˆt−1 =Pi Pj M*i,j,k* and minimize the L2 distance between it and the ground-truth distribution: $${\mathcal{L}}_{t}^{r}=||{\hat{r}},{\hat{r}}_{t-1}||_{2},t\in(0,t_{d}]$$ Objectives In practice, we sequentially use all three conversation structures, from lower levels to higher levels, i.e., action triples → dialogue acts → discourse relations. The order is selected through an ablation study (in Section 5.4). During training, we minimize the loss: $${\mathcal{L}}=\sum_{t=1}^{T}{\mathcal{L}}_{t}+\sum_{t=t_{a}}^{T}{\mathcal{L}}_{t}^{a}+\sum_{t=t_{d}}^{t_{a}}{\mathcal{L}}_{t}^{d}+\sum_{t=1}^{t_{d}}{\mathcal{L}}_{t}^{r}\quad(8)$$ ## 5 Experiments 5.1 Datasets And Baselines We perform experiments on two widely-used datasets, SAMSum (Gliwa et al., 2019) and DialogSum (Chen et al., 2021), as shown in Table 1. They are originally introduced for conversation summarization, which contains open-domain reallife daily conversations with human written summaries. In this work, we reverse the datasets where score on relation classifications, which are comparable to the state-of-the-art results (Shi and Huang, 2018). we utilize the summary as input and learn the generation model to generate the long conversation. During pre-processing, we add a special token ("<s>") to indicate the begging of every utterance. We truncate the conversation into 800 tokens. We compare our Diffuse-CG framework with several baselines: - **BART-base** (Lewis et al., 2020): We use BART-base as our backbone model. The input only contains the summary. 
- **BART-Concat**: We improve pure BART by directly concatenating controlling information including the action triples, dialogue acts and discourse relations to the end of the input summary. - **Diffuse-CG-Con**: We use a framework similar to our Diffuse-CG while the different levels of information are combined concurrently instead of sequentially. ## 5.2 Experimental Setting $$(7)$$ We initialize the prototype conversation generation model with BART-base and learn the model for 20 epochs with 3e-5 learning rate, and 0.15 warm-up ratio. The batch size is 4. For the DiffuseCG, we utilize a 4-layer transformer whose hidden dimension is 512 to model p(.|θ). We set the diffusion steps to be T = 500 (ta = 300 and td = 100, which means that we use 300 steps for action triples, 100 steps for dialogue acts, and 100 steps for discourse relations). We follow (Li et al., 2022b) to use an sqrt schedule in the forward process. The learning rate is set to be 3e-4 with a 0.1 warm-up ratio. The batch size is 4 and we train Diffuse-CG for 200k iterations. During inference, the beam size is set to 4. We perform all the experiments on 4 NVIDIA V100 GPUs. For diffuse-CG, the training takes around 4.8 hours, and the inference speed is 1.4second per dialogue generation. ## 5.3 Results Automatic Evaluation We first evaluated all the models with: - **ROUGE scores** (Lin and Och, 2004) measure the n-gram overlap between the generated conversation and the ground-truth conversation. - Action coverage rate, Dialogue acts coverage rate, discourse Relation coverage measure the | Model | Control | R-1 | R-2 | R-L | A Cov. | D Cov. | R Cov. | LM score | Length | |----------------|-----------|-------|-------|-------|----------|----------|----------|------------|----------| | BART | - | 33.15 | 12.35 | 23.60 | 18.9 | 37.5 | 11.3 | 71.23 | 53.28 | | BART-Concat | a + d + r | 35.32 | 13.38 | 24.75 | 31.5 | 55.5 | 14.9 | 68.14 | 81.36 | | Diffuse-CG-Con | a + d + r | 38.32 | 17.15 | 26.55 | 33.1 | 72.4 | 23.8 | 69.16 | 83.12 | | a | 38.12 | 18.45 | 27.38 | 38.2 | 56.3 | 15.1 | 67.76 | 82.42 | | | d | 36.82 | 12.11 | 25.92 | 24.7 | 76.6 | 16.4 | 70.28 | 73.29 | | | r | 37.33 | 15.92 | 24.73 | 20.6 | 68.8 | 27.1 | 69.37 | 78.18 | | | a → d | 38.76 | 19.16 | 27.46 | 37.1 | 77.9 | 21.8 | 67.43 | 85.34 | | | a → d → r | 40.54 | 19.43 | 28.57 | 36.0 | 75.3 | 27.4 | 66.15 | 90.38 | | | Diffuse-CG † | | | | | | | | | | Table 2: ROUGE-1 (↑), ROUGE-2 (↑), ROUGE-L (↑) scores, Action coverage rate (↑), Dialogue acts coverage rate (↑), discourse Relation coverage rate (↑), language model scores (↓) and the length (↑) of the generated conversation for different models on the SAMSum Corpus test set. † means our model and the extra information is added in an order of action triples, dialogue acts, and discourse relations. Model Control **R-1 R-2 R-L A Cov. D Cov. R Cov. LM score Length** BART - 32.15 11.43 22.42 17.5 32.7 10.1 74.23 48.46 BART-Concat a + d + r 32.32 14.23 23.55 30.2 51.2 16.8 70.44 83.14 Diffuse-CG-Con a + d + r 34.52 15.18 23.22 32.0 70.8 21.6 72.16 80.34 Diffuse-CG † a 36.56 15.45 27.38 **37.2** 58.3 15.1 71.84 82.32 a → d 37.39 17.32 26.46 36.1 **76.9** 21.8 69.53 83.34 a → d → r **39.84 18.23 27.57** 35.0 75.3 **25.4 68.45 84.23** coverage rate of the actions triples, dialogue acts, and discourse relations in the generated conversation compared to the ground-truth conversation. - **LM score** measure the fluency by computing the perplexity from a GPT-2 pre-trained on SAMSum and DialogSum. 
- **Length** measures the length of the generated conversation. As shown in Table 2 and Table 3, we find that after adding the controlling structured information directly to the input, BART-Concat is generating better conversations compared to naive BART. This shows that our introduced conversation-structured guidance can help conversation generation by providing effective information. By applying the diffusion process, Diffuse-CG-Con and Diffuse-CG further consistently improve the performances (e.g., 8%/28%/7% improvements in ROUGE scores), which shows the effectiveness of our introduced controllable conversation generation framework. Because it makes better use of both the input summary and the controlling signals by first generating the prototype conversation and then further enriching it with the extra information using a diffusion process, which prevents the distraction from different information. Among different noise and control signals, the soft-masking action words noise and action triples diffusion worked the best, followed by shuffling discourse relations with discourse diffusion and then soft-masking utterances noise with dialogue acts diffusion. Compared to the concurrent way, our sequential Diffuse-CG works the best, indicating that editing the long conversation with a suitable order (from token levels to utterance levels and to conversation levels) is important. By gradually incorporating different levels of structure, the overall performances are improving (e.g., the ROUGE scores are increasing from 38.12/18.45/27.38 to 40.54/19.43/28.57), suggest- | Model | Coh. | Flu. | Fac. | |----------------|--------|--------|--------| | BART | 3.44 | 1.58 | 1.66 | | BART-Concat | 2.89 | 2.17 | 2.14 | | Diffuse-CG-Con | 2.43 | 3.88 | 3.52 | | Diffuse-CG † | 1.34 | 1.46 | 1.54 | | Noise | R-1 | R-2 | R-L | A Cov. | D Cov.e | R Cov. | LM score | Length | |-----------------------|-------|-------|-------|----------|-----------|----------|------------|----------| | Gaussian | 33.14 | 14.24 | 23.45 | 34.3 | 70.4 | 23.3 | 75.13 | 75.46 | | Linguistic-informed † | 40.54 | 19.43 | 28.57 | 36.0 | 75.3 | 27.4 | 66.15 | 90.38 | | Control Orders | R-1 | R-2 | R-L | A Cov. | D Cov. | R Cov. | LM score | Length | |------------------|-------|-------|-------|----------|----------|----------|------------|----------| | a → d → r † | 39.84 | 18.23 | 27.57 | 35.0 | 75.3 | 25.4 | 68.45 | 84.23 | | a → r → d | 37.18 | 16.34 | 24.33 | 34.2 | 76.0 | 24.3 | 70.13 | 82.48 | | d → a → r | 35.87 | 14.11 | 25.92 | 33.9 | 73.0 | 24.8 | 71.53 | 82.34 | | d → r → a | 36.84 | 14.24 | 23.38 | 34.1 | 72.6 | 23.1 | 72.45 | 80.58 | | r → a → d | 37.14 | 15.38 | 25.45 | 33.5 | 75.0 | 26.0 | 70.18 | 80.38 | | r → d → a | 38.42 | 16.87 | 26.88 | 31.8 | 74.3 | 25.5 | 69.15 | 78.33 | ing that the sequential diffusion steps can edit the prototype conversation to higher qualities step by step, and all the introduced structures are making contributions. Human Evaluation We conduct a human evaluation to evaluate the generated conversations qualitatively. We ask Amazon Mechanical Turk to rank the quality of 100 generated conversations (randomly sampled) from a given summary with 4 different models. Specifically, we ask them to rank them in terms of **Coherency** (the generated conversation is logical and consistent), **Fluency** (the generated conversation is reader-friendly) and **Factualness** (the generated conversation is not changing the fact from the given short descriptions). 
To increase annotation quality, we require turkers to have a 98% approval rate with over 10,000 approved tasks for their previous work. The pay rate was 0.5$ per hit. The rank for every summary was aggregated by majority voting. The Intra-Class Correlation (*ICC1k*) was 0.511, indicating moderate agreement (Koo and Li, 2016)). The average rank is shown in Table 4. Our Diffuse-CG achieves the best average rankings, indicating the effectiveness of incorporating conversation structures. ## 5.4 Ablation Study This part describes our ablation studies on how our introduced linguistic-informed noises and the diffusion orders affect the model performances. Noise Strategy We first visualize the performances of Diffuse-CG with different types of noise strategy in Table 5. Gaussian Noise adds Gaussian noise to all the tokens in the prototype conversation in the forward process, following previous work (Li et al., 2022b), while our introduced Linguisticinformed Noise only adds Gaussian noise to action words and random utterance as well as shuffling the conversations. Our introduced noise shows significantly better performances on SAMSum test set, indicating that our introduced noise strategy which considers the conversation structures is providing more appropriate perturbation to the prototype conversation for the diffusion process. This is because our strategy could provide flexibility to edit the prototype conversation as well as preserve the prior knowledge in the prototype conversation. Diffusion Orders In terms of the impact of different orders to add different structured information during the diffusion process, as shown in Table 6, we find that the best overall performance is achieved by the order: action triples → dialogue acts → discourse relations, from a lower level (token/action level) to higher level (conversationlevel). This might be because, in this structured order, more specific information can be introduced at the early stages when the conversations are more flexible to adopt a large amount of detailed information. When the conversation has enough information, it is then more effective to operate at a higher level like the relations between different utterances. This also indicates the effectiveness of structured ordering in general, especially when there are multiple levels of controlling information. ![8_image_1.png](8_image_1.png) ![8_image_0.png](8_image_0.png) ## 5.5 Case Study We further visualize the intermediate outputs in the diffusion process of our Diffuse-CG to interpret the generation process in Figure 2. As it shows, the prototype conversation is short and coarse. When the action information is incorporated through the first diffusion stage, the conversation is enriched by more specific action information like " Amanda text Larry ". After the dialogue act diffusion stage, the conversation is further modified to have utterance with dialogue acts like backchannel (" Urgh. All right"). At last, with the discourse relation information being utilized, the conversation is more interactive and coherent with more intra-utterance relations like QA pairs. These coarse-to-fine steps show how Diffuse-CG is editing and generating better and longer conversations over time. ## Conclusion 6 In this work, we introduce a novel controllable conversation generation framework that utilizes different levels of conversation structures to generate long and coherent conversations based on a given short description. 
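To make the contrast between the two noise strategies in Table 5 concrete, the sketch below shows one forward corruption step following Eqs. (1)-(3): Gaussian noise is applied only to action-word positions and to one utterance span, and two utterance spans are swapped, instead of noising every token. The index bookkeeping and helper names are illustrative; in practice the action-word positions come from OpenIE triples and utterance spans from the conversation segmentation.

```python
import torch

def linguistic_informed_noise(X, action_idx, utt_span, beta_t, swap_spans=None):
    """One forward step of the linguistic-informed corruption on the prototype
    hidden states X (shape (l, d)), instead of full Gaussian noising."""
    Xt = X.clone()

    # (1) soft-mask action words: Gaussian noise only at action-word positions
    idx = torch.tensor(action_idx)
    Xt[idx] = (1 - beta_t) ** 0.5 * Xt[idx] + beta_t ** 0.5 * torch.randn_like(Xt[idx])

    # (2) soft-mask one utterance: noise every token inside the chosen utterance span
    s, e = utt_span
    Xt[s:e] = (1 - beta_t) ** 0.5 * Xt[s:e] + beta_t ** 0.5 * torch.randn_like(Xt[s:e])

    # (3) shuffle discourse relations: swap the token blocks of two utterances
    #     (equal-length spans assumed here to keep the sketch simple)
    if swap_spans is not None:
        (a1, a2), (b1, b2) = swap_spans
        a_block = Xt[a1:a2].clone()
        Xt[a1:a2] = Xt[b1:b2].clone()
        Xt[b1:b2] = a_block
    return Xt

# toy usage: l = 40 tokens, d = 768, noise one action word and one utterance,
# then swap two 8-token utterance blocks
X0 = torch.randn(40, 768)
X1 = linguistic_informed_noise(X0, action_idx=[5, 6], utt_span=(16, 24),
                               beta_t=0.02, swap_spans=((0, 8), (24, 32)))
```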
Specifically, we first generate the prototype conversation and then enrich it with structure information like action triples, dialogue acts, and discourse relations, together with novel linguistic-informed noises for further adapting diffusion models to generate conversations. Experiments on SAMSum and DialogSum show the effectiveness of our framework by significantly improving over the baselines. Our proposed method also provides interpretability of how the model is gradually generating longer and better conversations. ## 7 Limitation In this work, we mainly leverage control guidance such as action triples, dialogue acts, and discourse relations in structured forms that are extracted automatically from the corpus for training. We encourage future work to explore how to incorporate control information in natural language forms (for example, the natural language descriptions of the action information instead of triples). We also compose multiple modules (like the prototype generation, discourse classifier, etc.) to generate the final conversation which might lead to a larger error cascade if there is some early noise. So future work might explore how to make the pipeline learned in an end-to-end manner. What's more, we mainly focus on using three major conversation structures to help the entire conversation generation, future work might continue to explore other types of linguistic and human knowledge to further improve the conversation generation qualities. ## Acknowledgements We thank members of John Thickstun, the SALT Lab, and reviewers for their helpful feedback. This work was supported in part by an Amazon Faculty Research Award and an NSF grant IIS-2247357. ## References Tosin Adewumi, Foteini Liwicki, and Marcus Liwicki. 2022. State-of-the-art in open-domain conversational ai: A survey. James Allen and Mark Core. 1997. Draft of DAMSL: Dialog act markup in several layers. Unpublished manuscript. Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In *Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th* International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 344–354, Beijing, China. Association for Computational Linguistics. Nicholas Asher, Julie Hunter, Mathieu Morey, Benamara Farah, and Stergos Afantenos. 2016a. Discourse structure and dialogue acts in multiparty dialogue: the stac corpus. In *Proceedings of the Tenth* International Conference on Language Resources and Evaluation (LREC'16), pages 2721–2727. Nicholas Asher, Julie Hunter, Mathieu Morey, Benamara Farah, and Stergos Afantenos. 2016b. Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2721–2727, Portorož, Slovenia. European Language Resources Association (ELRA). Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. 2021. Structured denoising diffusion models in discrete state-spaces. Hengyi Cai, Hongshen Chen, Yonghao Song, Xiaofang Zhao, and Dawei Yin. 2020. Exemplar guided neural dialogue generation. In *International Joint Conference on Artificial Intelligence*. Eugene Charniak. 1972. Toward a model of children"s story comprehension. Technical report, USA. Jiaao Chen and Diyi Yang. 2021a. 
Simple conversational data augmentation for semi-supervised abstractive dialogue summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6605–6616, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jiaao Chen and Diyi Yang. 2021b. Structure-aware abstractive conversation summarization via discourse and action graphs. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 1380–1391, Online. Association for Computational Linguistics. Yulong Chen, Yang Liu, Liang Chen, and Yue Zhang. 2021. DialogSum: A real-life scenario dialogue summarization dataset. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5062–5074, Online. Association for Computational Linguistics. Sander Dieleman, Laurent Sartran, Arman Roshannai, Nikolay Savinov, Yaroslav Ganin, Pierre H. Richemond, Arnaud Doucet, Robin Strudel, Chris Dyer, Conor Durkan, Curtis Hawthorne, Rémi Leblond, Will Grathwohl, and Jonas Adler. 2022. Continuous diffusion for categorical data. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. Xiachong Feng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2020. Incorporating commonsense knowledge into abstractive dialogue summarization via heterogeneous graph networks. *arXiv preprint* arXiv:2010.10044. James Paul Gee. 2014. *An introduction to discourse* analysis: Theory and method. Routledge. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70–79, Hong Kong, China. Association for Computational Linguistics. Seraphina Goldfarb-Tarrant, Tuhin Chakrabarty, Ralph Weischedel, and Nanyun Peng. 2020. Content planning for neural story generation with aristotelian rescoring. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4319–4338, Online. Association for Computational Linguistics. Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong. 2022. Diffuseq: Sequence to sequence text generation with diffusion models. Jian Guan, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, and Minlie Huang. 2021. Long text generation by modeling sentence-level and discourse-level coherence. Jian Guan, Zhenyu Yang, Rongsheng Zhang, Zhipeng Hu, and Minlie Huang. 2022. Generating coherent narratives by learning dynamic and discrete entity states with a contrastive framework. Chulaka Gunasekara, Guy Feigenblat, Benjamin Sznajder, Sachindra Joshi, and David Konopnicki. 2021. Summary grounded conversation generation. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3748–3756, Online. Association for Computational Linguistics. Prakhar Gupta, Jeffrey Bigham, Yulia Tsvetkov, and Amy Pavel. 2021. Controlling dialogue generation with semantic exemplars. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3018–3029, Online. Association for Computational Linguistics. Zhengfu He, Tianxiang Sun, Kuanning Wang, Xuanjing Huang, and Xipeng Qiu. 2022. Diffusionbert: Improving generative masked language models with diffusion models. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. 
Denoising diffusion probabilistic models. Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. 2021. Argmax flows and multinomial diffusion: Learning categorical distributions. Changzhen Ji, Yating Zhang, Xiaozhong Liu, Adam Jatowt, Changlong Sun, Conghui Zhu, and Tiejun Zhao. 2021. A neural conversation generation model via equivalent shared memory investigation. In Proceedings of the 30th ACM International Conference on Information &amp Knowledge Management. ACM. Paul A Kirschner, Simon J Buckingham-Shum, and Chad S Carr. 2012. *Visualizing argumentation: Software tools for collaborative and educational sensemaking*. Springer Science & Business Media. Terry K Koo and Mae Y Li. 2016. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. *Journal of chiropractic* medicine, 15(2):155–163. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. Qintong Li, Piji Li, Wei Bi, Zhaochun Ren, Yuxuan Lai, and Lingpeng Kong. 2022a. Event transition planning for open-ended text generation. In *Findings of* the Association for Computational Linguistics: ACL 2022, pages 3412–3426, Dublin, Ireland. Association for Computational Linguistics. Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori B. Hashimoto. 2022b. Diffusion-lm improves controllable text generation. Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 605. Association for Computational Linguistics. Phillip Lippe, Pengjie Ren, Hinda Haned, Bart Voorn, and Maarten de Rijke. 2020. Diversifying taskoriented dialogue response generation with prototype guided paraphrasing. Qian Liu, Yihong Chen, Bei Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, and Dongmei Zhang. 2020. You impress me: Dialogue generation via mutual persona perception. Vipul Raheja and Joel Tetreault. 2019. Dialogue Act Classification with Context-Aware Self-Attention. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3727–3733, Minneapolis, Minnesota. Association for Computational Linguistics. Ramya Ramakrishnan, Hashan Narangodage, Mauro Schilman, Kilian Weinberger, and Ryan McDonald. 2022. Long-term control for dialogue generation: Methods and evaluation. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 738–753, Seattle, United States. Association for Computational Linguistics. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical textconditional image generation with clip latents. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. 
In *2022 IEEE/CVF Conference on Computer* Vision and Pattern Recognition (CVPR), pages 10674– 10685. Harvey Sacks, Emanuel A Schegloff, and Gail Jefferson. 1978. A simplest systematics for the organization of turn taking for conversation. In *Studies in the* organization of conversational interaction, pages 7– 55. Elsevier. Sougata Saha, Souvik Das, and Rohini Srihari. 2022. Stylistic response generation by controlling personality traits and intent. In *Proceedings of the 4th Workshop on NLP for Conversational AI*, pages 197–211, Dublin, Ireland. Association for Computational Linguistics. Zhihong Shao, Minlie Huang, Jiangtao Wen, Wenfei Xu, and Xiaoyan Zhu. 2019. Long and diverse text generation with planning-based hierarchical variational model. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3257–3268, Hong Kong, China. Association for Computational Linguistics. Zhouxing Shi and Minlie Huang. 2018. A deep sequential model for discourse parsing on multi-party dialogues. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of *Proceedings of Machine Learning Research*, pages 2256– 2265, Lille, France. PMLR. Jiaming Song, Chenlin Meng, and Stefano Ermon. 2021. Denoising diffusion implicit models. In *International* Conference on Learning Representations. Matthew Stone, Una Stojnic, and Ernest Lepore. 2013. Situated utterances and discourse relations. In *Proceedings of the 10th International Conference on* Computational Semantics (IWCS 2013)–Short Papers, pages 390–396. Bowen Tan, Zichao Yang, Maruan AI-Shedivat, Eric P. Xing, and Zhiting Hu. 2020. Progressive generation of long text with pretrained language models. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, and Bryan Catanzaro. 2020. MEGATRON-CNTRL: Controllable story generation with external knowledge using large-scale language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2831–2845, Online. Association for Computational Linguistics. Kevin Yang, Yuandong Tian, Nanyun Peng, and Dan Klein. 2022. Re3: Generating longer stories with recursive reprompting and revision. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT : Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270–278, Online. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Not applicable. Left blank. B1. Did you cite the creators of artifacts you used? 
No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 4 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 4 ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 4 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 4 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
pei-etal-2023-shot
Few-shot Low-resource Knowledge Graph Completion with Reinforced Task Generation
https://aclanthology.org/2023.findings-acl.455
Despite becoming a prevailing paradigm for organizing knowledge, most knowledge graphs (KGs) suffer from the low-resource issue due to the deficiency of data sources. The enrichment of KGs by automatic knowledge graph completion is impeded by the intrinsic long-tail property of KGs. In spite of their prosperity, existing few-shot learning-based models have difficulty alleviating the impact of the long-tail issue on low-resource KGs because of the lack of training tasks. To tackle the challenging long-tail issue on low-resource KG completion, in this paper, we propose a novel few-shot low-resource knowledge graph completion framework, which is composed of three components, i.e., few-shot learner, task generator, and task selector. The key idea is to generate and then select the beneficial few-shot tasks that complement the current tasks and enable the optimization of the few-shot learner using the selected few-shot tasks. Extensive experiments conducted on several real-world knowledge graphs validate the effectiveness of our proposed method.
# Few-Shot Low-Resource Knowledge Graph Completion With Reinforced Task Generation Shichao Pei1,∗, Qiannan Zhang2,∗, Xiangliang Zhang1 1University of Notre Dame 2King Abdullah University of Science and Technology {spei2, xzhang33}@nd.edu, qiannan.zhang@kaust.edu.sa

∗Equal contribution.

## Abstract

Despite becoming a prevailing paradigm for organizing knowledge, most knowledge graphs (KGs) suffer from the low-resource issue due to the deficiency of data sources. The enrichment of KGs by automatic knowledge graph completion is impeded by the intrinsic long-tail property of KGs. In spite of their prosperity, existing few-shot learning-based models have difficulty alleviating the impact of the long-tail issue on low-resource KGs because of the lack of training tasks. To tackle the challenging long-tail issue on low-resource KG completion, in this paper, we propose a novel few-shot low-resource knowledge graph completion framework, which is composed of three components, i.e., few-shot learner, task generator, and task selector. The key idea is to generate and then select the beneficial few-shot tasks that complement the current tasks and enable the optimization of the few-shot learner using the selected few-shot tasks. Extensive experiments conducted on several real-world knowledge graphs validate the effectiveness of our proposed method.

## 1 Introduction

Recent years have witnessed a rapidly increasing amount of research attention and industry demand on knowledge graphs (KGs), which organize knowledge in the form of triples and have been playing a crucial role in many knowledge-intensive applications, such as question answering (Zhang et al., 2018), recommendation systems (Guo et al., 2020), and dialogue systems (Yang et al., 2020). Despite the fact that several large-scale KGs, such as NELL (Mitchell et al., 2018) and Freebase (Bollacker et al., 2008), have been well-developed for organizing general knowledge, most of the existing KGs are built for domain-specific knowledge using domain-specific data sources. These KGs inevitably suffer from the **low-resource issue**, i.e., the incompleteness in terms of entities, relations, and triples that exist in the real world but are absent in KGs, caused by the deficiency of data sources, especially those in non-global languages. For example, the number of articles in Norwegian Wikipedia is 10 times smaller than that in English Wikipedia¹. A KG built using Norwegian Wikipedia would have a more severe incompleteness issue compared with the KG extracted from English Wikipedia.

Although automatic KG completion (KGC) (Zhang et al., 2020c; Kazemi and Poole, 2018; Bordes et al., 2013; Tang et al., 2022) has empowered the discovery of more triples to enrich KGs, it is far from sufficient for enriching low-resource KGs, because the difficulty of discovering a missing triple stems from the relation associated with it and varies with the frequency of that relation. Since most KGs generally follow a **long-tail distribution** (Xiong et al., 2018; Zhang et al., 2020a; Nguyen et al., 2018), where a large fraction of relations have only a few triples, the existence of rare relations on low-resource KGs leads to dramatic performance degradation on the discovery of missing triples and impedes the development of an effective completion model.
A line of efforts (Xiong et al., 2018; Zhang et al., 2020a; Chen et al., 2019; Lv et al., 2019) attempts to improve the capability of inference on rare relations by formulating the KG completion problem into a few-shot learning framework and exploiting inference models for frequent relations to facilitate inference on rare relations. Such methods require abundant training tasks to ease the effect of memorization and improve generalization (Rajendran et al., 2020). However, low-resource KGs do not always have adequate training tasks for mimicking the few-shot learning scenarios of rare relations. For example, the Greek KG (Chen et al., 2020) used in our experiments contains only 21 training tasks, as many relations are unknown or cannot offer triples for training task construction. Therefore, few-shot KG completion models tend to overfit on the scarce tasks and engender unsatisfactory results.

¹ https://en.wikipedia.org/wiki/List_of_Wikipedias

Motivated by the above-discussed limitation of low-resource KGs, we propose to **generate new** few-shot tasks to augment the current tasks, easing the impact of memorization and improving generalization. Although straightforward, the materialization of this idea is non-trivial due to two crucial challenges. The first challenge (C1) is to design an effective strategy to generate new few-shot tasks. A few-shot task is formed as a set of pairs corresponding to the same relation, each of which includes two entities, i.e., a head entity and a tail entity. On the one hand, the discrete nature of the tasks to generate hinders the use of prevailing generative models. On the other hand, the generation of few-shot tasks with novel relations is arduous due to the lack of related data samples. Even if such tasks can be generated, the second challenge (C2) is to ensure the authenticity of the generated few-shot tasks. The generated tasks ineluctably include noisy tasks that cannot represent the designated relation and would be detrimental to few-shot learning. Thus, how to select beneficial tasks is an essential problem.

In this paper, we propose a novel few-shot low-resource KG completion framework with reinforced task generation, named **FLow-KGC**, to promote KG completion on rare relations in low-resource KGs. Specifically, we formulate KG completion into a few-shot learning framework and train a simple yet effective *few-shot learner* to learn representations for rare relations with a small support set. The learned representation can be further used for inference. To address challenge C1, we represent few-shot tasks in a latent continuous space and then build a *task generator* in this latent space, rather than generating the original discrete structure samples. The generated tasks can then be utilized to update the few-shot learner. To select beneficial synthetic few-shot tasks and tackle challenge C2, we design an adaptive *task selector*, which decides whether to keep or discard each generated task, receives feedback from the few-shot learner, and is optimized using reinforcement learning. The task generator and task selector collaboratively complete the process of task generation. These three components, i.e., the few-shot learner, task generator, and task selector, constitute our proposed method **FLow-KGC** and are updated in an alternating optimization manner.
Overall, our contributions in this work include: (1) We study the crucial few-shot KG completion on low-resource KGs and propose a novel model called **FLow-KGC** to mitigate the impact of the long-tail problem on completion tasks. (2) We design a task generator to create synthetic few-shot tasks and a task selector to achieve adaptive beneficial task selection, in order to improve the generalization of the few-shot learner. (3) We perform extensive experiments on several real-world low-resource KGs. The experimental results show the superior performance of FLow-KGC over the state-of-the-art with significant improvement in few-shot low-resource KG completion. ## 2 Related Work 2.1 Knowledge Graph Completion Early efforts address KGC by designing algorithms based on rule learning (Galárraga et al., 2015) to discover logical rules from KGs for facilitating inductive link prediction. Nowadays, the prevailing learning paradigm for KG completion tasks (Zhang et al., 2020c) is to learn the distributed representation of entities and relations in KGs. These methods roughly fall into three categories (Bonner et al., 2021): 1) Tensor decomposition methods, such as SimplE (Kazemi and Poole, 2018), RESCAL (Nickel et al., 2012), and ComplEx (Trouillon et al., 2017). 2) Geometric methods, such as TransE (Bordes et al., 2013), RotatE (Sun et al., 2018b), and CrossE (Zhang et al., 2019). 3) Deep learning methods (Nguyen et al., 2018; Dettmers et al., 2018; Vashishth et al., 2020). However, these methods fail to handle the long-tail problem of KG completion and suffer from performance degradation of prediction on rare relations. A recent approach (Zhang et al., 2020b) attempts to alleviate this issue, but it requires a relatively large set of triples for rare relations and is incapable of handling the few-shot scenario. Recently, a few works (Chen et al., 2020; Zhou et al., 2021) design strategies to assist the KG completion by leveraging complementary knowledge from other related KGs in different languages (Chen et al., 2020; Conneau et al., 2020; Zhou et al., 2021). Although effective, they only rely on cross-lingual links (Chen et al., 2017; Sun et al., 2018a; Pei et al., 2019a; Cao et al., 2019; Pei et al., 2019b, 2020) and cannot handle new relations or rare relations with few triples. ## 2.2 Few-Shot Knowledge Graph Completion With the goal to overcome the shortcoming of canonical KG completion methods (Kazemi and Poole, 2018; Bordes et al., 2013; Nguyen et al., 2018) on inferring the missing triples associated with rare relations, several few-shot KG completion (FSKGC) algorithms have been developed for improving completion performance on rare relations. Recent attempts (Xiong et al., 2018; Zhang et al., 2020a; Chen et al., 2019; Lv et al., 2019) on fewshot KG completion formulate the problem into a few-shot learning framework and propose metricbased approaches (Xiong et al., 2018; Zhang et al., 2020a; Sheng et al., 2020; Niu et al., 2021) and meta-learning-based approaches (Chen et al., 2019; Lv et al., 2019). Yet these few-shot KG completion models require plenty of training tasks to train a few-shot learner and cannot generalize well to rare relations in low-resource KGs due to the deficiency of relations and corresponding tasks. 
## 3 Problem Formulation A knowledge graph G can be denoted as G = (E, R, T P), where E refers to the set of entities and R denotes the set of relations, and T P is represented as a set of triples {(h, r, t)*} ⊆ E × R × E*, each of which includes a head entity h, a relation r, and a tail entity t. The KG completion problem is to infer the most plausible missing triples from the candidate set {(h, r, t)|t ∈ E ∧ (h, r, t) *∈ T P}* / for each incomplete triple (*h, r,* ?) (or inferring from {(h, r, t)|h ∈ E ∧ (h, r, t) *∈ T P}* / for (?*, r, t*)). In our few-shot low-resource KGC problem, the majority of relations have a few training triples. Formally, for a rare relation ri, there are a support set Si = {(h, t)|(h, ri, t) *∈ T P}* including only |S| training triples. The task is to predict the tail entity for a query triple (*h, r*i, ?), i.e., ranking all tail entity candidates such that the true tail entity t is ranked higher than other candidates in Ch,ri , which is defined as Ch,ri = {tc|tc ∈ E ∧ (h, ri, tc) *∈ T P}* / . Following the notations in meta-learning, a few-shot task with K = |S| triples is called K-shot KG completion. Since this K for a rare relation riis often small (e.g., less than 10), the learning problem is organized by working on meta-tasks Ti = {Si, Qi}, where a query set Qi = {(h, t, Ch,ri )|(h, ri, t) *∈ T P}*. In the meta-training stage, there are a set of fewshot tasks T*train* = {Ti} M i=1, where each task Ti represents a relation ri ∈ R*train*. A few-shot learning model can be optimized using meta-training tasks following the meta-learning principle (Finn et al., 2017). In the meta-testing stage, with a set of testing tasks T*test* = {Tj} J j=1, the model should make inferences on new relation rj ∈ R*test* corresponding to task Tj . Note that all relations that appear in the meta-testing tasks are unseen during meta-training, i.e., Rtrain ∩ R*test* = ϕ. Low-resource KGs suffer from severer incompleteness issue and consist of fewer relations. Due to the long-tail distribution of these relations, there are only a small number of relations with a high frequency qualified to offer triples for task construction, leading to insufficient T*train*. Our proposed solution enriches T*train* by generating and selecting beneficial new tasks for virtual relations which are undiscovered and not present in KGs. ## 4 The Proposed Method In this section, we introduce the proposed method FLow-KGC, describe each component in detail, and elaborate on the optimization and inference, following the overview shown in Figure 1. ## 4.1 Few-Shot Learner A few-shot learner is a basic yet essential component in few-shot learning for generalizing a inference model over rare relations with only a few triples. In particular, the few-shot learner should be able to learn accurate representation for rare relations using a small support set, then the learned representation will be applied for subsequent inference. We adopt a simple few-shot learner based on metalearning (Chen et al., 2019) to learn representation for relations. Specifically, given a meta-training task Ti with support set Si and its corresponding relation ri ∈ R*train*, a few-shot learner FS(·) aims to generate a representation for ri by taking entities in associated triples as input. 
The representation of ri can be obtained by the following:

$$\mathbf{r}_{\mathcal{T}_{i}}=\frac{\sum_{k=1}^{|\mathcal{S}_{i}|}\mathrm{FS}(\mathbf{e}_{h_{k}}\oplus\mathbf{e}_{t_{k}})}{|\mathcal{S}_{i}|},\tag{1}$$

where (hk, tk) denotes the k-th pair in Si, ehk and etk are the embeddings of entities hk and tk, and rTi refers to the learned representation of relation ri. x ⊕ y denotes the concatenation of the embeddings x and y. The few-shot learner FS(·) is implemented by an L-layer fully connected neural network.

![Figure 1: Overview of the proposed FLow-KGC framework.](3_image_0.png)

After obtaining the representation rTi of relation ri, we measure whether rTi represents relation ri well using a score function f^s(·) and define the meta-training loss on the support set Si as follows:

$$\mathcal{L}_{\mathcal{S}_{i}}=\sum_{k=1}^{|\mathcal{S}_{i}|}\left[\gamma+f^{s}(h_{k},r_{i},t_{k})-f^{s}(h_{k},r_{i},t_{n})\right]_{+},\tag{2}$$

where f^s(·) is based on the TransE (Bordes et al., 2013) algorithm and measures the plausibility of the triple (hk, ri, tk) by f^s(hk, ri, tk) = ||ehk + rTi − etk||, f^s(hk, ri, tn) is the score of a negative sample (hk, tn) sampled from {(hk, t′) | t′ ∈ E ∧ (hk, ri, t′) ∉ TP}, and γ is a predefined margin parameter. Then a fast update on rTi can be conducted to obtain a more accurate relation representation:

$$\hat{\mathbf{r}}_{\mathcal{T}_{i}}=\mathbf{r}_{\mathcal{T}_{i}}-\beta\nabla_{\mathbf{r}_{\mathcal{T}_{i}}}\mathcal{L}_{\mathcal{S}_{i}},\tag{3}$$

where β is the step size of the gradient update. Next, the updated relation representation ˆrTi is exploited to measure the plausibility of the triples in the query set Qi with a score function f^q(·) defined by f^q(hz, ri, tz) = ||ehz + ˆrTi − etz||, where (hz, tz) denotes the z-th pair in Qi. The loss function for updating the few-shot learner FS(·) on the query set Qi is then defined as follows:

$$\mathcal{L}_{\mathcal{Q}_{i}}=\sum_{z=1}^{|\mathcal{Q}_{i}|}\left[\gamma+f^{q}(h_{z},r_{i},t_{z})-f^{q}(h_{z},r_{i},t_{c})\right]_{+},\tag{4}$$

where tc ∈ Chz,ri is a tail entity candidate. Lastly, the few-shot learner is optimized on all meta-training tasks as follows:

$$\mathcal{L}_{\mathrm{FS}}=\sum_{i=1}^{|\mathcal{T}_{\mathrm{train}}|}\mathcal{L}_{\mathcal{Q}_{i}}.\tag{5}$$
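A minimal PyTorch sketch of Eqs. (1)-(3) is given below. It assumes pre-trained entity embeddings are available; the network sizes are illustrative and differ from the 500/200/100 configuration reported in Section 5.1, while γ = 1.0 and β = 5.0 follow the values stated there.

```python
import torch
import torch.nn as nn

class FewShotLearner(nn.Module):
    """Sketch of FS(.): maps a concatenated (head, tail) embedding pair to a
    relation vector, averaged over the K support pairs (Eq. (1))."""
    def __init__(self, dim: int, hidden: int = 200):
        super().__init__()
        self.fs = nn.Sequential(nn.Linear(2 * dim, hidden), nn.LeakyReLU(),
                                nn.Linear(hidden, dim))

    def relation_embedding(self, e_h: torch.Tensor, e_t: torch.Tensor) -> torch.Tensor:
        # e_h, e_t: (K, dim) embeddings of the support heads and tails.
        return self.fs(torch.cat([e_h, e_t], dim=-1)).mean(dim=0)

def trans_e_score(e_h, r, e_t):
    # ||e_h + r - e_t||, the distance used by both f^s and f^q.
    return (e_h + r - e_t).norm(p=2, dim=-1)

def fast_adapt(learner, e_h, e_t, e_t_neg, gamma=1.0, beta=5.0):
    """Eqs. (2)-(3): margin loss on the support set, then one gradient step on r."""
    r = learner.relation_embedding(e_h, e_t)
    support_loss = torch.relu(
        gamma + trans_e_score(e_h, r, e_t) - trans_e_score(e_h, r, e_t_neg)).sum()
    (grad_r,) = torch.autograd.grad(support_loss, r, create_graph=True)
    return r - beta * grad_r  # the updated relation vector of Eq. (3)
```

The returned vector plays the role of the updated representation in Eq. (4): it is plugged into the same distance function to score query triples against their candidates.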
## 4.2 Task Generator

The few-shot learner on low-resource KGs, suffering from the deficiency of training tasks, tends to overfit and memorize the given tasks and lacks generalization ability. We thus aim to generate few-shot tasks to complement the meta-training set. In particular, we design a task generator in a task representation space, rather than generating new tasks in the original discrete format.

### 4.2.1 Task Representation

We build the representation of a task based on the entities involved in the task. Specifically, for a task Ti with a support set Si and a query set Qi, we denote the representations of Si and Qi as Si ∈ R^{K×2d} and Qi ∈ R^{Z×2d}, which are obtained by:

$$\mathbf{S}_{i}=\mathbf{SH}_{i}\oplus\mathbf{ST}_{i},\qquad\mathbf{Q}_{i}=\mathbf{QH}_{i}\oplus\mathbf{QT}_{i},\tag{6}$$

where SHi ∈ R^{K×d} denotes the representation matrix of the head entities in the support set Si and is obtained by SHi = eh1 ⊕ ... ⊕ ehK, and d is the dimension of the embeddings. Similarly, STi ∈ R^{K×d} refers to the representation matrix of the tail entities in the support set Si and is obtained by STi = et1 ⊕ ... ⊕ etK. K denotes the size of the support set. The representation of Qi can be acquired in the same way; QHi and QTi represent the head entities and tail entities in Qi, respectively, and Z denotes the size of the query set. With the obtained Si and Qi, a task Ti can be represented by Ti ∈ R^{(K+Z)×2d} as follows:

$$\mathbf{T}_{i}=\mathbf{S}_{i}\oplus\mathbf{Q}_{i}.\tag{7}$$

### 4.2.2 Conditional Variational Autoencoder

With the learned task representations and the aim of generating meta-tasks, we would like to estimate the underlying posterior distribution p(·|T, r) of meta-tasks given a relation, so that tasks can be generated by sampling. Due to the intractability of this posterior distribution, we employ a conditional variational autoencoder (CVAE) (Sohn et al., 2015) to circumvent the direct posterior estimation and generate meta-tasks with the relation as a conditional variable.

**Encoder.** The encoder takes the representation T of a task T with its corresponding relation r as input and constructs a latent distribution qϕ(z|T, r) that describes the distribution of meta-tasks related to relation r, represented by the mean µ and standard deviation σ; z is a sample drawn from this distribution. µ and σ are learned by two separate linear functions fµ and fσ as follows:

$$\mu=\mathrm{f}_{\mu}(\mathrm{MLP}_{\mathrm{enc}}(\mathrm{Concat}(\mathbf{T},r))),\tag{8}$$
$$\sigma=\mathrm{f}_{\sigma}(\mathrm{MLP}_{\mathrm{enc}}(\mathrm{Concat}(\mathbf{T},r))),\tag{9}$$

where fµ(x) = Wµx + bµ and fσ(x) = Wσx + bσ, MLPenc(·) is an L-layer fully connected neural network, and Concat(·,·) is a concatenation operator, which first squeezes T into a vector T′ ∈ R^{2(K+Z)·d} and then concatenates T′ with a one-hot vector or (representing the ID of relation r). Here we assume there are V virtual relations representing undiscovered but existent relations in the real world; meta-tasks will be generated for these virtual relations. A relation r can then be represented as a one-hot vector or ∈ R^{(|Rtrain|+V)} to denote its ID.

**Decoder.** The decoder reconstructs the input representation T with the learned µ and σ, which define the latent distribution of the given relation r. Given µ and σ, we can sample a latent variable z from the constructed space N(µ, σ). Yet the sampling operator makes the model non-differentiable and unable to calculate the gradient. Therefore, we adopt the reparameterization trick (Kingma and Welling, 2014) to solve this issue, which works by z = µ + σ ⊙ ϵ, where ϵ ∼ N(0, I) and ⊙ denotes element-wise multiplication. The reconstruction should involve the relation information because the reconstructed task representation is a relation-specific representation. With the sampled z, the decoder can be denoted as pθ(T|z, r), and the reconstruction process is defined as follows:

$$\widetilde{\mathbf{T}}^{\prime}=\mathrm{MLP}_{\mathrm{dec}}(\mathbf{z}\oplus\mathbf{o}_{r}),\tag{10}$$

where MLPdec(·) is an L-layer fully connected neural network and T̃′ ∈ R^{2(K+Z)·d} denotes the reconstructed representation of task T. With the encoder and decoder, we can define the CVAE objective based on the variational lower bound:

$$\mathcal{L}_{\mathrm{CVAE}}=-\mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{T},r)\,\|\,p_{\theta}(\mathbf{z}|r))+\mathbb{E}[\log p_{\theta}(\mathbf{T}|\mathbf{z},r)],\tag{11}$$

where the first term is the KL-divergence, which can be rewritten as $-\mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{T},r)\,\|\,p_{\theta}(\mathbf{z}|r))=-\frac{1}{2}\sum_{i=1}^{M}\left(-\sigma_{i}+\exp(\sigma_{i})+\mu_{i}^{2}-1\right)$ by letting the prior distribution pθ(z|r) be N(0, I), and the second term is the reconstruction term, defined as $\mathbb{E}[\log p_{\theta}(\mathbf{T}|\mathbf{z},r)]=\sum_{i=1}^{M}\|\widetilde{\mathbf{T}}^{\prime}-\mathbf{T}^{\prime}\|_{2}^{2}$.
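A compact sketch of this conditional VAE is shown below, assuming the flattened task vector T′ and the one-hot relation vector as inputs. The 512/256 encoder, 256/512 decoder, and 256-dimensional latent follow the sizes reported in Section 5.1, while treating fσ as a log-variance head and adding the final projection back to the task dimension are implementation assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskCVAE(nn.Module):
    """Conditional VAE over flattened task vectors T' with a one-hot relation
    condition o_r (cf. Eqs. (8)-(11))."""
    def __init__(self, task_dim: int, num_relations: int, latent: int = 256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(task_dim + num_relations, 512), nn.ReLU(),
                                 nn.Linear(512, 256), nn.ReLU())
        self.f_mu = nn.Linear(256, latent)
        self.f_sigma = nn.Linear(256, latent)      # interpreted here as log-variance
        self.dec = nn.Sequential(nn.Linear(latent + num_relations, 256), nn.ReLU(),
                                 nn.Linear(256, 512), nn.ReLU(),
                                 nn.Linear(512, task_dim))

    def forward(self, t_flat, o_r):
        h = self.enc(torch.cat([t_flat, o_r], dim=-1))
        mu, log_var = self.f_mu(h), self.f_sigma(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)   # reparameterization
        recon = self.dec(torch.cat([z, o_r], dim=-1))
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1)
        return F.mse_loss(recon, t_flat, reduction="sum") + kl.sum()  # loss to minimize

    @torch.no_grad()
    def generate(self, o_r):
        """Sample z ~ N(0, I) and decode a task representation for a (virtual) relation."""
        z = torch.randn(o_r.size(0), self.f_mu.out_features, device=o_r.device)
        return self.dec(torch.cat([z, o_r], dim=-1))
```

At generation time (Section 4.2.3), `generate` is called with the one-hot vector of a virtual relation, and the decoded vector is reshaped into the four matrices that form a synthetic task.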
### 4.2.3 Task Generation

With the designed CVAE, we can generate meta-tasks related to a given relation. We generate tasks for virtual relations, i.e., undiscovered relations, which improves the diversity of tasks and enhances the generalization of the few-shot learner. Note that we do not generate tasks for known relations, because the key issue in low-resource KGC is the absence of representative tasks for a large portion of unknown relations, as well as for those known relations without sufficient triples for meta-task construction. Specifically, given a virtual relation rv, its one-hot vector representation, denoted as ov, can be obtained. We then sample a latent variable zv from the prior distribution pθ(z|r) ∼ N(0, I). With the sampled zv and the one-hot vector ov, according to Eq. (10), the well-trained decoder can be employed to generate the representation $\widetilde{\mathbf{T}}^{\prime}_{v}$ of the task corresponding to relation rv. The vector $\widetilde{\mathbf{T}}^{\prime}_{v}$ can then be unsqueezed into $\widetilde{\mathbf{T}}_{v}\in\mathbb{R}^{(K+Z)\times 2d}$, which is further decomposed into four matrices $\widetilde{\mathbf{SH}}_{v}\in\mathbb{R}^{K\times d}$, $\widetilde{\mathbf{ST}}_{v}\in\mathbb{R}^{K\times d}$, $\widetilde{\mathbf{QH}}_{v}\in\mathbb{R}^{Z\times d}$, and $\widetilde{\mathbf{QT}}_{v}\in\mathbb{R}^{Z\times d}$, standing for the generated representations of the head and tail entities in a support set and a query set. The four matrices form a task $\widetilde{\mathcal{T}}_{v}=\{\widetilde{\mathbf{SH}}_{v},\widetilde{\mathbf{ST}}_{v},\widetilde{\mathbf{QH}}_{v},\widetilde{\mathbf{QT}}_{v}\}$. The generation process can be repeated multiple times to obtain a set of tasks for a relation, which will be further exploited in meta-training.

## 4.3 Task Selector

The generated meta-tasks are leveraged in the meta-training stage with the aim of easing overfitting and improving generalization. However, these tasks inevitably comprise noisy meta-tasks, which cannot represent the corresponding relation well and would mislead the learning of the few-shot learner. To alleviate the adverse impact of noisy tasks, we design a task selector that selects beneficial meta-tasks to promote the training of the few-shot learner. Specifically, given a set of generated meta-tasks $\mathcal{T}_{\mathrm{GEN}}=\{\widetilde{\mathcal{T}}_{1},\ldots,\widetilde{\mathcal{T}}_{N}\}$, where N denotes the number of tasks, the task selector learns to produce a score that measures the authenticity of a task and decides which tasks are beneficial for meta-training. For a task $\widetilde{\mathcal{T}}_{i}$, the score is calculated as follows:

$$s(\widetilde{\mathcal{T}}_{i})=\frac{1}{K}\sum_{j=1}^{K}\mathrm{MLP}_{\mathrm{rs}}(\mathbf{e}_{h_{j}}+\mathbf{r}_{\widetilde{\mathcal{T}}_{i}}-\mathbf{e}_{t_{j}}),\tag{12}$$

where MLPrs(·|ψ) is an L-layer fully connected neural network with output dimension 1, parameterized by ψ. Here $\mathbf{r}_{\widetilde{\mathcal{T}}_{i}}$ is obtained from $\widetilde{\mathbf{SH}}_{i}$ and $\widetilde{\mathbf{ST}}_{i}$ following Eq. (1), ehj denotes the embedding of the j-th head entity in $\widetilde{\mathbf{SH}}_{i}$, and etj is the embedding of the j-th tail entity in $\widetilde{\mathbf{ST}}_{i}$. For better exploration, the task selector adopts a stochastic policy π to choose meta-tasks under the categorical distribution p = Cat(·|TGEN). The probability of selecting task $\widetilde{\mathcal{T}}_{i}$ is calculated by $p(\widetilde{\mathcal{T}}_{i})=s(\widetilde{\mathcal{T}}_{i})/\sum_{i=1}^{N}s(\widetilde{\mathcal{T}}_{i})$. With the obtained sampling probabilities, we can select B meta-tasks from TGEN. Meta-training of the few-shot learner FS(·) can then be conducted following Eq. (1) to Eq. (4) using the selected B meta-tasks. Hence the few-shot learner is optimized on the B generated meta-tasks as follows:

$$\mathcal{L}_{\widetilde{\mathrm{FS}}}=\sum_{i=1}^{B}\mathcal{L}_{\widetilde{\mathcal{Q}}_{i}},\tag{13}$$

where $\mathcal{L}_{\widetilde{\mathcal{Q}}_{i}}$ denotes the loss on the generated query set.

To motivate the task selector towards the selection of precise meta-tasks, we evaluate the task selector with feedback signals from the few-shot learner, which reflect the effectiveness of the meta-tasks for training the few-shot learner. Specifically, we denote ΦOLD as the parameters of the current few-shot learner and ΦNEW as the parameters after the few-shot learner undertakes a temporary update with Eq. (13). Leveraging FS(·|ΦOLD) and FS(·|ΦNEW) and following Eq. (1) to Eq. (4), the reward R is defined as follows:

$$R=\tanh\Big(\frac{1}{|\mathcal{T}_{val}|}\sum_{i=1}^{|\mathcal{T}_{val}|}\big(\mathcal{L}_{\mathcal{Q}_{i}}^{\mathrm{OLD}}-\mathcal{L}_{\mathcal{Q}_{i}}^{\mathrm{NEW}}\big)\Big),\tag{14}$$

where $\mathcal{L}_{\mathcal{Q}_{i}}^{\mathrm{OLD}}$ and $\mathcal{L}_{\mathcal{Q}_{i}}^{\mathrm{NEW}}$ are the loss functions defined in Eq. (5), computed on meta-validation tasks with FS(·|ΦOLD) and FS(·|ΦNEW), respectively. Thereby, a performance improvement of ΦNEW over ΦOLD rewards the task selector and reinforces it to choose the corresponding meta-tasks. To optimize the selector, we adopt the policy gradient algorithm REINFORCE (Williams, 1992) to overcome the non-differentiability of the sampling process. The optimization works by:

$$\psi\leftarrow\psi-\alpha\nabla_{\psi}\log\pi_{\psi}(R-b),\tag{15}$$

where we use π, with a slight abuse of notation, to denote the task selector parameterized by ψ, and α is the learning rate. Besides, b denotes a baseline function, e.g., the moving average of the reward, for reducing computational variance.
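The following sketch puts Eqs. (12)-(15) together in PyTorch. The sigmoid that keeps the scores positive before normalization, the summed log-probability of the sampled subset, and the standard REINFORCE loss form are simplifying assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class TaskSelector(nn.Module):
    """Scores each generated task (Eq. (12)) and turns the scores into
    selection probabilities p(T_i) = s(T_i) / sum_j s(T_j)."""
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, 100), nn.LeakyReLU(),
                                 nn.Linear(100, 50), nn.LeakyReLU(),
                                 nn.Linear(50, 1))

    def selection_probs(self, e_h, r, e_t):
        # e_h, e_t: (N, K, d) generated support entities; r: (N, d) relation vectors.
        s = torch.sigmoid(self.mlp(e_h + r.unsqueeze(1) - e_t)).squeeze(-1).mean(dim=1)
        return s / s.sum()                      # (N,) probabilities over tasks

def reinforce_step(selector, optimizer, e_h, r, e_t, reward_fn, B, baseline):
    """Sample B tasks, obtain the reward of Eq. (14) from the few-shot learner,
    and apply a REINFORCE update with baseline (cf. Eq. (15))."""
    probs = selector.selection_probs(e_h, r, e_t)
    idx = torch.multinomial(probs, B, replacement=False)   # chosen meta-tasks
    reward = reward_fn(idx)        # tanh of the validation-loss improvement
    log_prob = torch.log(probs[idx]).sum()     # simplified log-prob of the subset
    loss = -(reward - baseline) * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return idx, reward
```

Here `reward_fn` stands for Eq. (14): the few-shot learner is temporarily updated on the selected tasks and the change in its meta-validation loss, squashed by tanh, is returned as the reward.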
## 4.4 Optimization and Inference

We employ an iterative optimization strategy to optimize the three components of the proposed method, i.e., the few-shot learner, the task generator, and the task selector. With the pre-trained entity representations, the task generator is first optimized to generate meta-tasks TGEN; the task selector is then applied to conduct the selection on TGEN; next, the few-shot learner is optimized using the selected meta-tasks and the given meta-tasks Ttrain. With the obtained reward R, we update the task selector according to Eq. (15). The whole process is repeated for enough iterations until all components converge.

In the inference stage, we use the optimized few-shot learner to make inferences on the new relations in the meta-testing set Ttest. Similar to the meta-training stage, FS(·) learns the relation representation of a new relation using its support set in Ttest. The generated representation is then leveraged to evaluate the test triples in the query set. Note that FS(·) is no longer updated using the query set in the meta-testing stage.

## 5 Experiments

## 5.1 Experimental Settings

Dataset. We adopt a multilingual KG dataset (Chen et al., 2020) for evaluation. Specifically, we select four language-specific KGs extracted from the French (FR), Spanish (ES), Japanese (JA), and Greek (EL) DBpedia (Lehmann et al., 2015) as low-resource KGs, as they only have a small number of frequent relations that can offer triples for training task construction. Table 1 shows the statistics of the datasets used.

Baselines. To validate the effectiveness of our method, we compare it with two groups of baseline methods. The first group consists of canonical KG completion models, including TransE (Bordes et al., 2013), DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2017), SimplE (Kazemi and Poole, 2018), and WRAN (Zhang et al., 2020b).
The second group is the few-shot KG completion models, including GMatching (Xiong et al., 2018), FSRL (Zhang et al., 2020a), MetaR (Chen et al., 2019), and GANA (Niu et al., 2021). Implementation. We implement the proposed method using the Python library Pytorch and conduct all the experiments on an NVIDIA GeForce RTX 3090Ti. Following the popular setting (Zhang et al., 2020a; Xiong et al., 2018), we remove all the relations in T*train* and T*test* from KG G and obtain a *background knowledge graph* for pre-training the KG embedding leveraging DistMult (Yang et al., 2015) as the KG encoder. After that, the pre-trained embeddings of entities and relations can be used in the proposed method. Following the popular procedure and the setting in the related works (Zhang et al., 2020a; Xiong et al., 2018), we extract the few-shot learning tasks and divide them into the meta-training, meta-validation, and meta-testing tasks. Specifically, we select the relations with less than 500 but more than 50 triples for preparing the few-shot learning tasks following the popular setting (Zhang et al., 2020a; Xiong et al., 2018). Note that even though the meta-testing tasks include relations with more than 50 triples, we only sample K = 1 or K = 3 triples as the support set during the meta-testing phase and adopt the rest for evaluation to imitate the real few-shot scenario. For the few-shot learner, we use three-layer MLP to implement FS(·) with LeakyReLU as the activation function. The hidden units of each layer in FS(·) are set as 500, 200, and 100, respectively. And we set the margin parameter γ as 1.0 and set the step size β as 5.0. For the task generator, we use two-layer MLP to implement MLPenc(·) and MLPdec(·) with ReLU as the activation function. The hidden units of each layer in MLPenc(·) are set as 512, and 256, respectively. And the hidden units of each layer in MLPdec(·) are set as 256, and 512, respectively. Besides, we use a linear layer to implement fµ(·) and fσ(·) with the layer size as 256. For the task selector, we use two-layer MLP to implement MLPrs(·) with LeakyReLU as the activation function. The hidden units of each layer in MLPrs(·) are set as 100, and 50, respectively. The dimension of entity embedding is set as 100 for all methods. Besides, we also find the optimal parameters or follow the original paper to achieve the best performance for baseline methods. Note that all triples in the background KGs and the meta-training tasks, as well as all triples corresponding to entity pairs in the support set of metavalidation and meta-testing tasks, should be used to train the canonical KG completion models. For optimization, we employ Adam optimizer to optimize all loss functions with a learning rate of 0.001. The model trained on the meta-training tasks can be used for the meta-validation tasks every 1000 epochs, and the model parameters and corresponding performance will be recorded. Then the model with the best performance on MRR can be used as the final model for meta-testing. Besides, we use early stopping with 30 patient epochs during metatraining. Following the previous work, we report the Hits@1, Hits@5, Hits@10, and MRR (mean reciprocal rank) results to evaluate the performance of few-shot KG completion. Each evaluation is repeated 3 times and averaged results are reported. ## 5.2 Experimental Results Performance comparison. The results of all evaluated KG completion models on four different low-resource KGs with K = 1 and K = 3 are shown in Table 2. 
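For reference, the snippet below is a minimal sketch (illustrative names, not the evaluation code used in this paper) of how MRR and Hits@k are computed from the rank of the true tail entity among its candidates, where a lower distance score indicates a more plausible triple.

```python
import numpy as np

def rank_of_true_tail(scores: np.ndarray, true_idx: int) -> int:
    """scores: distance scores for the true tail and its candidates (lower is better)."""
    return int((scores < scores[true_idx]).sum()) + 1

def mrr_and_hits(all_ranks, ks=(1, 5, 10)):
    """all_ranks: rank of the true tail for each query triple in the meta-testing tasks."""
    ranks = np.asarray(all_ranks, dtype=float)
    metrics = {"MRR": float((1.0 / ranks).mean())}
    for k in ks:
        metrics[f"Hits@{k}"] = float((ranks <= k).mean())
    return metrics

# Toy example.
print(mrr_and_hits([1, 3, 12, 2, 7]))
# {'MRR': 0.41..., 'Hits@1': 0.2, 'Hits@5': 0.6, 'Hits@10': 0.8}
```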
We can observe that (1) the proposed method FLow-KGC has superior performance over canonical KG completion models and few-shot KG completion models by Hits@1, Hits@5, Hits@10, and MRR; (2) The long-tail problem of low-resource KGs has a significant im- Table 2: Few-shot KG completion comparison on low-resource KGs. The few-shot size K = 1 and K = 3. The best results are in bold, and the strongest baseline is indicated with *. | KGs | ES (Spanish) | EL (Greek) | FR (French) | JA (Japanese) | | | | | | | | | | |------------------------------|---------------------------------------------------------------------------------------------------------|---------------------|---------------|-----------------------------|---------------------|----------------------------|-------------|----------------------|-------------|--------------------|-------------|-------|-------| | Methods | MRR Hits@1 Hits@5 Hits@10 MRR Hits@1 Hits@5 Hits@10 MRR Hits@1 Hits@5 Hits@10 MRR Hits@1 Hits@5 Hits@10 | | | | | | | | | | | | | | TransE | 0.068 0.003 | 0.127 | 0.191 | 0.043 0.001 | 0.078 | 0.124 | 0.078 0.001 | 0.159 | 0.213 | 0.103 0.008 | 0.211 | 0.271 | | | DistMult | 0.051 0.019 | 0.055 | 0.108 | 0.059 0.029 | 0.077 | 0.116 | 0.035 0.008 | 0.045 | 0.081 | 0.065 0.008 | 0.103 | 0.161 | | | ComplEx | 0.089 0.040 | 0.124 | 0.188 | 0.162 0.105 | 0.217 | 0.263 | 0.075 0.036 | 0.106 | 0.163 | 0.091 0.037 | 0.135 | 0.188 | | | SimplE | 0.048 0.017 | 0.054 | 0.111 | 0.059 | 0.02 | 0.093 | 0.132 | 0.029 0.006 | 0.036 | 0.068 | 0.076 0.018 | 0.111 | 0.21 | | WRAN | 0.105 0.035 | 0.165 | 0.224 | 0.084 0.053 | 0.117 | 0.242 | 0.097 0.018 | 0.162 | 0.223 | 0.096 0.042 | 0.135 | 0.206 | | | GMatching | 0.186 0.153 | 0.248 | 0.289 | 0.212 0.146 | 0.297 | 0.329 | 0.168 0.125 | 0.203 | 0.274 | 0.162 0.143 | 0.258 | 0.278 | | | GANA | 0.195 0.167 | 0.254 | 0.265 | 0.202 0.162 | 0.294 | 0.324 | 0.172 0.128 | 0.216 | 0.273 | 0.167 0.081 0.268* | 0.288 | | | | FSRL | 0.223 0.176 | 0.265 | 0.298 | 0.245 0.176 0.305* 0.347* | 0.183 0.145 | 0.239 | 0.304 | 0.194 0.165 | 0.263 | 0.297* | | | | | MetaR | 0.242* 0.194* 0.285* 0.317* | 0.247* 0.199* 0.284 | 0.338 | 0.210* 0.158* 0.256* 0.315* | 0.227* 0.186* 0.264 | 0.285 | | | | | | | | | FLow-KGC 0.271 0.221 | 0.311 | 0.354 | 0.267 0.210 | 0.326 | 0.375 | 0.240 0.185 | 0.282 | 0.348 | 0.234 0.189 | 0.278 | 0.315 | | | | Improv. (%) 11.98 13.92 | 9.12 | 11.67 | 8.10 | 5.52 | 6.88 | 8.07 | 14.28 17.08 | 10.15 | 10.47 | 3.08 | 1.61 | 5.30 | 10.52 | | (b) The few-shot size K = 3. 
| | | | | | | | | | | | | | | KGs | ES (Spanish) | EL (Greek) | FR (French) | JA (Japanese) | | | | | | | | | | | Methods | MRR Hits@1 Hits@5 Hits@10 MRR Hits@1 Hits@5 Hits@10 MRR Hits@1 Hits@5 Hits@10 MRR Hits@1 Hits@5 Hits@10 | | | | | | | | | | | | | | TransE | 0.072 0.003 | 0.142 | 0.199 | 0.081 0.001 | 0.163 | 0.228 | 0.098 0.001 | 0.202 | 0.258 | 0.09 | 0.007 | 0.174 | 0.235 | | DistMult | 0.076 0.028 | 0.102 | 0.163 | 0.149 0.087 | 0.205 | 0.272 | 0.064 | 0.02 | 0.089 | 0.156 | 0.142 0.074 | 0.197 | 0.266 | | ComplEx | 0.114 0.097 | 0.147 | 0.235 | 0.164 0.096 | 0.237 | 0.305 | 0.139 0.076 | 0.206 | 0.261 | 0.137 0.069 | 0.217 | 0.27 | | | SimplE | 0.008 0.098 | 0.118 | 0.178 | 0.134 0.069 | 0.207 | 0.274 | 0.058 0.017 | 0.081 | 0.145 | 0.103 0.039 | 0.158 | 0.254 | | | WRAN | 0.125 0.069 | 0.174 | 0.241 | 0.168 0.082 | 0.213 | 0.267 | 0.125 0.089 | 0.223 | 0.244 | 0.146 0.083 | 0.187 | 0.256 | | | GMatching | 0.201 0.164 | 0.252 | 0.309 | 0.226 0.164 | 0.272 | 0.347 | 0.175 0.138 | 0.226 | 0.292 | 0.186 0.152 | 0.263 | 0.307 | | | GANA | 0.206 0.173 | 0.242 | 0.252 | 0.265* 0.208* 0.307 | 0.389* | 0.203 0.158* 0.251* 0.312* | 0.151 0.093 | 0.246 | 0.310 | | | | | | FSRL | 0.234 0.185 | 0.281 | 0.327 | 0.257 0.203 0.310* | 0.384 | 0.182 0.143 | 0.246 | 0.284 | 0.205 0.162 | 0.269 | 0.335* | | | | MetaR | 0.247* 0.199* 0.297* 0.332* | 0.252 0.197 | 0.302 | 0.365 | 0.205* 0.154 | 0.246 | 0.297 | 0.224* 0.167* 0.285* | 0.320 | | | | | | FLow-KGC 0.275 0.227 | 0.320 | 0.348 | 0.270 0.217 | 0.323 | 0.421 | 0.223 0.164 | 0.283 | 0.345 | 0.248 0.199 | 0.292 | 0.341 | | | | Improv. (%) 11.33 14.07 | 7.74 | 4.82 | 1.88 | 4.32 | 4.19 | 8.22 | 8.78 | 3.79 | 12.74 | 10.57 | 10.71 19.16 | 2.45 | 1.79 | pact on canonical KG completion models, such as TransE, DisMult, ComplEx, and SimplE, which have dramatic performance degradation when inferring the missing triples associated with rare relations; (3) Few-shot KG completion models make progress in improving the inference by pre-training on similar tasks to learn better initialization for rare relations. However, their performance is confined by the deficiency of training tasks; (4) The superior results of FLow-KGC show the effectiveness of leveraging generated meta-tasks to mitigate the influence of the long-tail problem on low-resource KG completion and improve the generalization. Ablation study. To gain deeper insight into the effectiveness of each component in the proposed model, we conduct ablation studies by comparing the following variants with FLow-KGC: (1) FLowKGC-FSL that only keeps a few-shot learner and does not adopt the task generator and task selector. (2) FLow-KGC-w/o-S that adopts the few-shot learner to learn representation and task generator to generate tasks, without the adaptive selection enabled by the task selector. The results on French (FR) and Greek (EL) KG are summarized in Ta- | FR | MRR Hits@1 Hits@5 Hits@10 | | | | |----------------------|-----------------------------|-------|-------|-------| | FLow-KGC-FSL | 0.208 | 0.160 | 0.256 | 0.312 | | FLow-KGC-w/o-S 0.231 | 0.179 | 0.276 | 0.337 | | | FLow-KGC | 0.240 | 0.185 | 0.282 | 0.348 | | EL | MRR Hits@1 Hits@5 Hits@10 | | | | | FLow-KGC-FSL | 0.245 | 0.198 | 0.285 | 0.334 | | FLow-KGC-w/o-S 0.262 | 0.205 | 0.314 | 0.367 | | | FLow-KGC | 0.267 | 0.210 | 0.326 | 0.375 | ble 3, we see that the performance of FLow-KGCFSL is interior to that of others because the variant is a basic few-shot learner and easy to overfit the small set of training tasks. 
FLow-KGC-w/oS outperforms FLow-KGC-FSL, because the task generator can generate useful synthetic meta-tasks which complement the existing tasks and mitigate the overfitting of the few-shot learner to some extent. FLow-KGC combining the task generator and task selector achieves the best performance among these variants because the task selector distinguish beneficial tasks from noisy tasks and utilizes the ![8_image_0.png](8_image_0.png) Figure 2: Results on Spanish (ES) and Greek (EL) KGs when varying few-shot size K. ![8_image_1.png](8_image_1.png) selected tasks to learn a better few-shot learner. Impact of few-shot size K. To investigate the impact of few-shot size K on the performance of fewshot KG completion, we test MetaR, GANA, and FLow-KGC with size K = 1, 3, 5, 7, 9 on Spanish (ES) and Greek (EL) KGs. Figure 2 shows Hits@1 results of three methods with different K. We see that FLow-KGC consistently outperforms the other two baseline methods with different sizes K, demonstrating the effectiveness of the proposed method. And the performance of all methods gains improvement with K increasing because the larger support set provides more entity pairs to optimize the few-shot learner and learn more accurate representations for rare relations. Impact of the number of synthetic tasks. The number of virtual relations V and the number of generated tasks Nv for each virtual relation are hyper-parameters to decide the number of synthetic tasks. Figure 3 shows MRR results of FLow-KGC with different V and Nv on French (FR) and Spanish (ES) KGs. First, with a fixed Nv = 20 for French KG and a fixed Nv = 25 for Spanish KG, we find that FLow-KGC has superior performance when V = 10. Second, with a fixed V = 10, FLow-KGC with Nv = 20 and Nv = 25 performs best for French KG and Spanish KG, respectively. We think the reasons behind the observations are similar. A small V or Nv cannot effectively complement the current tasks, and a larger V or Nv, might introduce more noisy tasks and damage the performance of the few-shot learner. Impact of the proportion of task selection. FLow-KGC selects B beneficial tasks from the generated tasks to achieve the adaptive selection. Here we denote the selection proportion as B V ×Nv . To evaluate the impact of the proportion of task selection, we evaluate FLow-KGC with selection proportion 10%, 30%, 50%, 70%, 90%. Figure 4 shows the results on two KGs (ES and JA). We find that FLow-KGC with a selection proportion of 50% achieves the best performance. We think that a larger proportion unavoidably introduces more noisy tasks into the meta-training phase and a smaller proportion discards extra beneficial tasks, which hurts the effectiveness of FLow-KGC. ## 6 Conclusion In this paper, we proposed a novel few-shot KG completion model to ease the adverse impact of the long-tail issue on low-resource KG completion. Specifically, we designed a task generator based on a conditional variational autoencoder to generate synthetic meta-tasks and proposed a task selector to adaptively select beneficial meta-tasks for optimizing a few-shot learner, which further provides the feedback to update the task selector following the principle of reinforcement learning. These three components constitute our method FLow-KGC. Extensive experimental results demonstrate the rationality and effectiveness of our proposed method. 
## Limitations Despite achieving superior performance, our proposed method requires manual selection for hyperparameters to decide the number of tasks, i.e., the number of virtual relations V and the number of synthetic tasks Nv for each virtual relation. In future work, we target developing the method with automatic adjustment to add/remove virtual relations and the corresponding tasks according to the status of the few-shot learner with the training going on by curriculum learning. Besides, although we adopt a task selector to adaptively select beneficial tasks, it is still inevitable to bring noisy tasks in the meta-training stage. We will explore the strategy to achieve better denoising. ## References Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. Stephen Bonner, Ian P Barrett, Cheng Ye, Rowan Swiers, Ola Engkvist, Andreas Bender, Charles Tapley Hoyt, and William Hamilton. 2021. A review of biomedical datasets relating to drug discovery: A knowledge graph perspective. *arXiv preprint* arXiv:2102.10062. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. *Advances in neural information processing systems*, 26. Yixin Cao, Zhiyuan Liu, Chengjiang Li, Juanzi Li, and Tat-Seng Chua. 2019. Multi-channel graph neural network for entity alignment. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1452–1461. Mingyang Chen, Wen Zhang, Wei Zhang, Qiang Chen, and Huajun Chen. 2019. Meta relational learning for few-shot link prediction in knowledge graphs. In EMNLP-IJCNLP, pages 4217–4226. Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In IJCAI, pages 1511–1517. Xuelu Chen, Muhao Chen, Changjun Fan, Ankith Uppunda, Yizhou Sun, and Carlo Zaniolo. 2020. Multilingual knowledge graph completion via ensemble knowledge transfer. *Findings of the Association for* Computational Linguistics: EMNLP 2020. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In ACL, pages 8440–8451. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In *AAAI*. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In *International conference on machine learning*, pages 1126–1135. PMLR. Luis Galárraga, Christina Teflioudi, Katja Hose, and Fabian M Suchanek. 2015. Fast rule mining in ontological knowledge bases with amie+. *The VLDB* Journal, 24(6):707–730. Qingyu Guo, Fuzhen Zhuang, Chuan Qin, Hengshu Zhu, Xing Xie, Hui Xiong, and Qing He. 2020. A survey on knowledge graph-based recommender systems. *IEEE Transactions on Knowledge and Data* Engineering. Seyed Mehran Kazemi and David Poole. 2018. Simple embedding for link prediction in knowledge graphs. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 4289–4300. Diederik P Kingma and Max Welling. 2014. 
Stochastic gradient vb and the variational auto-encoder. In Second International Conference on Learning Representations, ICLR, volume 19, page 121. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic web, 6(2):167–195. Xin Lv, Yuxian Gu, Xu Han, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2019. Adapting meta knowledge graph information for multi-hop reasoning over few-shot relations. In *EMNLP-IJCNLP*, pages 3376–3381. Tom Mitchell, William Cohen, Estevam Hruschka, Partha Talukdar, Bishan Yang, Justin Betteridge, Andrew Carlson, Bhavana Dalvi, Matt Gardner, Bryan Kisiel, et al. 2018. Never-ending learning. *Communications of the ACM*, 61(5):103–115. Tu Dinh Nguyen, Dat Quoc Nguyen, Dinh Phung, et al. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 327–333. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2012. Factorizing yago: scalable machine learning for linked data. In Proceedings of the 21st international conference on World Wide Web, pages 271–280. Guanglin Niu, Yang Li, Chengguang Tang, Ruiying Geng, Jian Dai, Qiao Liu, Hao Wang, Jian Sun, Fei Huang, and Luo Si. 2021. Relational learning with gated and attentive neighbor aggregator for few-shot knowledge graph completion. *SIGIR*. Shichao Pei, Lu Yu, Robert Hoehndorf, and Xiangliang Zhang. 2019a. Semi-supervised entity alignment via knowledge graph embedding with awareness of degree difference. In *The World Wide Web Conference*. Shichao Pei, Lu Yu, Guoxian Yu, and Xiangliang Zhang. 2020. Rea: Robust cross-lingual entity alignment between knowledge graphs. *Proceedings of the 26th* ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Shichao Pei, Lu Yu, and Xiangliang Zhang. 2019b. Improving cross-lingual entity alignment via optimal transport. In *International Joint Conference on Artificial Intelligence*. Janarthanan Rajendran, Alexander Irpan, and Eric Jang. 2020. Meta-learning requires meta-augmentation. Advances in Neural Information Processing Systems, 33:5705–5715. Jiawei Sheng, Shu Guo, Zhenyu Chen, Juwei Yue, Lihong Wang, Tingwen Liu, and Hongbo Xu. 2020. Adaptive attentional network for few-shot knowledge graph completion. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1681–1691. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. *Advances in neural* information processing systems, 28. Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018a. Bootstrapping entity alignment with knowledge graph embedding. In *IJCAI*, volume 18, pages 4396–4402. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2018b. Rotate: Knowledge graph embedding by relational rotation in complex space. In *International Conference on Learning Representations*. Zhenwei Tang, Shichao Pei, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Robert Hoehndorf, and Xiangliang Zhang. 2022. Positive-unlabeled learning with adversarial data augmentation for knowledge graph completion. *IJCAI*. 
Théo Trouillon, Christopher R Dance, Éric Gaussier, Johannes Welbl, Sebastian Riedel, and Guillaume Bouchard. 2017. Knowledge graph completion via complex tensor factorization. *Journal of Machine* Learning Research, 18:1–38. Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. 2020. Composition-based multirelational graph convolutional networks. In *International Conference on Learning Representations*. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. *Machine learning*. Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2018. One-shot relational learning for knowledge graphs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1980–1990. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. *International Conference on Learning Representations*. Shiquan Yang, Rui Zhang, and Sarah Erfani. 2020. Graphdialog: Integrating graph knowledge into endto-end task-oriented dialogue systems. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 1878–1888. Chuxu Zhang, Huaxiu Yao, Chao Huang, Meng Jiang, Zhenhui Li, and Nitesh V Chawla. 2020a. Few-shot knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3041–3048. Ningyu Zhang, Shumin Deng, Zhanlin Sun, Jiaoyan Chen, Wei Zhang, and Huajun Chen. 2020b. Relation adversarial network for low resource knowledge graph completion. In *Proceedings of The Web Conference 2020*, pages 1–12. Wen Zhang, Bibek Paudel, Wei Zhang, Abraham Bernstein, and Huajun Chen. 2019. Interaction embeddings for prediction and explanation in knowledge graphs. In *Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining*, pages 96–104. Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J Smola, and Le Song. 2018. Variational reasoning for question answering with knowledge graph. In Thirty-second AAAI conference on artificial intelligence. Zhao Zhang, Fuzhen Zhuang, Hengshu Zhu, Zhiping Shi, Hui Xiong, and Qing He. 2020c. Relational graph neural network with hierarchical attention for knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9612–9619. Wenxuan Zhou, Fangyu Liu, Ivan Vulic, Nigel Collier, ´ and Muhao Chen. 2021. Prix-lm: Pretraining for multilingual knowledge base construction. arXiv preprint arXiv:2110.08443. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section of limitations ✗ A2. Did you discuss any potential risks of your work? No potential risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
chen-2023-incomplete
Incomplete Utterance Rewriting as Sequential Greedy Tagging
https://aclanthology.org/2023.findings-acl.456
The task of incomplete utterance rewriting has recently gotten much attention. Previous models struggled to extract information from the dialogue context, as evidenced by the low restoration scores. To address this issue, we propose a novel sequence tagging-based model, which is more adept at extracting information from context. Meanwhile, we introduce speaker-aware embedding to model speaker variation. Experiments on multiple public datasets show that our model achieves optimal results on all nine restoration scores while having other metric scores comparable to previous state-of-the-art models. Furthermore, benefitting from the model{'}s simplicity, our approach outperforms most previous models on inference speed.
## Yunshan Chen SF Technology Co., Ltd. chenyunshan.ai@gmail.com ## Abstract The task of incomplete utterance rewriting has recently gotten much attention. Previous models struggled to extract information from the dialogue context, as evidenced by the low restoration scores. To address this issue, we propose a novel sequence tagging-based model, which is more adept at extracting information from context. Meanwhile, we introduce speakeraware embedding to model speaker variation. Experiments on multiple public datasets show that our model achieves optimal results on all nine restoration scores while having other metric scores comparable to previous state-of-theart models. Furthermore, benefitting from the model's simplicity, our approach outperforms most previous models on inference speed. ## 1 Introduction Recent years have witnessed increasing attention in dialogue systems mainly due to its promising potential for applications like virtual assistants or customer support systems (Hauswald et al., 2015; Debnath et al., 2018). However, studies (Carbonell, 1983) show that users of dialogue systems tend to use incomplete utterances which usually omit (a.k.a. ellipsis) or refer back (a.k.a. co-reference) to the concepts that appeared in previous dialogue contexts. (also known as non-sentential utterances, (Fernández et al., 2005). Thus, dialogue systems must understand these incomplete utterances to make appropriate responses. To tackle the problem, the task of Incomplete Utterance Rewriting(IUR, also known as context rewriting) (Su et al., 2019; Pan et al., 2019; Elgohary et al., 2019), which aims to rewrite an incomplete utterance into an utterance that is semantically equivalent but self-contained to be understood without context, has recently become an increasing focus of NLP research. As depicted in Table 1, the incomplete utterance u3 not only omits the subject "深圳"(Shenzhen), but also refers to the | Turn | Utterance with Translation | |------------------------------------------|--------------------------------------------| | u1 (A) | 深圳最近天气怎么样? | | (How is the recent weather in Shenzhen?) | | | u2 (B) | 最近经常阴天下雨。 | | (It is always raining recently.) | | | u3 (A) | 冬天就是这样的。 (Winter is like this.) | | 深圳冬天就是经常阴天下雨。 | | | u ′ 3 | (It is always raining in winter Shenzhen.) | Table 1: An example dialogue between speaker A and B, including the context utterances (u1, u2), the incomplete utterance (u3) and the rewritten utterance (u′3). semantics of "经常阴天下雨"(always raining) via the pronoun "这样"(this). The downstream dialogue model only needs to take the last utterance by explicitly recovering the dropped information into the latest utterance. Thus, the burden of long-range reasoning can be primarily relieved, making the downstream dialogue modeling more accurate. The previous top work on building IUR model mainly includes generation-based methods and tagging-based methods. Generation-based solution (Su et al., 2019; Pan et al., 2019; Elgohary et al., 2019) consider this task as a standard text-generation problem, adopting a sequence-tosequence model with a copy mechanism (Gulcehre et al., 2016; Gu et al., 2016; See et al., 2017). However, those methods generate the rewritten utterance from scratch, which introduces an over-large search space and neglects the critical trait that the main structure of a rewritten utterance is always the same as the incomplete utterance. 
In order to break through those limitations, tagging-based approach (Liu et al., 2020; Hao et al., 2021; Jin et al., 2022; Zhang et al., 2022; Wang et al., 2022) was proposed. For specifically, here we consider models like RUN (Liu et al., 2020) as a tagging-based method. Its semantic segmentation Figure 1: The illustration of the main learning task of the sequence tagging model SGT(Sequential Greedy Tagging). We adopt the same example dialogue from Table 1. Considering SGT is position-dependent, and word order between Chinese and English is different, the corresponding English utterance is not provided, which is the same for Figure 3. task can be analogous to the sequence tagging task. The main difference is that the semantic segmentation task is tagging in two-dimensional coordinates, while the sequence annotation task is tagging in one-dimensional coordinates. The previous top tagging-based approach generally formalizes IUR as learning the edit operation and corresponding location. The tagging-based approach enjoys a smaller search space than the generation-based approach and can better utilize the information that the main structure of a rewritten utterance is always the same as the incomplete utterance. Despite their success, existing approach that learning edit operation and the corresponding location has difficulty handling situations where multiple inserts correspond to one position. Moreover, models like RUN adopt a heavy model that takes ten convolution layers in addition to the BERT encoder, which will increase its training time and slows down its infer speed. More critically, although BERT (Devlin et al., 2019) has shown to be powerful in extracting information, the generally low restoration scores prove that previous BERTbased models are ineffective in extracting the information needed for IUR from historical utterances. Finally, the experimental results of SA-BERT (Gu et al., 2020) demonstrate that explicitly modeling speaker changes has a specific enhancement effect on modeling multi-turn dialogue tasks. The previous approach did not model this critical information. To address these issues, we propose a novel sequence tagging model named SGT(Sequential Greedy Tagging), which is not based on learning editing operations and can significantly improve the restoration score and inference speed. Our solution was derived from the following thinking: First, we consider that in the dialogue process, any complete utterance is composed by only of a few fragments. For example, "I love you" includes three components: subject, verb, and object. Even if it is expanded with modifications and qualifications, its composition is still minimal. Based on this insight, we thought it would be possible to build a model to identify the fragments and their order from dialogue history to form a target completed utterance. And then, splice those fragments together in sequence and get the complete utterance. Meanwhile, in order to keep the number of fragments constituting the target rewritten utterance relatively small, we adopt the greedy tagging strategy. Our model will identify all the fragments and their order required to form a completed utterance; each fragment is the longest fragment found in the given order. We might as well call this fragment **GLCS** (Greedy Longest Common Subsequence). 
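As a rough illustration of this greedy fragment search, the sketch below repeatedly finds the longest span of the dialogue history that is a prefix of the remaining reference rewrite. It is a simplified approximation of the construction procedure detailed in Section 3.3 (for instance, it ignores connection words and the per-utterance matching order), and the function name is ours.

```python
# Illustrative sketch (not the authors' code) of the greedy fragment idea:
# repeatedly take the longest span in the history that is a prefix of the
# remaining reference rewrite, record it as the next GLCS, and continue.

def greedy_glcs(history_tokens, reference_tokens):
    fragments, rest = [], list(reference_tokens)
    while rest:
        best = []
        # longest history substring that is also a prefix of `rest`
        for start in range(len(history_tokens)):
            length = 0
            while (start + length < len(history_tokens)
                   and length < len(rest)
                   and history_tokens[start + length] == rest[length]):
                length += 1
            if length > len(best):
                best = rest[:length]
        if not best:      # next reference token never appears in the history
            break
        fragments.append(best)
        rest = rest[len(best):]
    return fragments      # splicing the fragments in order rebuilds the covered prefix

print(greedy_glcs(list("abxcdy"), list("abcd")))  # [['a', 'b'], ['c', 'd']]
```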
Specifically, we use the tag type to represent the order of GLCS for composing the target rewrite utterance, For example, the first GLCS that constitutes a rewritten utterance would be tagged as A, and the second is B, the third is C, and so on. In the above manner, we converted IUR into a simple sequence tagging task, as illustrated in Figure 1. After the model has identified all GLCSs from the dialogue history through this strategy, the target rewritten utterance can be obtained by splicing each GLCS in alphabetical order according to its tag. Furthermore, we introduce speaker-aware embedding to model the speaker changes in different rounds. Finally, to better perceive the boundaries of each tagging mention, we add two simple losses in addition to the sequence labeling loss. In summary, our contributions are as follows: 1. We proposed SGT, a novel paradigm to model IUR. Due to the simplicity and effectiveness of modeling, our approach can fully utilize the sequence labeling capabilities of BERT to extract information from historical utterances and thus restore incomplete utterances with more accuracy. Experiments on several datasets show that our method significantly improved the ability to extract mentions from context, which are argued to be harder to copy by (Pan et al., 2019). 2. To the best of our knowledge, we are the first to introduce speaker-aware embedding to model IUR. 3. Finally, benefit from the model's simplicity. Our inference speed is faster than most previous models. ## 2 Related Work Earlier efforts (Su et al., 2019; Elgohary et al., 2019) treated dialogue utterance rewriting as a common text generation problem and integrated seq-toseq models with copy mechanisms to model this task. Later work (Pan et al., 2019; Zhou et al., 2019; Huang et al., 2021) explore task-specific features for additional gains in performance. For example, (Pan et al., 2019) adopt a pipeline-based method. The idea is to detect keywords first and then append those words to the context and adopt a pointer generator that takes the output of the first step to produce the output. However, this two-step method inevitably accumulates errors. SRL (Xu et al., 2020) trains a semantic labeling model to highlight the central meaning of keywords in the dialogue as a sort of prior knowledge for the model. To obtain an accurate SRL model for dialogues, they manually annotate SRL information for more than 27,000 dialogue turns, which is costly and time-consuming. RUN (Liu et al., 2020) convert this task into a semantic segmentation problem, a significant task in computer vision. In particular, their model generates a word-level edit matrix, which contains the operations of insertion and substitution for each original utterance. Rather than word embeddings, RAU (Zhang et al., 2022) directly extracts ellipsis and co-reference relationships from Transformer's self-attention weighting matrix and edits the original text accordingly to generate complete utterances. RUN++ (Wang et al., 2022) Introduce contrastive learning and keyword detection tasks to model the problem jointly. Both RAU and RUN++ make significant improvements in most metrics on several datasets. Although some additional effective strategies exist. It is still in the same paradigm as RUN, learning edit matrix by cast IUR as a semantic segmentation task. RAST (Hao et al., 2021) is the first work to convert dialogue utterance rewriting into a sequence tagging task. 
It takes experimentation to prove that most models for this task suffer from the robustness issue, i.e., performance drops when testing on a different dataset. By contrast, RAST is more robust than the previous works on cross-domain situations. Moreover, this work design additional reinforcement learning task to improve fluency. Despite all these efforts, its overall in-domain performance still lags behind methods that learn edit operation matric (Liu et al., 2020). To better enhance pre-trained language models for multi-turn response selection in retrieval-based chatbots. A model named Speaker-Aware BERT (SA-BERT) (Gu et al., 2020) proposed to make the model aware of the speaker's changed information, which is an essential and intrinsic property of multiturn dialogues. Although RAST has a different learning paradigm from works that learn edit matrix, it still tries to learn the edit operation and corresponding location by sequence tagging. As mentioned before, our method is sequence tagging-based but takes an entirely new paradigm that would not learn edit operations. Besides, inspired by SA-BERT (Gu et al., 2020), we introduce speaker embedding to this task. Finally, we introduce two simple sequence labeling tasks to model this problem jointly. ## 3 Methodology 3.1 Task Definition Here we give the formal definition of how we model the IUR problem with the SGT approach. Taking all history utterances H = (U1, U2*, ..., U*n) as input, SGT aims to learn a function to rewrite Un to R: f(H) → R. R is the target rewritten utterance in the infer stage. In particular, Un is the last utterance of all history utterances and the utterance that needs to be rewritten in the IUR task. R is the reference rewritten utterance Uref in the training phase and the target rewritten utterance in the inference phase. ## 3.2 Model Architecture Figure 2 shows the overall architecture of our model. Contextual Encoder Since pre-trained language models have been proven to be effective in many NLP tasks, our experiment employs BERT (Devlin et al., 2019) to be encoder. For a fair comparison, we take the same BERT-base encoder as the previous sota work (e.g., RUN, RAU, RUN++) to represent each input. Concretely, given input token list H = (x1, x2, · · · , xM) which concatenated by all utterances of dialogue history and inserted a special ![3_image_0.png](3_image_0.png) token [SEP] between each utterance for separate utterances in different turns. The BERT encoder is firstly adopted to represent the input with contextualized embeddings and the calculation of this part is defined as: $$\mathbf{\tau})=B E R T\left(H\right)$$ $\uparrow$ . E = (e1, *· · ·* , eM) = *BERT* (H) (1) Speaker Aware Embedding To distinguish utterances between different speakers, our approach stitches a one-dimensional one-hot vector at the hidden dimension with the output representation of the BERT encoder. This design is based on two considerations. On the one hand, most of the dialogue in the dataset is back-and-forth conversations between only two people. On the other hand, adding speaker embedding at the input layer and performing domain adaptation like SA-BERT will make the encoder different from the BERT-based model, which would contradict the fair comparison conditions we assumed earlier in paragraph 3.2. 
The calculation of this part is defined as follows:: $$E A=C o n c a t\left(D r o p o u t\left(E\right),S A\right)\quad\quad\left(2\right)$$ $\pi M\times768\cdot\frac{1}{2}$ In the above equation, E ∈ RM×768 is the output representation from the contextual encoder. SA ∈ RM×S denotes the speaker-aware embedding. We concatenate E and SA alongside its hidden dimension to get EA ∈ RM×(768+S). Sequential Greedy Tagging Our main task is sequential greedy tagging, this can be generally $$\mathbb{I}\;\mathrm{d}S_{\cdot}^{\cdot}$$ defined as: $$P_{s g t}=f(H)$$ $$(1)$$ Specifically, H = (x1, x2, · · · , xM) is the input token list that concatenated by the dialogue's history utterances. The model learns a mapping function f to predict from H to the token-level sequence labeling matrix Psgt ∈ RMXN , where M is the token number of sequence H, and N is the number of tag types. The objective function is defined as: $$L_{s g t}=\frac{1}{M\times N}\sum_{i=0}^{M\times N}C E\left(P_{s g t}^{i},Y_{s g t}^{i}\right)\quad\mathrm{(4)}$$ $$\left({\mathcal{I}}{\mathcal{I}}\right)$$ Where Y i sgt is the target type of the i-th sample at the token level. CE is the notation of cross-entropy loss which is the same for both equations 5 and 6. GLCS Detection and GLCS Edge Detection To better lock in the span of target GLCS needed to make up the rewritten utterance, we introduced multi-task learning. Firstly, as depicted by the red components on the right side of Figure 2, the GLCS Detection module (GD) is a binary classification task to distinguish whether a token should belong to a target GLCS. The module GD outputs Pgd ∈ RMX1. LD is essentially a sequence tagging problem, and the loss function of the GLCS detection is as follows: $$L_{g d}=\frac{1}{M}\sum_{i=0}^{M}C E\left(P_{g d}^{i},Y_{g d}^{i}\right)\qquad\quad(5)$$ Y i gd is the golden mentions label of the i-th sample. P i gd is the predicted mentions label of the i-th sample. Secondly, as depicted by the green components on the right side of Figure 2, the GLCS Edge Detection module (GED) is a binary classification task with a structure similar to GD. Specifically, a target that consists of a single token or only two tokens will be marked throughout as 1; only tokens at its start position and end position will be marked as 1 when more than three tokens, left with the others as 0. The loss function of the GED is as follows: $$L_{g e d}=\frac{1}{M}\sum_{i=0}^{M}C E\left(P_{g e d}^{i},Y_{g e d}^{i}\right)\qquad\mathrm{(6)}$$ Y i ged is the golden mentions label of the i-th sample. P i ged is the predict mentions label of the i-th sample. Final Learning objectives Finally, we combine all tasks and train them simultaneously by taking the summation of all loss functions, and the final loss function is shown below: $$L_{f i n a l}=L_{g d}+L_{g e d}+L_{s g t}$$ ## 3.3 Data Construction The construction of the training data for the SGT task is shown in Figure 3. First, in step S1, we make U (1) ref = Uref , then find the LCS between each history utterance and U 1 ref separately. Also, this LCS needs to satisfy being a prefix of U (1) ref . After step S1, we can get the first GLCS "深圳"(Shenzhen), and we set the label of its corresponding position to "AA." Then, in step S2, we make U (2) ref = (U (2) ref remove the prefix "深圳"(Shenzhen)). Performing the same GLCS search process, we can obtain the second GLCS "冬天就是"(winter is) and set its label as "BBBB." Analogously, we can get the third GLCS "经常阴天下雨"(always cloudy and raining) and set its label as "CCCCCC" at Step S3. 
Finally, the historical utterances are stitched together as the input of the SGT task. The corresponding labels obtained from steps S1, S2, and S3 are used as the labels of the sequence labeling task. Points need to be clarified: (i) **Granularity** The token sequence is char level for Chinese and word level for English and numbers, both in the GLCS ![4_image_0.png](4_image_0.png) $$(7)$$ matching phase of S1, S2, and S3 and in the training phase of data obtained from S4, which is the same as RUN; (ii) **Duplicate Matching** In case of duplicate matches, e.g., if U1 and U2 have the same desired GLCS, the principal is the latter, the better. ## 4 Experiments In this section, we conduct through experiments to demonstrate the superiority of our approach. Datasets We conduct experiments on three public datasets across different domains: Chinese datasets in open-domain dialogues: MULTI (Pan et al., 2019) and REWRITE (Su et al., 2019) , English Task-Oriented Dialogue TASK (Quan et al., 2019). For a fair comparison, We adopt the same data split for these datasets as our baselines. The statistics of these datasets are displayed in Table 2. | MULTI | REWRITE | TASK | | |------------|-----------|---------|---------| | Language | Chinese | Chinese | English | | Train | 194K | 18K | 2.2K | | Dev | 5K | 2K | 0.5K | | Test | 5K | NA | NA | | Avg. C len | 25.5 | 17.7 | 52.6 | | Avg. Q len | 8.6 | 6.5 | 9.4 | | Avg. R len | 12.4 | 10.5 | 11.3 | Baselines To prove the effectiveness of our approach, we take the State-of-the-art models as strong baselines including SRL (Xu et al., 2020), SARG (Huang et al., 2021), PAC (Pan et al., 2019), RAST (Hao et al., 2021), T-Ptr-λ (Su et al., 2019), RUN (Liu et al., 2020) and RUN++ (Wang et al., 2022) . Evaluation We employ credible automatic metrics to evaluate our approach. As in literature (Pan et al., 2019), we examine SGT using the widely used automatic metrics BLEU, ROUGE, EM and Restoration F-score. (i) **BLEU**n (Bn) evaluates how similar the rewritten utterances are to the golden ones via the cumulative n-gram BLEU score (Papineni et al., 2002). (ii) **ROUGE**n (Rn) measures the n-gram overlapping between the rewritten utterances and the golden ones, while **ROUGE**L (RL) measures the longest matching sequence between them (Lin, 2004). (iii) EM stands for the exact match accuracy, which is the strictest evaluation metric. (iv) **Restoration Precision**n, Restoration Recalln and **Restoration F-score**n (Pn, Rn, Fn) emphasize more on words from dialogue context which are argued to be harder to copy (Pan et al., 2019). Therefore, they are calculated on the collection of n-grams that contain at least one word from context utterance. As validated by Pan et al. (2019), above automatic metrics are credible indicators to reflect the rewrite quality. Implementation Our implementation was based on PyTorch (Paszke et al., 2019) and fastNLP (Xipeng Qiu, 2018). In practice, we adopt the exact connection words setting with RUN and append the list of connection words to the head of H, as part of it. Considering that only two speakers are in the datasets related to our experiments, we set the hidden_size of SA to 1. For encoding different tagging types, We choose IO encoding, the simplest tag encoding schema, which tags each token as either being in (I-X) a particular type of named entity type X or in no entity (O). Since the distribution of tag types is severely unbalanced (e.g. 
(O) accounts for more than 81% on MULTI), we employed weighted cross-entropy loss and tuned the weight on development sets. We used Adam (Kingma and Ba, 2014) to optimize our model and set the learning rate as 2e-5. We set the dropout rate as 0.3 for the dropout operation on the equation 2. For a fair comparison, the BERT used in our model is BERT-base which is the same as our baselines. ## 4.1 Model Comparison Table 3, Table 4, and Table 5 show the experimental results of our approach and baselines on MULTI and REWRITE. As shown, our approach greatly surpasses all baselines on practically all restoration scores significantly. Taking MULTI as an example, our approach exceeds the best baseline RUN++(PCL) on restoration score by a significant margin, reaching a new state-of-theart performance on almost all restoration metrics. Our approach improves the previous best model by 9.79 points and 9.89 points on restoration F3 and F2, respectively. Furthermore, our approach reaches comparable performance on other auto metrics. As demonstrated by the result of REWRITE, our approach achieves comparable performance Model P1 R1 F1 P2 R2 F2 P3 R3 F3 B1 B2 R1 R2 SRL NA NA NA NA NA NA NA NA NA 85.8 82.9 89.6 83.1 T-Ptr-λ (n_beam=5) NA NA 51.0 NA NA 40.4 NA NA 33.3 90.3 87.7 90.1 83.0 PAC (n_beam=5) 70.5 58.1 63.7 55.4 45.1 49.7 45.2 36.6 40.4 89.9 86.3 91.6 82.8 SARG (n_beam=5) NA NA 62.3 NA NA 52.5 NA NA 46.4 91.4 88.9 91.9 85.7 RAST NA NA NA NA NA NA NA NA NA 89.7 88.9 90.9 84.0 RUN 73.2 64.6 68.8 59.5 53.0 56.0 50.7 45.1 47.7 **92.3 89.6** 92.4 85.1 RUN++(PCL) NA NA 71.1 NA NA 59.1 NA NA 51.1 92.1 89.4 92.6 **86.2** SGT(Ours) **75.0 67.5 71.1 73.1 65.3 69.0 64.7 57.5 60.9** 92.1 89.0 **92.7** 85.3 Table 3: Reuslts on MULTI. All models except T-Ptr-λ are initalized from pretrained Bert-base-Chinese model. All results are extracted from the original papers. The final line is the result of our complete model. A bolded **number** in a column indicates a sota result against all the other approach, whereas underline numbers show comparable performances. Both are same for Table 4&5. . Model F1 F2 F3 **EM B**1 B2 B4 R1 R2 RL SRL NA NA NA 60.5 89.7 86.8 77.8 91.8 85.9 90.5 RAST NA NA NA 63.0 89.2 88.8 86.9 93.5 88.2 90.7 RUN 89.3 81.9 76.5 67.7 93.5 91.1 86.1 95.3 90.4 94.3 RUN++(PCL) 89.8 83.2 78.2 **69.0** 93.7 91.5 **87.0** 95.6 **91.0 94.6** SGT(Ours) **91.0 89.8 85.1** 67.4 **94.9 92.2** 86.8 **96.4** 90.8 93.8 Table 4: Results on REWRITE. All models are initialized from pretrained Bert-base-Chinese model. All baseline results are extracted from the RUN++ (Wang et al., 2022). The final line is the result of our complete model. Model **EM B**4 F1 Ellipsis Recovery † 50.4 74.1 44.1 GECOR 1 † 68.5 83.9 66.1 GECOR 2 † 66.2 83.0 66.2 RUN 70.6 86.1 68.3 SGT(Ours) **71.1 86.7 85.0** on the B4, R2, and RL scores and a new state-ofthe-art performance on B1 and R1 scores. Even for the most strict metric EM on REWRITE, our approach reached comparable performance with RUN, demonstrating the comprehensive ability of our model. Besides, our approach achieves better results against all baselines on TASK, as depicted in Table 5. Specifically, we achieve state-of-the-art performance on the EM score and exceed the previous best model by 16.7 points on the restoration F1 score. Finally, the combined performance of our model on the three datasets above demonstrates that our model can perform well on datasets with ## Varied Languages And Tasks. 
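The restoration scores in Tables 3–5 are computed only over n-grams that contain at least one word copied from the dialogue context. The sketch below approximates this computation; tokenization and counting details follow our reading of Pan et al. (2019) and may differ from the official scorer.

```python
# Simplified sketch of restoration precision/recall/F over n-grams that
# contain at least one word absent from the incomplete utterance (i.e.,
# a word that had to be restored from the context).
from collections import Counter

def restoration_f(pred, gold, incomplete, n=1):
    def ngrams(tokens):
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        # keep only n-grams containing at least one restored word
        return Counter(g for g in grams if any(w not in incomplete for w in g))

    p_grams, g_grams = ngrams(pred), ngrams(gold)
    overlap = sum((p_grams & g_grams).values())      # multiset intersection
    precision = overlap / max(sum(p_grams.values()), 1)
    recall = overlap / max(sum(g_grams.values()), 1)
    f = 2 * precision * recall / max(precision + recall, 1e-8)
    return precision, recall, f
```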
4.2 Closer Analysis We conduct a series of experiments to analyze our model thoroughly. First, we conduct a detailed ablation study to validate the efficacy of the components in our model. Then, in the same run-time setting, we compare the inference speed of our model to that of representative baselines. Ablation Study By analyzing table 6, we can find that "w/o sa","w/o gd" or "w/o ged" basically hurts the effect of the model, and these can initially corroborate that each of these modules is beneficial to our model. Meanwhile, we can find that "w/o gd+ged" significantly reduces R3, indicating that these two subtasks are very helpful for discovering the potential target GLCS. Further, we find that although removing "sa" alone has little effect on the restoration score, comparing the results of removing "gd+ged" and removing "sa+gd+ged" reveals that the fit with the missing speaker-aware information significantly reduces the restoration score. The F3 decreases from 74.7 to 71.8, which indicates that the information of different speakers or rounds is | Variant | P3 | R3 | F3 | B1 | B2 | RL | |---------------------|------|------|------|------|------|------| | SGT | 81.9 | 71.7 | 76.5 | 94.5 | 91.4 | 94.5 | | SGT w/o (sa) | 80.8 | 71.3 | 75.8 | 94.1 | 91.2 | 94.5 | | SGT w/o (gd) | 80.9 | 71.6 | 76.0 | 94.1 | 90.9 | 94.4 | | SGT w/o (ged) | 81.5 | 70.7 | 75.8 | 94.0 | 91.2 | 94.4 | | SGT w/o (gd+ged) | 82.0 | 68.5 | 74.7 | 93.6 | 90.6 | 94.1 | | SGT w/o (sa+gd+ged) | 79.4 | 65.5 | 71.8 | 93.6 | 90.3 | 93.8 | | RUN | 70.7 | 45.7 | 55.5 | 91.5 | 89.4 | 93.7 | crucial to extract the target GLCS correctly, and combining "sa" embedding and "gd+ged" subtasks can significantly improve the model's ability to obtain the target GLCS fragments from the context. Finally, we find that even though the absence of the three critical components "sa+gd+gd" leads to an overall decrease in model performance, our model still achieves a better restoration score than the RUN model, which further validates the effectiveness of our sequential greedy tagging learning strategy for modeling and solving UIR problems. Inference Speed As shown in Table 7, both SGT and RUN significantly outperform traditional generation algorithms regarding inference speed and B4 score. At the same time, the most time-consuming computation of SGT in the inference phase, except for the BERT encoder, is only one layer of a linear transformation, which dramatically saves the inference time compared with RUN, which has Unet (Ronneberger et al., 2015) structures after the context encoder. Therefore, we can see that the inference time of SGT is significantly less than that of RUN. The latency of a single rewriting task is reduced by 20ms, while the B4 score slightly better. | Model | B4 | ∆B4 | Latency | Speedup | |-----------|------|-------|-----------|-----------| | L-Gen | 73.6 | 0.0 | 82 ms | 1.00 × | | L-Ptr-Gen | 75.4 | +1.8 | 110 ms | 0.75 × | | T-Gen | 62.5 | -11.1 | 322 ms | 0.25 × | | T-Ptr-Gen | 77.6 | +4.0 | 415 ms | 0.20 × | | RUN | 86.2 | +12.6 | 71 ms | 1.15 × | | SGT | 86.8 | +13.2 | 51 ms | 1.60 × | ## 5 Conclusion In this paper, we convert the IUR problem into a simple sequence tagging task, SGT. The simplicity and effectiveness of the modeling paradigm not only improve the inference speed and allow the pretrained BERT encoder to fully exploit its widely validated information extraction ability which can significantly improve the restoration score and ensure that other metrics are competitive. 
We also introduced speaker-aware embedding to explicitly model speaker changes and verified that it has some improvement effect on the IUR task. In the future, we will explore the following directions: 1. Adopt the GD task in this paper to extract essential fragments and then pick the best permutation of fragments with a language model or using a PAC-like pointer network for fragment integration to get rid of the problem of category imbalance is caused by representing the order with tag lists. 2. Combining SGT's efficient fragment extraction paradigm with generation. ## Limitations Although our model has made some progress, it still has some limitations. First of all, SGT uses the tag type to represent the connection order of GLCS fragments when forming a complete utterance, and the average statistics on the three datasets we used show that more than 99% of the complete utterance can be composed with less than three GLCS fragments. That will lead to situations that need to combine multiple GLCSs (e.g., more than 3) to form a complete utterance, which cannot be fully trained or fall into unbalanced tag categories. Second, like other tagging-based models, the fragments that make up the complete utterance must exist in history utterances or connection words, which does not work well for situations where it is necessary to combine context information and introduce new words to express their complete utterance. ## Ethics Statement We guarantee that the approach in our research is original and that the experimental results of the SGT model are reproducible. All the experimental data of the baseline model can be reproduced from the relevant open-source code or found in the cited paper. Finally, The list of authors of the submissions does not include any individuals who did not contribute substantially to work submitted. ## Acknowledgements First, we thank all the anonymous reviewers for their valuable comments. Moreover, we are grateful for all the previous work related to exploring the IUR task, which has inspired us a lot. ## References Jaime G Carbonell. 1983. Discourse pragmatics and ellipsis resolution in task-oriented natural language interfaces. In 21st Annual Meeting of the Association for Computational Linguistics, pages 164–168. Poulami Debnath, Shubhashis Sengupta, and Harshawardhan M Wabgaonkar. 2018. Identifying, classifying and resolving non-sentential utterances in customer support systems. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Ahmed Elgohary, Denis Peskov, and Jordan BoydGraber. 2019. Can you unpack that? learning to rewrite questions-in-context. Can You Unpack That? Learning to Rewrite Questions-in-Context. Raquel Fernández, Jonathan Ginzburg, and Shalom Lappin. 2005. Using machine learning for non-sentential utterance classification. In *Proceedings of the 6th* SIGdial Workshop on Discourse and Dialogue, pages 77–86. Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, and Xiaodan Zhu. 2020. Speaker-aware bert for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 2041–2044. 
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequenceto-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631– 1640, Berlin, Germany. Association for Computational Linguistics. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. *arXiv preprint arXiv:1603.08148*. Jie Hao, Linfeng Song, Liwei Wang, Kun Xu, Zhaopeng Tu, and Dong Yu. 2021. RAST: Domain-robust dialogue rewriting as sequence tagging. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4913–4924, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Johann Hauswald, Michael A Laurenzano, Yunqi Zhang, Cheng Li, Austin Rovinski, Arjun Khurana, Ronald G Dreslinski, Trevor Mudge, Vinicius Petrucci, Lingjia Tang, et al. 2015. Sirius: An open end-to-end voice and vision personal assistant and its implications for future warehouse scale computers. In *Proceedings of the Twentieth International Conference on Architectural Support for Programming* Languages and Operating Systems, pages 223–238. Mengzuo Huang, Feng Li, Wuhe Zou, Hongbo Zhang, and Weidong Zhang. 2021. Sarg: A novel semi autoregressive generator for multi-turn incomplete utterance restoration. In *AAAI*. Lisa Jin, Linfeng Song, Lifeng Jin, Dong Yu, and Daniel Gildea. 2022. Hierarchical context tagging for utterance rewriting. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. *International* Conference on Learning Representations. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Qian Liu, Bei Chen, Jian-Guang Lou, Bin Zhou, and Dongmei Zhang. 2020. Incomplete utterance rewriting as semantic segmentation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2846–2857, Online. Association for Computational Linguistics. Zhufeng Pan, Kun Bai, Yan Wang, Lianqiang Zhou, and Xiaojiang Liu. 2019. Improving open-domain dialogue systems via multi-turn incomplete utterance restoration. In *Proceedings of the 2019 Conference* on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1824–1833, Hong Kong, China. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. Jun Quan, Deyi Xiong, Bonnie Webber, and Changjian Hu. 2019. GECOR: An end-to-end generative ellipsis and co-reference resolution model for taskoriented dialogue. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4547–4557, Hong Kong, China. Association for Computational Linguistics. O. Ronneberger, P.Fischer, and T. Brox. 2015. Unet: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), volume 9351 of *LNCS*, pages 234–241. Springer. (available on arXiv:1505.04597 [cs.CV]). Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In *Proceedings of the 55th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, and Jie Zhou. 2019. Improving multi-turn dialogue modelling with utterance ReWriter. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 22–31, Florence, Italy. Association for Computational Linguistics. Zhihao Wang, Tangjian Duan, Zihao Wang, Minghui Yang, Zujie Wen, and Yongliang Wang. 2022. Utterance rewriting with contrastive learning in multi-turn dialogue. *arXiv preprint arXiv:2203.11587*. Fudan NLP Xipeng Qiu. 2018. fastnlp, a lightweight framework for natural language processing (nlp). https://github.com/fastnlp/fastNLP. Kun Xu, Haochen Tan, Linfeng Song, Han Wu, Haisong Zhang, Linqi Song, and Dong Yu. 2020. Semantic Role Labeling Guided Multi-turn Dialogue ReWriter. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6632–6639, Online. Association for Computational Linguistics. Yong Zhang, Zhitao Li, Jianzong Wang, Ning Cheng, and Jing Xiao. 2022. Self-attention for incomplete utterance rewriting. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8047–8051. IEEE. Kun Zhou, Kai Zhang, Yu Wu, Shujie Liu, and Jingsong Yu. 2019. Unsupervised context rewriting for open domain conversation. In *Proceedings of the* 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1834–1844, Hong Kong, China. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 8 ✓ B1. Did you cite the creators of artifacts you used? 8 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4 ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
jiang-riloff-2023-exploiting
Exploiting Commonsense Knowledge about Objects for Visual Activity Recognition
https://aclanthology.org/2023.findings-acl.457
Situation recognition is the task of recognizing the activity depicted in an image, including the people and objects involved. Previous models for this task typically train a classifier to identify the activity using a backbone image feature extractor. We propose that commonsense knowledge about the objects depicted in an image can also be a valuable source of information for activity identification. Previous NLP research has argued that knowledge about the prototypical functions of physical objects is important for language understanding, and NLP techniques have been developed to acquire this knowledge. Our work investigates whether this prototypical function knowledge can also be beneficial for visual situation recognition. We build a framework that incorporates this type of commonsense knowledge in a transformer-based model that is trained to predict the action verb for situation recognition. Our experimental results show that adding prototypical function knowledge about physical objects does improve performance for the visual activity recognition task.
# Exploiting Commonsense Knowledge About Objects For Visual Activity Recognition Tianyu Jiang and **Ellen Riloff** Kahlert School of Computing University of Utah Salt Lake City, UT 84112 {tianyu, riloff}@cs.utah.edu ## Abstract Situation recognition is the task of recognizing the activity depicted in an image, including the people and objects involved. Previous models for this task typically train a classifier to identify the activity using a backbone image feature extractor. We propose that commonsense knowledge about the objects depicted in an image can also be a valuable source of information for activity identification. Previous NLP research has argued that knowledge about the prototypical functions of physical objects is important for language understanding, and NLP techniques have been developed to acquire this knowledge. Our work investigates whether this prototypical function knowledge can also be beneficial for visual situation recognition. We build a framework that incorporates this type of commonsense knowledge in a transformer-based model that is trained to predict the action verb for situation recognition. Our experimental results show that adding prototypical function knowledge about physical objects does improve performance for the visual activity recognition task. ## 1 Introduction Physical objects play an important role in our daily lives. People use different tools to achieve different goals in all kinds of situations. For example, we use a toothbrush to clean our teeth, a microwave oven to heat food, and a camera to take photos. The functions of physical objects is a type of commonsense knowledge that has been recognized to play an important role in natural language processing (Burstein, 1979; Jiang and Riloff, 2021). Physical objects play an important role in computer vision as well. There are well-established computer vision tasks that aim to identify the objects in an image, such as object detection (Lin et al., 2014) and image classification (Deng et al., 2009; Krizhevsky, 2009). Recently, attention has been paid to more comprehensive image under- ![0_image_0.png](0_image_0.png) Figure 1: *Situation Recognition* involves predicting activities with semantic role/value pairs. standing, such as identifying the salient event depicted in an image as well as relevant people and objects. **Situation recognition** (Yatskar et al., 2016) is the task of producing a structured summary of an image that describes the main activity and the entities that fill semantic roles for that activity. The task was originally defined using frame structures from FrameNet (Baker et al., 1998; Ruppenhofer et al., 2016) as the activity representation. For example, given the image shown in Figure 1, a system should identify a *baking* event (which is indexed in FrameNet as a type of *Cooking_creation* activity), and recognize the corresponding semantic role/value pairs associated with FrameNet's *Cooking_creation* frame. Models for this task usually follow a two-step pipeline: (1) predict a verb that describes the activity depicted in the image, and (2) identify the entities associated with each semantic role. Previous systems have relied solely on features extracted from the image and have not yet exploited any external commonsense knowledge. Our work focuses on the activity recognition (verb prediction) part of the situation recognition task. 
We hypothesize that (a) correctly identifying the activity in an image strongly depends on recognizing the objects that appear in the image, and (b) explicit commonsense knowledge about physical objects can also be beneficial. More specifically, our work is motivated by recent research emphasizing the importance of commonsense knowledge 7277 about the prototypical functions of physical objects for language understanding (Jiang and Riloff, 2021, 2022). An intuitive extension to visual reasoning is that if an object appears in an image, especially when it is used by a person, the activity depicted in the image is likely to be the prototypical function associated with the object. For example, a woman holding a comb is probably brushing her hair, and a man holding a cookie sheet (as shown in Figure 1) is probably baking. We explore these hypotheses by creating a transformer-based model that incorporates commonsense knowledge about the prototypical functions of physical objects for visual activity recognition. Our experimental results confirm that correctly identifying the objects in an image is very important for activity recognition, and we show that providing explicit knowledge about the prototypical functions of objects can improve performance for this task. ## 2 Related Work Commonsense knowledge about physical objects has long been recognized to be important for natural language understanding (Burstein, 1979). Within the NLP community, a variety of recent projects have focused on acquiring and using different types of knowledge about physical objects, including relative physical knowledge (Forbes and Choi, 2017), relative spatial relations (Collell et al., 2018), semantic plausibility (Wang et al., 2018), object affordances (Persiani and Hellström, 2019), and object usage status (Jiang and Riloff, 2022). The work most relevant to our research is Jiang and Riloff (2021), which developed a NLP method to learn the most typical way that people use humanmade physical artifacts. They used FrameNet frames as a representation for object functions and they created a dataset of physical objects paired with their prototypical function frames to evaluate their results. Our research incorporates their prototypical function data into a transformer-based model for visual activity recognition. Visual reasoning tasks, such as visual question answering (Antol et al., 2015) and image captioning (Young et al., 2014), have been widely explored for understanding images and videos. Previous work has proposed to use external knowledge for visual tasks, such as image classification (Marino et al., 2017), object detection (Singh et al., 2018), and visual question answering (Wu et al., 2016). Situation recognition is a task of recognizing the activity depicted in an image, including the people and objects involved in the activity and the roles these participants play. Yatskar et al. (2016) introduced the **imSitu** dataset, which associates images with a verb that describes the main action, and a set of semantic roles derived from FrameNet (Ruppenhofer et al., 2016). They tackled this problem by first applying the VGG network (Simonyan and Zisserman, 2014) to extract features from the image and then building a CRF model to jointly predict the verb and semantic roles. Several research efforts have further explored this task. Suhail and Sigal (2019) used a graph neural network to capture the relations between semantic roles. Pratt et al. (2020) used a LSTM to jointly classify verbs and semantic roles. Cooray et al. 
(2020) cast situation recognition as a query-based visual reasoning problem and further handled inter-dependencies between queries to overcome the sparsity issues of semantic roles. Recently, Cho et al. (2022) proposed a collaborative framework using two transformer modules, and Li et al. (2022) used contrastive learning to distinguish the correct activities from negative examples. All of these prior efforts have relied solely on features extracted directly from the image. Our work aims to show that explicitly providing commonsense knowledge about objects can also be beneficial for visual activity recognition. ## 3 Methods Given an image, the visual activity recognition task predicts a verb that describes the main activity in the image. Figure 2 shows the framework of our model called ARF (Activity Recognition with Functions), which takes 3 sources of input: 1) the image, 2) nouns corresponding to the objects in the image, and 3) the names of FrameNet frames that describe the prototypical functions of the objects. We use the CLIP (Radford et al., 2021) model, which has been pre-trained on both images and text, to generate an encoding for each of the 3 types of input. Finally, we give the concatenated representation vectors as input to a transformer model that is trained to predict a verb for activity recognition. ## 3.1 Notation The task can be denoted as given the ith image Ii (i = 1..n), the system should predict the correct activity verb v∗ i . The score for the jth candidate ![2_image_0.png](2_image_0.png) verb being the activity for image Iiis defined as: $$\Pr(v_{i}^{j}|I_{i})={\frac{\exp(g(I_{i},v_{i}^{j}))}{\sum_{k=1}^{m}\exp(g(I_{i},v_{i}^{k}))}}\qquad(1)$$ where g(·) is a function produced by our model for scoring the assignment of a verb to the image, and m is the total number of candidate verbs. We use negative log likelihood as our loss function: $${\mathcal{L}}=-\sum_{i=1}^{n}\log\operatorname*{Pr}(v_{i}^{*}|I_{i})$$ $${\mathrm{(2)}}$$ ## 3.2 Object Recognition Ideally, we would use an Object Detector to identify the objects in an image for our experiments. However, the object detectors that are most readily available use categories that do not cover the range of object types that we need. For example, object detection datasets often contain a number of animate objects such as people and animals. As an alternative, we turned to image captioning systems. For our first set of experiments, we used a state-ofthe-art image captioning model called OFA (Wang et al., 2022) to generate 10 different sentences that describe the image. We set beam size 10 and diversity 10. We then extracted the nouns from these sentences to create a set of words that (hopefully) include the objects. However, even though the image captioning system often generated reasonable captions, the most relevant objects were frequently omitted from the caption, or misidentified.1 Since the goal of our research is to determine whether *adding* explicit knowledge about an object improves performance, 1One likely reason is that the images are in low resolution and many objects are small, such as a pencil. we cannot truly assess the value of such knowledge when we do not know what objects appear in the image. Developing better methods to identify specific objects in an image is an important direction for future research in computer vision. For now, we continued our investigation by performing additional experiments with the gold nouns in the imSitu dataset. 
These experiments essentially evaluate the impact of adding object knowledge when the objects have been perfectly identified by an oracle. ## 3.3 Prototypical Function Knowledge We obtained the knowledge of what an object is typically used for from a dataset2created by (Jiang and Riloff, 2021). Their data contains a list of physical objects represented as WordNet synsets (Miller, 1995), and each object is paired with a humanannotated frame from FrameNet that represents its prototypical function. For example, *knife* is paired with the *Cutting* frame. For each object in an image, we aim to use its function frame to help with activity identification. However, Jiang and Riloff (2021) and imSitu (Yatskar et al., 2016) used different subsets of frames from FrameNet. We felt that it made sense to align them, so we used the inter-frame relations provided by FrameNet to map the prototypical function frames to imSitu's frames. For each function frame, we create a mapping to all of the imSitu frames that are within one hop via any frame relation. Finally, we associate each object with its corresponding imSitu frames. ## 3.4 Activity Recognition Model We use CLIP ViT-B/32 (Radford et al., 2021) as the backbone model to encode the image and text. For each example, we first apply CLIP's image encoder to produce an image feature vector. Then we use CLIP's text encoder to generate an embedding for each object (noun) and average the object vectors. For each object, we also collect its prototypical function frames and use CLIP's text encoder again to generate embeddings for each frame's name, then average those vectors. If there is no object, or no associated frame, then we encode an empty string. Next, we build a transformer model consisting of 6 encoding layers and a classification layer on 2https://github.com/tyjiangU/physical_ artifacts_function | Model | Dev Acc | Test Acc | |-------------------------|-----------|------------| | Yatskar et al. (2016) | 32.3 | 32.3 | | Cooray et al. (2020) | 38.0 | 38.2 | | Pratt et al. (2020) | 39.6 | 39.9 | | Suhail and Sigal (2019) | 43.2 | 43.3 | | Cho et al. (2022) | 44.4 | 44.7 | | Li et al. (2022) | - | 45.6 | | ARF | 46.2 | 46.4 | | ARF+nounsC | 46.6 | 46.5 | | ARF+nounsC+func | 46.9 | 47.2 | | ARF+nounsG | 69.2 | 69.5 | | ARF+nounsG+func | 72.0 | 71.9 | Table 1: Experimental results. top. As input, the model takes the concatenation of all 3 vectors (corresponding to image, objects and functions). The classifier then selects the most probable action verb from all 504 candidate verbs used in the imSitu dataset. ## 4 Evaluation The imSitu data contains 126,102 images, with manually annotated activity verbs and frame structures. We follow the same data split (train 75,702, development 25,200, test 25,200) as Yatskar et al. (2016). We report verb prediction accuracy on both the development and test sets. When fine-tuning the transformer, we use batch size 32, hidden vector dimension 512, AdamW optimizer with learning rate 1e-4 and train for 10 epochs. ## 4.1 Experimental Results Table 1 compares our model with six previous methods described in Section 2. The ARF row shows the performance of our basic model using only image input. Our model performs a little better than previous systems, probably due to the CLIP model which is quite good. Also, the other models are trained for the full situation recognition task, whereas our model is trained solely for the verb prediction task. 
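For concreteness, the following is a minimal PyTorch sketch of the ARF architecture described in Section 3.4: three CLIP ViT-B/32 feature vectors (image, averaged object nouns, averaged function-frame names) are fed to a 6-layer transformer encoder with a verb classifier on top. Treating the three vectors as a length-3 token sequence and mean-pooling before classification are our assumptions; the paper only states that the concatenated vectors are given as input to the transformer.

```python
import torch
import torch.nn as nn

class ARF(nn.Module):
    """Activity Recognition with Functions: verb classifier over the 504 imSitu verbs."""
    def __init__(self, feat_dim=512, num_verbs=504, num_layers=6, nhead=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(feat_dim, num_verbs)

    def forward(self, img_feat, noun_feat, func_feat):
        # Each input is a (batch, feat_dim) CLIP embedding: the image, the average over
        # object-noun embeddings, and the average over function-frame name embeddings
        # (the embedding of an empty string when no nouns or frames are available).
        tokens = torch.stack([img_feat, noun_feat, func_feat], dim=1)   # (batch, 3, feat_dim)
        hidden = self.encoder(tokens)
        pooled = hidden.mean(dim=1)          # pooling choice is an assumption
        return self.classifier(pooled)       # g(I, v) scores for every candidate verb

# Training with Eq. (2) amounts to cross-entropy over the verb scores:
#   loss = nn.CrossEntropyLoss()(ARF()(img_feat, noun_feat, func_feat), gold_verb_ids)
```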
The next two rows show results when adding embeddings for the nouns extracted from the captioning system (nounsC) and when using the nouns as well as their function frames (nounsC+func). The nouns alone produce just a tiny improvement, but adding the function frames improves a bit more. We believe that these results are primarily due to the limitations of the captioning system. ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) ![3_image_2.png](3_image_2.png) The last two rows in Table 1 show the performance when using the gold nouns (nounsG) and when using the gold nouns plus their associated function frames (nounsG+func). These results show a huge performance boost simply from correctly identifying all the objects in the image. And providing the external knowledge about their prototypical functions further improves performance. In the next section, we try to better understand the role that objects play. ## 4.2 Analysis Figure 3 shows some examples of how the functions of objects in the image can help identify the main activity. Consider subfigure (a), we see a hand-held **spoon** in front of the baby's mouth; the baby is expressing their like or dislike by making a grimace; there is some green substance (presumably food) both on the face and spoon. We don't see a series of continuous actions, yet we know it is a feeding event because of our commonsense knowledge. Similarly for the other images in Figure 3, from the shields, we can infer *Protecting*; looking at the canoe, we know it is *Motion*; and the knife is a good indicator for *Cutting*. Images with and without Objects However, not all images contain "salient" physical objects. For example, imagine a picture showing a man running on a trail. The man is wearing clothes, which usually does not help with identifying the running activity (people generally wear clothes). In order to tease apart the images with and without salient ob- | Model | w/ Func | w/o Func | |-----------------|-----------|------------| | ARF | 46.0 | 46.4 | | ARF+nounsG | 70.4 | 68.5 | | ARF+nounsG+func | 75.1 | 70.4 | jects, we divided the dev set into two subsets: one set (*w/ Func*) contains 8,957 images where at least one gold noun is associated with a function frame, and the other set (*w/o Func*) contains 16,243 images for which no nouns map to any frames. Since the gold annotations only provide semantic role values that are associated with the main activity, it is safe to assume that the *w/ Func* set of images would contain salient objects. Table 2 compares the performance of our systems on each subset of data. We see that performance is nearly identical when only using image features. Adding the gold nouns produces a big performance gain for both groups, although it benefits the *w/ Func* subset a little more. When the function frame knowledge is introduced, we see more separation: the images that depict physical objects associated with functions benefit more from having external knowledge about functions. This result confirms that the knowledge is beneficial in the expected way. Which Semantic Categories Matter? The performance gap between ARF+nounsG and ARF is substantial, and we were curious to understand what types of nouns contributed the most. So we conducted another set of experiments on the dev set to identify certain types of semantic roles. There are 190 different semantic roles in the data, but we are primarily interested in understanding the importance of physical objects. 
So we coarsely grouped the semantic roles into 3 categories roughly corresponding to People, *Locations* and *Objects*. To keep things manageable, we identified the 16 most frequent semantic roles that appear in at least 2,000 images and manually assigned them to the 3 categories. The *People* category includes agent, agentpart, *victim*, and *coagent*. The Locations category contains *place* and *destination*. The *Objects* category contains tool, item, *substance*, object, *container*, and *vehicle*. We disregarded a few semantic roles that are highly ambiguous (e.g., source can be both a location and object). Table 3 shows our experimental results. Each experiment collected all images containing at least | People | Locations | Objects | | |---------------|-------------|-----------|------| | with Nouns | 69.3 | 69.2 | 72.2 | | without Nouns | 61.4 | 64.4 | 37.2 | Table 3: Performance with and without the nouns for specific semantic roles. one instance of a relevant semantic role and then evaluated performance on those images both with and without the gold annotated nouns. For example, the *Objects* column shows that our model achieved 72.2% accuracy on the images that contain at least one object when it was given the nouns. But performance dropped to 37.2% accuracy on those same images without the nouns. In contrast, providing the gold nouns had much less impact on the other sets of images, which contain People or *Locations* but not necessarily *Objects*. Salient Objects Another challenge is how to find the "salient" objects that play important roles in the image, and from which we have a better chance of identifying the main activity. We count the number of physical objects (not in the People or *Locations* semantic category) for all images. We find that nearly 40% of images are annotated with two or more objects. In our ARF model, when there are multiple objects in the image, we simply use the average of each object's embedding, which could potentially be improved by giving more weight to the most salient object. This issue may be even more important when using object detection systems because they may identify more objects (the gold annotation only contains objects that belong to a pre-defined semantic role)! This is an important issue to study in future work. ## 5 Conclusion The prototypical functions of physical objects is a type of commonsense knowledge that is important for NLP. In this work, we showed that it can be a useful source of information for image understanding as well. Specifically, we tackled the situation recognition task by building a transformer model that incorporates the functions of objects to predict the activity in an image. The experiments show that knowledge of the objects and their prototypical functions can improve performance on this task. However, automatically recognizing the objects in an image remains a challenge, and exploiting better object detection methods is an important direction for future work. ## 6 Limitations For image captioning, we used the pre-trained OFA model for zero-shot inference. We did not explore every state-of-the-art model or fine-tune OFA specifically on the imSitu dataset. Other image captioning systems could yield better results. The gap between automatic object recognition and using gold nouns confirms that correctly identifying the objects in an image is very important for activity recognition. Also, we are not certain that mapping the Jiang and Riloff (2021) function frames to the imSitu frames is strictly necessary. 
## Acknowledgements We thank the Utah NLP group for their constructive comments. We also thank the anonymous ACL reviewers for their valuable suggestions and feedback. ## References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision (ICCV 2015). Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (ACL-COLING 1998). Mark H. Burstein. 1979. The use of object-specific knowledge in natural language processing. In Proceeding of the 17th annual meeting on Association for Computational Linguistics (ACL 1979). Junhyeong Cho, Youngseok Yoon, and Suha Kwak. 2022. Collaborative transformers for grounded situation recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022). Guillem Collell, Luc Van Gool, and Marie-Francine Moens. 2018. Acquiring common sense spatial knowledge through implicit spatial templates. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018). Thilini Cooray, Ngai-Man Cheung, and Wei Lu. 2020. Attention-based context aware reasoning for situation recognition. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (CVPR 2020). Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009). Maxwell Forbes and Yejin Choi. 2017. Verb physics: Relative physical knowledge of actions and objects. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017)*. Tianyu Jiang and Ellen Riloff. 2021. Learning prototypical functions for physical artifacts. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL 2021). Tianyu Jiang and Ellen Riloff. 2022. Identifying physical object use in sentences. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022). Alex Krizhevsky. 2009. Learning multiple layers of features from tiny images. Technical report, University of Toronto. Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, and Shih-Fu Chang. 2022. Clip-event: Connecting text and images with event structures. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022). Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In *European conference on computer vision (ECCV 2014)*. Kenneth Marino, Ruslan Salakhutdinov, and Abhinav Gupta. 2017. The more you know: Using knowledge graphs for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). George A. Miller. 1995. Wordnet: a lexical database for english. *Communications of the ACM*, 38(11):39–41. Michele Persiani and Thomas Hellström. 2019. Unsupervised inference of object affordance from text corpora. 
In *Proceedings of the 22nd Nordic Conference on Computational Linguistics*. Sarah Pratt, Mark Yatskar, Luca Weihs, Ali Farhadi, and Aniruddha Kembhavi. 2020. Grounded situation recognition. In *European Conference on Computer* Vision (ECCV 2020). Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning (ICML 2021)*. Josef Ruppenhofer, Michael Ellsworth, Myriam R. L. Petruck, Christopher R. Johnson, Collin F. Baker, and Jan Scheffczyk. 2016. FrameNet II: Extended theory and practice. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. *arXiv preprint arXiv:1409.1556*. Krishna Kumar Singh, Santosh Divvala, Ali Farhadi, and Yong Jae Lee. 2018. Dock: Detecting objects by transferring common-sense knowledge. In Proceedings of the European Conference on Computer Vision (ECCV 2018). Mohammed Suhail and Leonid Sigal. 2019. Mixturekernel graph attention network for situation recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV 2019). Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In International Conference on Machine Learning (ICML 2022). Su Wang, Greg Durrett, and Katrin Erk. 2018. Modeling semantic plausibility by injecting world knowledge. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2018). Qi Wu, Peng Wang, Chunhua Shen, Anthony Dick, and Anton Van Den Hengel. 2016. Ask me anything: Free-form visual question answering based on knowledge from external sources. In *Proceedings of the* IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016). Mark Yatskar, Luke Zettlemoyer, and Ali Farhadi. 2016. Situation recognition: Visual semantic role labeling for image understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016). Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. *Transactions of the* Association for Computational Linguistics, 2:67–78. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 ✗ A2. Did you discuss any potential risks of your work? I don't see any risk. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3. ✓ B1. Did you cite the creators of artifacts you used? Section 3. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4. ## C ✓ **Did You Run Computational Experiments?** Section 4. ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4. ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
xiao-etal-2023-tucker
Tucker Decomposition with Frequency Attention for Temporal Knowledge Graph Completion
https://aclanthology.org/2023.findings-acl.458
Temporal Knowledge Graph Completion aims to complete missing entities or relations under temporal constraints. Previous tensor decomposition-based models for TKGC only independently consider the combination of one single relation with one single timestamp, ignoring the global nature of the embedding. We propose a Frequency Attention (FA) model to capture the global temporal dependencies between one relation and the entire timestamp. Specifically, we use Discrete Cosine Transform (DCT) to capture the frequency of the timestamp embedding and further compute the frequency attention weight to scale embedding. Meanwhile, the previous temporal tucker decomposition method uses a simple norm regularization to constrain the core tensor, which limits the optimization performance. Thus, we propose Orthogonal Regularization (OR) variants for the core tensor, which can limit the non-superdiagonal elements of the 3-rd core tensor. Experiments on three standard TKGC datasets demonstrate that our method outperforms the state-of-the-art results on several metrics. The results suggest that the direct-current component is not the best feature for TKG representation learning. Additional analysis shows the effectiveness of our FA and OR models, even with smaller embedding dimensions.
## Tucker Decomposition With Frequency Attention For Temporal Knowledge Graph Completion Likang Xiao1,3**, Richong Zhang**1∗ , Zijie Chen2**, Junfan Chen**1 1SKLSDE, School of Computer Science and Engineering, Beihang University, Beijing, China 2School of Electrical and Computer Engineering, University of Toronto, Toronto, Canada 3Shen Yuan Honors College, Beihang University, Beijing, China {xiaolk, zhangrc, chenjf}@buaa.edu.cn chenzijie162@gmail.com ## Abstract Temporal Knowledge Graph Completion aims to complete missing entities or relations under temporal constraints. Previous tensor decomposition-based models for TKGC only independently consider the combination of one single relation with one single timestamp, ignoring the global nature of the embedding. We propose a Frequency Attention (FA) model to capture the global temporal dependencies between one relation and the entire timestamp. Specifically, we use Discrete Cosine Transform (DCT) to capture the frequency of the timestamp embedding and further compute the frequency attention weight to scale embedding. Meanwhile, the previous temporal tucker decomposition method uses a simple norm regularization to constrain the core tensor, which limits the optimization performance. Thus, we propose Orthogonal Regularization (OR) variants for the core tensor, which can limit the non-superdiagonal elements of the 3-rd core tensor. Experiments on three standard TKGC datasets demonstrate that our method outperforms the state-of-the-art results on several metrics. The results suggest that the direct-current component is not the best feature for TKG representation learning. Additional analysis shows the effectiveness of our FA and OR models, even with smaller embedding dimensions. ## 1 Introduction Knowledge graph (KG) contains a number of structured facts (*h, r, t*), where a fact expresses a directed relation r from a head entity h to a tail entity t. The complex KGs, such as FreeBase (Berant et al., 2013), DBPedia (Auer et al., 2007), and Wikidata (Vrandeciˇ c and Krötzsch ´ , 2014), are collected manually or automatically from structured or unstructured data on the web. Such KGs are successfully applied to several downstream tasks, e.g., Question Answering (Berant et al., 2013) and ![0_image_0.png](0_image_0.png) Figure 1: A toy example from the temporal knowledge graph shows an athlete's career. This example illustrates the temporal dependencies of facts that Tom Adeyemi plays for four teams from 2011 to 2017. Toy example scores calculated by TuckER-FA are in the appendix C. Recommender System (Wang et al., 2018). However, those works ignore that many facts in the KGs change over time. Temporal facts can be expressed as a quadruplet (*h, r, t, τ* ), with τ being the timestamp. The temporal Knowledge graphs (TKGs), such as ICEWS (Lautenschlager et al., 2015), GDELT (Leetaru and Schrodt, 2013) and YAGO (Mahdisoltani et al., 2015), are then built to handle these facts coupled with timestamps. One problem that hinders the application of TKGs in downstream tasks is the inevitable incompleteness or knowledge scarcity problem caused by missing entities or relations. Thus, Temporal Knowledge Graph Completion (TKGC) aiming to complete the missing entities or relations over time has become an essential task in the research community. The previous methods for TKGC can be divided into four branches, in which the critical challenge is how to integrate timestamps into KGC modeling. 
Time-dependent Embedding method (Trivedi et al., 2017; Goel et al., 2020; Dasgupta et al., 2018) considers the temporal information as a transformation or an activation function for entities or relations. Timestamp Embedding method (Han et al., 2021b; Lacroix et al., 2020; Shao et al., 2022) treats timestamps as additional learnable embeddings of the score function. Experimental experience (Han et al., 2021b) suggests that timestamp embeddings generally perform bet- ∗Corresponding author: zhangrc@act.buaa.edu.cn ter than time-dependent embeddings. Knowledge Graph Snapshots method (Liao et al., 2021; Li et al., 2021) aggregates multi-relational interactions of cropped subgraph to achieve more precise representations. Historical Context method (Jung et al., 2021; Zhu et al., 2021) model n-hop facts chain or repeat facts to increase the interpretability. The previous timestamp embedding methods model each quadruplet independently, which only captures the *local* temporal dependencies, ignoring the *global* temporal dependencies between one relation and entire timestamp. As shown in Figure 1, an athlete plays for different teams in different periods. We treat such events as a continuous line of events rather than as separate events. We propose a Frequency Attention (FA) model to address this issue. Specifically, we treat each dimension in the timestamp embedding as a long-term signal and use Discrete Cosine Transform (DCT) to capture the frequency of the signal. Furthermore, we take the frequency and part of the relation embedding as input to calculate attention weights for each timestamp. The proposed frequency attention model can easily apply to exist tensor decomposition methods. The previous tucker decomposition method is interpreted as a high-dimensional linear classifier to distinguish facts. TuckERTNT (Shao et al., 2022), uses a simple L2 norm as the core tensor regularization. However, this regularization may be overstrict, leading the embedding to change sharply and risk vanishing or explosion. Inspired by orthogonal regularization in (Brock et al., 2019), we propose two variants of orthogonal regularization (OR) for the core tensor, i.e., excluding superdiagonal elements or diagonal elements of each slice matrix of the core tensor. This way, we achieve a balanced core tensor regularization, preventing the embedding norm from vanishing or exploding. In summary, our work makes the following contributions: (1) we propose a frequency attention model using DCT to capture the global temporal dependency between relations and entire timestamp. (2) we introduce two variants of core tensor orthogonal regularization for the tucker decomposition, which can prevent the embedding norm from vanishing or explosion. (3) Experiment results on three standard datasets show that our model outperforms the SOTA models on several metrics. The additional analysis demonstrates the effectiveness of our frequency attention model and orthogonal regularization. ## 2 Related Works 2.1 Static Kg Embedding There has been ample research on static knowledge graph embedding. We grouped all mainstream models into four main categories. Tensor decomposition-based models RESCAL (Nickel et al., 2011), Distmult(Yang et al., 2015), ComplEx (Trouillon et al., 2016), and TuckER (Balazevic et al., 2019) compute triplet score in real or complex domain. Distance-based models are built upon euclidean or hyperbolic distance as shown in TransE (Bordes et al., 2013), mainfoldE (Xiao et al., 2016) and RotatE (Sun et al., 2019). 
DURA (Zhang et al., 2020) figure out that distance-based method can be viewed as decomposition method with a L2 regularization. Neural-based models use a convolutional network to capture the KG structure information, as shown in ConvE (Dettmers et al., 2018). Other models learn from a variety of experiences from fields. Inspired by reinforcement learning, MultiHopKG (Lin et al., 2018) sample nhop triplets chain to compute the fact triplet scores and re-rank candidates. ## 2.2 Temporal Kg Embedding There are two scenarios for integrating temporal information into existing static embedding models, timestamp embedding and time-dependent entity embedding. Time-dependent entity embedding can explicitly model dynamic changes of entities, such as periodicity and trending. A well-known timedependent entity embedding is diachronic embedding (Goel et al., 2020), which uses the sine function to represent the frequency of entity evolution over different time granularity. (Han et al., 2021b) compares six KG embedding models, and figures out that timestamp embedding can achieve similar or even better performance with significantly fewer parameters. Although timestamp embedding might suffer from the growing number of timestamps, the time granularity can also be controlled within an appropriate range by enlarging. Further analysis in TNTComplEx (Lacroix et al., 2020) points out that time-dependent relation embedding can obtain comparable results to time-dependent entity embedding with smaller computational costs. From the viewpoint of the subgraph, there are a series of knowledge graph snapshots/subgraphs over time, which contain potential multi-relational interactions. (Liao et al., 2021) adopt probabilistic entity representations based on variational Bayesian inference to jointly model the entity features and the uncertainty. (Li et al., 2021) employs a multi-layer graph convolutional network on each subgraph to capture the dependencies of adjacent facts. From another contextual perspective, the relevance between the query and its historical context can be used as evidence for reasoning. (Jung et al., 2021) proposes a multi-hop reasoning model using a graph attention layer and finds that temporal displacements are more indicative for inference than timestamps. (Zhu et al., 2021) notice more than 80% of events from 1995 to 2019 in the ICEWS repository are repeated events. In this case, they introduce a copy mechanism to re-rank the candidates. ## 3 Preliminaries 3.1 Problem Definition To formally define the problem and describe the solution, we use consistent notations in the rest of the paper. We represent scalars with the lower case letters, e.g., dr, represent sets with the flower letters, e.g., E, represent vectors with the bold lower case letters, e.g., hi, denoting the i th entity embedding. We use bold upper letters H to denote the embedding matrix and represent the high order tensor with bold flower letters W. We use a ⊙ b to denote Hadamard (elementwise) product of two vectors or matrix, a ⊗ b to denote the tensor outer product, [A|B] to denote the vector or matrix concatenation operator, *|| · ||*p to denote the p-norm of a vector or tensor. A temporal knowledge graph G consists of a set of facts {(hi, rj , tk, τl)*} ⊆ E × R × E × T* , where E is a finite entity set, R is a finite relation set and T is a finite timestamp set. Each quadruplet (hi, rj , tk, τl) respectively denotes a relation rj from the head entity hito tail entity tk at a specific time τl. 
A temporal knowledge graph uses a binary tensor X = {0, 1}*|E|×|R|×|E|×|T |* to indicate whether the corresponding quadruplets occurs in the KG data set. |E|, |R| and *|T |* denote the number of entities, relations, and timestamps, respectively. Although knowledge graphs contain large numbers of facts, they are still incomplete due to the complex nature of the real world. TKGC aims to predict the missing entity. We focus on the link prediction problem, aiming to predict the tail entity or head entity through the query (hi, rj , ?, τl) or (?, rj , tk, τl). The problem reduces to ranking a set of candidate entities to select the most likely entity that makes the partial quadruplet factual. The problem can be formulated as a ranking problem to learn a quadruplet score function Xˆ(E, R, E, T ) ∈ R to sort all candidate entities. ## 3.2 Tucker Decomposition For Tkg Embedding Many tensor decomposition methods apply to the KG embeddings, such as bilinear decomposition, canonical decomposition, and tucker decomposition. Among these methods, tucker decomposition, a kind of principal component analysis approach for high-order tensors, is viewed as the general one. In particular, when the super-diagonal elements in the core tensor of Tucker equal 1 and other elements equal 0, tucker decomposition degrades into canonical decomposition. TuckER (Balazevic et al., 2019) has proved that Dismult (Yang et al., 2015) and ComplEx (Trouillon et al., 2016) can be included into the framework of TuckER. In KGC task, An 3-order tensor X can be decomposed into a core tensor W ∈ R De×Dr×De and entity/relation embedding matrix E/R as factor matrix. The formula of the tucker decomposition is as follows. $$\begin{array}{l}{{\mathcal{X}=\mathcal{W}\times_{1}\mathbf{E}\times_{2}\mathbf{R}\times_{3}\mathbf{E}=<\mathcal{W};\mathbf{E},\mathbf{R},\mathbf{E}>}}\\ {{=\sum_{d_{1}=1}^{D_{e}}\sum_{d_{2}=1}^{D_{r}}\sum_{d_{3}=1}^{D_{e}}W_{d_{1}d_{2}d_{3}}\mathbf{h}_{:d_{1}}\otimes\mathbf{r}_{:d_{2}}\otimes\mathbf{t}_{:d_{3}}}}\end{array}$$ ×n denotes the n-mode product of the tensor, which can be explained as the core tensor expanding into a matrix along the n-th dimension. To obtain proper timestamp embedding T , TuckERTNT (Shao et al., 2022) use two relation embeddings R and Rtto separately capture time-variant information and time-invariant information as follows. ## X =< W; E, Rt ⊙ T + R, E > Although previous works have achieved good results in the TKGC task, they may still encounter many problems. First, the learnable parameters representing the frequency of DE-SimplE may be clustered around 0, affecting the model performance (as shown in Appendix D). Second, TuckERTNT constrains the core tensor with the fourth power of L4 norm. However, it is not guaranteed that the n-mode product of the core tensor is well-perform. Therefore, there is still space to improve the temporal tucker decomposition for the TKGC task. ## 4 Model We propose a new framework, TuckER-FA, combining the FA and OR with the temporal tucker decomposition method. We input the timestamp embedding and relation embedding to the FA model to compute frequency attention weights, then weighted-sum the timestamp embedding and combine it with the relation embedding. The timestamp-enhanced relation feature and head/tail entity embedding compose the factor matrix of tucker decomposition. In learning progress, we include several regularization losses and the orthogonal regularization of the core tensor into the overall objective. 
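Before turning to the score function, here is a minimal sketch of the TuckERTNT-style temporal Tucker score of Section 3.2, <W; h, r_t ⊙ τ + r, t>, written with einsum. Shapes and variable names are ours for illustration and are not taken from any released implementation.

```python
import torch

def temporal_tucker_scores(W, h, r_time, r_static, tau, E_all):
    """Score <W; h, r_t * tau + r, t> against every candidate tail entity.
    W: (d_e, d_r, d_e) core tensor; h: (B, d_e); r_time, r_static, tau: (B, d_r);
    E_all: (N, d_e) full entity embedding matrix.  Returns (B, N) scores."""
    rel = r_time * tau + r_static                      # time-variant + time-invariant relation
    hr = torch.einsum('pqr,bp,bq->br', W, h, rel)      # contract core tensor with head/relation
    return hr @ E_all.t()                              # score all candidate tails at once
```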
![3_image_0.png](3_image_0.png) ## 4.1 Score Function We propose a score function for TKGC based on tucker decomposition shown in Figure 2. Specifically, the formula of the score function is. $$\mathcal{X}_{i j k l}=<\mathcal{W};\mathbf{h}_{i},\mathbf{r}_{j1}\odot[\mathbf{r}_{j2}|F A(\hat{\mathbf{r}}_{j1},\mathbf{t}_{l})],\mathbf{t}_{k}>$$ We use ρ =dτ dr to control the embedding dimension ratio between timestamps and relations. rj1 ∈ R dr and rj2 ∈ R dr−dτrespectively denote two relation embeddings. τl ∈ R dτis the timestamp embedding. FA represents the Frequency Attention model. The input rˆj1 denote the part of relation embedding which aligns with τl. The main amount of computation is concentrated on tucker decomposition rather than the FA model. ## 4.2 Frequency Attention To capture the crucial temporal features of TKG, we propose a Frequency Attention (FA) model shown in Figure 3. We treat the evolution of timestamp embedding over time as a combination of periodic functions with different frequencies. Inspired by FcaNet (Qin et al., 2021), we use Discrete Cosine Transform (DCT) to capture the different frequency components of timestamp embeddings. In this way, we can capture the global temporal dependency of one relation and the entire timestamp. The chronologically arranged timestamp embedding T ∈ R Nτ ×dτis viewed as dτ different temporal signals. The direct-current (DC) component f0 and frequency component fk of DCT are respectively formulated as follows. $$f_{0}{=}\sum_{i=0}^{N_{\tau}-1}T_{i}=G A P(T)N_{\tau}$$ $$f_{k}{=}\sum_{i=0}^{N_{\tau}-1}T_{i}c o s(\frac{\pi k}{N_{\tau}}(i+\frac{1}{2}))\ \ k\in\{0,...,N_{\tau}-1\}$$ G.A.B. _cannot_ _the_ _block_ _frequency_ _i._ GAP represents the global average pooling operation, which always use to calculate the channel attention weight in the computer vision domain. In the TKGC task, the main calculation procedure of direct-current component frequency attention is as follows. ## F A(Τ L) = Σ(F C(Gap(T:D)Nτ )) ⊙ Τ:D The FC block represents a fully-connected layer, and σ denotes sigmoid function. It is natural to include more frequency components to calculate attention weight. Considering the limitation of computing resources, we optionally select part of the frequency components. We divide the time embedding dimension dτ into n parts and assign a set of selected frequencies f0*, ...f*n to each part. We also introduce rˆj1, part of relation embedding aligned with τl, into the FA model. $$F A({\hat{\mathbf{r}_{j}}},\mathbf{\tau}_{l})\!=\!\sigma(F C({\hat{\mathbf{r}_{j1}}}\odot[f_{0}|f_{1}|...|f_{n}]))\odot\mathbf{\tau}_{l}$$ In addition, the computation complexity of the operation with finite orthogonal function bases is linear. The cost of computation of the frequency ![4_image_0.png](4_image_0.png) attention model is negligible compared to the cost of computation in tucker decomposition. The frequency attention weight determines how much timestamp embedding information for the corresponding dimension is retained. The FA model considers the evolution of a single relation over the entire timestamp. ## 4.3 Orthogonal Regularization For Core Tensor There have been many research results on orthogonal regularization of the matrix, such as (Miyato et al., 2018). In summary, the orthogonal regularization allows the parameter matrix to be closer to the diagonal-dominant non-singular matrix. A nonsingular matrix prevents abrupt truncation changes of the feature map during matrix multiplication. 
Furthermore, tucker decomposition can be viewed as matrix multiplication for factor matrix with the arbitrary slice of core tensor. In other words, orthogonal regularization could be applied to core tensor multiplication. Inspired by BigGAN (Brock et al., 2019), we heuristically propose two variants of core tensor orthogonal regularization Φ1 and Φ2. The baseline is the simple norm regularization Φ0 used in ## Tuckertnt. $$\begin{array}{l}{{\Phi_{0}(\theta)=||{\mathcal{W}}||_{4}^{4}}}\\ {{\Phi_{1}(\theta)=||{\mathcal{W}}\odot({\bf1}-{\mathcal{I}})||_{4}^{4}}}\\ {{\Phi_{2}(\theta)=||{\mathcal{W}}\odot({\bf1}-P r o j({\mathcal{I}}))||_{4}^{4}}}\end{array}$$ 1, I*, P roj*(I) is tensors with the same shape as W. All the elements of 1 are 1. The superdiagonal elements of I are 1, and the other elements are 0. The diagonal elements of the arbitrary slice matrix of *P roj*(I) are 1, and the other elements are 0. The tucker decomposition degenerates to the CP decomposition when the super-diagonal elements of the core tensor are 1, and the rest are 0. Φ1 regularization can restrict the result of the tucker decomposition to the neighborhood of the weighted CP decomposition. (the super-diagonal elements of the core tensor are weights, and the rest elements indicate a slight difference from weighted CP decomposition. ## 4.4 Other Regularization Researchers have investigated many different kinds of embedding regularization to alleviate the overfitting problem. TNTComplex uses the third power of Nuclear-3 norm twice for temporal or non-temporal quadruplets. TIMEPLEX (Jain et al., 2020) use sampled weighted L2 regularization to avoid the overfitting problem. In our model, we use embedding regularization as ChronoR (Sadeghian et al., 2021) does, using the fourth power of L4 norm as embedding regularization. $$\Omega_{4}(\theta)\!=\!||\mathbf{h}||_{4}^{4}+||\mathbf{t}||_{4}^{4}+||\mathbf{r}_{1}||_{4}^{4}+||[\mathbf{r}_{2}|F A(\mathbf{\tau})]||_{4}^{4}$$ Because of the real-world time continuity, it is natural to guarantee adjacent timestamp embeddings or repeat timestamp embeddings closer in the embedding space. TuckERTNT (Shao et al., 2022) proposes several temporal regularization to smooth the timestamp embedding. Notice that YAGO15K has many quadruplets without any timestamp, in which we artificially add a unique timestamp. This unique timestamp is excluded when computing temporal regularization terms. To be consistent with the embedding regularization, the adjacent timestamp differential regularization is as follow. $$\Lambda_{4}(\theta)=\frac{1}{|\mathcal{T}|-1}\,\sum_{i=1}^{|\mathcal{T}|-1}\,||\mathbf{\tau}_{i+1}-\mathbf{\tau}_{i}||_{4}^{4}$$ ## 4.5 Loss Function For each training data, we use instantaneous multiclass loss. $${\mathcal{L}}({\mathcal{X}}_{i j k l})=-{\mathcal{X}}_{i j k l}+l o g(\sum_{k^{\prime}}e x p({\mathcal{X}}_{i j k^{\prime}l}))$$ Considering instantaneous multi-class loss and the above three regularization term jointly, we train our model by minimizing the following loss function. $$\begin{aligned} \mathcal{L}(\mathcal{X};\theta) &= \frac{1}{|S|}\sum_{(i,j,k,l)\in S} \left[\mathcal{L}(\mathcal{X};\theta) + \lambda \Phi_n(\mathcal{X};\theta)\right.\\ &\left.+\lambda_1\Omega_4(\mathcal{X};\theta) + \lambda_2\Lambda_4(\mathcal{X};\theta))\right] \quad n=1,2,3 \end{aligned}$$ where $\lambda_1$, $\lambda_2$ and $\lambda$ is importance hyperparameter. for tuning. 
## 5 Experiments 5.1 Datasets And Evaluation Metrics We choose three of the most commonlyused datasets to evaluate our model, including ICEWS14, GDELT, and YAGO15K. The detailed statistics of each dataset are shown in Table 1. ICEWS14 is extracted from the Integrated Crisis Early Warning System (Lautenschlager et al., 2015) repository, which contains political events with daily timestamp points. This dataset, for the most part, is time-sensitive and accurate in descriptions. GDELT is extracted from the Global Database of Events, Language, and Tone (Leetaru and Schrodt, 2013), which covers news data from 1979 to the present by automatically crawling. GDELT is a complicated dataset because of its abstract entity, such as government and organization. YAGO15K (Friedland and Lim, 2018) augmented events of FB15K (Bordes et al., 2013) with time interval. YAGO15K is worst-perform in TKGC tasks because it requires the model to handle both temporal and non-temporal knowledge. We follow the standard evaluation set in previous work, and report two standard metrics, Hit@k (k ∈ {1, 3, 10}) and filtered Mean Reciprocal Rank (MRR). They can evaluate the rank of the correct entity in the filtered candidate set. Hit@k reflects the percentage of the query whose correct tail entities are ranked within the top k candidates. Mean Reciprocal Rank, which computes the average of the reciprocal of mean rank, reflects the correct fact rank of the model. We follow the time-aware filtering (Han et al., 2021a), which means entities that cause ambiguity are removed from the candidate list for a query. We using reciprocal setting to add (tk, r−1 j, hi, τl) into train set for each quadruplet (hi, rj , tk, τl). The detailed hyperparameters of our model are shown in Appendix B. ## 5.2 Main Results Table 2 shows the main temporal knowledge graph completion results. The results of other models come from the original paper. We use the bold number to indicate the existing best results. Our model slightly outperforms or ties with previous SOTA results on several metrics of ICEWS14 and YAGO3-10. On GDELT, our model achieves significant improvement results on all metrics. Compared with TuckERTNT, our model TuckER-FA Table 1: The statistics of the benchmark datasets. | ICEWS14 | GDELT YAGO15K | | | |-------------|-----------------|-----------|----------| | #Entity | 7128 | 500 | 15403 | | #Relation | 230 | 20 | 34 | | #Timestamp | 365 | 366 | 198 | | #Facts | 90730 | 3419607 | 138056 | | Timespan | 2014 2015-2016 | 1513-2017 | | | Granularity | Daily | Daily | Annually | | Type | Point | Point | Interval | Table 2: The evaluation results on ICEWS14, GDELT, and YAGO15K. For the other works, we report the best results reported in their original paper. 
Model ICEWS14 GDELT YAGO15K Metric MRR H@1 H@3 H@10 MRR H@1 H@3 H@10 MRR H@1 H@3 H@10 TransE 28.0 9.4 - 63.7 15.5 6.0 17.8 33.5 29.6 22.8 - 46.8 SimplE 45.8 34.1 51.6 68.7 20.6 12.4 22.0 36.6 - - - - ComplEx 47.0 35.0 54.0 71.0 21.3 13.3 22.5 36.6 36.0 29.0 36.0 45.0 TTransE 25.5 7.4 - 60.1 11.5 0.0 16.0 31.8 32.1 23.0 - 51.0 TA-DistMult 47.7 36.3 - 68.6 20.6 12.4 21.9 36.5 29.1 21.6 - 47.6 DE-SimplE 52.6 41.8 59.2 72.5 23.0 14.1 24.8 40.3 - - - - TeMP 60.7 54.5 67.3 77.4 - - - - 27.5 19.1 29.7 43.7 TNTComplex 62.0 52.0 66.0 76.0 22.4 14.4 23.9 38.1 36.0 28.4 37.0 53.7 ChronoR 62.5 **54.7** 66.9 77.3 - - - - 36.6 **29.1** 37.9 53.8 BoxTE 61.5 53.2 66.7 76.4 35.2 26.9 37.7 51.1 - - - - TuckERTNT 62.5 54.4 67.3 77.3 44.8 35.2 49.2 63.0 - - - - TuckER-FA **62.7** 54.4 67.7 78.0 48.6 39.3 53.2 **66.0** 36.5 28.2 39.2 **54.3** Table 3: Ablation study of FA and OR TuckER-FA. has d[(1 − ρ)*|T |* + (2 + ρ)|R|] fewer parameters and 1.1% higher MRR performance when using the same embedding dimensionality. Compared with 2500+ dimensions of entity and relation of TNTComplex and ChronoR, our model gets better results using only 400 dimensions of entity and relation. Increasing the number of model parameters substantially improves MRR performance on the GDELT, but the improvement on the other two models is slight. Compared with ICEWS14, the GDELT dataset has nearly 38 times the number of facts, and fewer entities/relations, within the same time span. The corresponding graph of GDELT is spatially denser, with many more recurring facts. As a result, GDELT greatly enhances the global temporal dependency of relations, which is exactly our FA model focus on. Complex global temporal dependency explains TuckER-FA outstanding advantage on GDELT compared with its counterparts. ## 5.3 Ablation Study Table 3 shows the ablation study of the Frequency Attention(FA) model and core tensor Orthogonal Regularization(OR) model. It can be observed that both models are valuable when working individually, and the combination of them performs even better. We can point out that the OR model is more effective than the FA model. The single FA model increases the accuracy significantly in ICEWS14 compared with the vanilla model, which is probably because this dataset has the shortest time span and the minimal data. The improvement of the single OR model is slight in YAGO15K, and it may be attributed to the presence of facts without timestamps. ![6_image_0.png](6_image_0.png) | MRR | ICEWS14 | GDELT | YAGO15K | |----------|------------|------------|-------------| | ALL | 62.7 | 48.6 | 36.5 | | w/o FA | 62.0(-0.7) | 47.3(-1.3) | 34.8 (-1.7) | | w/o OR | 60.6(-2.1) | 46.1(-2.5) | 33.4 (-3.1) | | w/o Both | 58.4(-4.3) | 44.9(-3.7) | 32.6 (-3.9) | ## 5.4 **Frequency Of Temporal Knowledge Graph** From the result of FcaNet (Qin et al., 2021) in the Appendix E, we can notice frequency atten- ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) (b) YAGO15K tion model with a single direct-current component always reach the best result in the Image Classification task. The low-frequency component is more critical than the high-frequency component because the low-frequency signal of the images provides information such as shape or size, while the high-frequency signal provides information such as edge details. However, this phenomenon does not occur in the TKGC task. Usually, it is difficult to have many repeated details in a naturally captured image. 
In contrast, many facts, such as the toy example, continuously change over time in TKGC. These facts determine that there exist some inherent frequencies in timestamp embedding. Figure 4 shows the results of the frequency attention model with different single DCT bases. We use a combination of frequencies from the top six results to achieve our best results. The difference between the frequencies was insignificant, meaning the global temporal dependency spread over all frequencies. In detail, the direct-current component is the second-high result in ICEWS14 but the sixth-high in YAGO15K. The reason may be that the facts without a timestamp in YAGO15K disturb the estimation of the intrinsic frequency. The above results show that finding a suitable frequency feature with FA is helpful for the TKGC task. To study why TuckER-FA performs differently on different datasets compared to other baselines, we count the number of the query (*h, r,* ?, ?) occur- Table 4: MRR Results for different test set divisions on ICEWS14 and YAGO15k. | Dataset | Total | DC | HF | |-----------|---------|------|------| | ICEWS14 | 62.5 | 61.3 | 64.2 | | YAGO15k | 36.5 | 35.1 | 39.4 | rences over the entire timestamp. The average number of occurrences per query in GDELT is 284.7, and queries that occur more than once account for 98.8% of the training set. The statistics are 7.33, 40.8% in ICEWS14, and 4.51, 54.9% in YAGO15k, respectively. Thus, TuckER-FA gains better results on GDELT because the FA module is good at capturing the global temporal dependencies between one relation over the entire timestamp. Table 4 show the MRR Results for two test set divisions on ICEWS14 and YAGO15k. We split the test set into two subsets, one consisting of queries that occur only once (direct-currency component, DC) and the other consisting of queries that appear multiple times (high-frequency component, HF). In conclusion, our model works much better on the high-frequency component test set. Because the GDELT dataset is almost exclusively high-frequency components, it boosts most significantly. ## 5.5 Orthogonal Regularization Figure 5 shows the detailed comparison of three regularizations for the core tensor. In this experiment, we fix the λ1 and λ2 and only use the direct-current component for the frequency attention model. The Φ2 regularization without the diagonal elements of the arbitrary slice matrix of the core tensor performs slightly better than the Φ0 norm regularization. The Φ1 regularization without the superdiagonal elements of the core tensor performs best. The Φ1 regularization increase MRR by 0.7 points on ICEWS14 and 1.8 points on YAGO15K. Top Performing Φ1 can increase MRR to 2.1 points on ICEWS14 and 3.1 points on YAGO15K. We can point out that the relaxation of the core tensor constraint is effective. ![7_image_2.png](7_image_2.png) $$\frac{\mathrm{HF}}{64.2}$$ $$\underline{39.4}$$ $$\pi.1$$ $\square$ ## 5.6 Effect Of Parameter Complexity To compare model results more fairly, we add two experiments controlling the number of parameters or entity dimensionality between TNTcomplEx, TeLM, and TuckER-FA. A complete parameter comparison between baseline models and TuckERFA is in Appendix A. Figure 6 shows our TuckER-FA consistently outperforms the baseline TNTComplEx and TeLM with the same dimensionality on ICEWS14. TNTComplEx uses a specialized matrix to represent non-timestamp embedding, which allows it to perform best in low dimensionality on YAGO15K. Our TuckER-FA can overtake in high dimensionality. 
The upper limit of TeLM is not as high as TuckER-FA. In summary, our model has the highest performance ceiling for TKGC. Table 5 shows the comparison with the same parameters and 50 epochs training on ICEWS14. We limit the size of the learnable parameters to approximately 67M, which means 2110 embedding dimensions for TeLM, 4000 for TNTComplEX, and 400 for our TuckER-FA. We can point out that our TuckER-FA model achieves a significant improvement in MRR and Hit@3 and a weak improvement in the other two metrics. | Model | MRR | Hit@1 | Hit@3 | Hit@10 | |------------|-------|---------|---------|----------| | TuckER-FA | 62.5 | 54.3 | 67.5 | 77.2 | | TNTComplEx | 61.2 | 52.4 | 66.3 | 77.4 | | TeLM | 62.1 | 54.2 | 66.7 | 77.0 | Table 5: Results with approximately 67M parameters of TNTComplEx, TeLM, and TuckER-FA on ICEWS14. ## 6 Conclusion In our work, we propose a DCT-based Frequency Attention model and two variants of Orthogonal Regularization for the core tensor of tucker decomposition. The FA model considers the global temporal dependency between one relation and the entire timestamp. Each KG has its unique inherent frequency. The OR term relaxes the constraint on the superdiagonal of the core tensor and improves the performance of tucker decomposition. TuckER-FA achieves SOTA results on three standard datasets of temporal knowledge graph completion task. There might be further discussions on an efficient frequency selection strategy or a theoretical assumption for tensor regularization. ## 7 Limitation Although our method has been shown effective, it has two limitations that may be improved in the future. First, the FA model has advantages in computation but relies on an effective frequency selection strategy, which is difficult to design. We just simply select some manual frequencies for different datasets by experience. The more effective frequency selection strategy needs further exploration. Second, there is no theoretical guarantee that the orthogonal regularization can generalize to a 3-order tensor. Our OR terms are only formally consistent with matrix orthogonal regularization, which has been empirically shown effective. ## Acknowledgements This work is supported partly by the National Key R&D Program of China under Grant 2021ZD0110700, partly by the Fundamental Research Funds for the Central Universities, and partly by the State Key Laboratory of Software Development Environment. ## References Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The Semantic Web, pages 722–735, Berlin, Heidelberg. Springer Berlin Heidelberg. Ivana Balazevic, Carl Allen, and Timothy M. Hospedales. 2019. Tucker: Tensor factorization for knowledge graph completion. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5184–5193. Association for Computational Linguistics. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In *Proceedings of the 2013* conference on empirical methods in natural language processing, pages 1533–1544, Seattle, USA. Association for Computational Linguistics. Antoine Bordes, Nicolas Usunier, Alberto GarcíaDurán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. 
In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787–2795. Andrew Brock, Jeff Donahue, and Karen Simonyan. 2019. Large scale GAN training for high fidelity natural image synthesis. In *7th International Conference on Learning Representations, ICLR 2019, New* Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Pratim Talukdar. 2018. Hyte: Hyperplanebased temporally aware knowledge graph embedding. In *EMNLP*. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 1811–1818. AAAI Press. Shmuel Friedland and Lek-Heng Lim. 2018. Nuclear norm of higher-order tensors. *Math. Comput.*, 87(311):1255–1281. Rishab Goel, Seyed Mehran Kazemi, Marcus A. Brubaker, and Pascal Poupart. 2020. Diachronic embedding for temporal knowledge graph completion. In *AAAI*. Zhen Han, Zifeng Ding, Yunpu Ma, Yujia Gu, and Volker Tresp. 2021a. Learning neural ordinary equations for forecasting future links on temporal knowledge graphs. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 8352– 8364. Association for Computational Linguistics. Zhen Han, Gengyuan Zhang, Yunpu Ma, and Volker Tresp. 2021b. Time-dependent entity embedding is not all you need: A re-evaluation of temporal knowledge graph completion models under a unified framework. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 8104– 8118. Association for Computational Linguistics. Prachi Jain, Sushant Rathi, Mausam, and Soumen Chakrabarti. 2020. Temporal knowledge base completion: New algorithms and evaluation protocols. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 3733– 3747. Association for Computational Linguistics. Jaehun Jung, Jinhong Jung, and U. Kang. 2021. Learning to walk across time for interpretable temporal knowledge graph completion. *Proceedings of the* 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. Timothée Lacroix, Guillaume Obozinski, and Nicolas Usunier. 2020. Tensor decompositions for temporal knowledge base completion. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Jennifer Lautenschlager, Steve Shellman, and Michael Ward. 2015. ICEWS Event Aggregations. Kalev Leetaru and Philip A Schrodt. 2013. Gdelt: Global data on events, location and tone, 1979-2012. Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, Huawei Shen, Yuanzhuo Wang, and Xueqi Cheng. 2021. Temporal knowledge graph reasoning based on evolutional representation learning. *Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information* Retrieval. 
Siyuan Liao, Shangsong Liang, Zaiqiao Meng, and Qiang Zhang. 2021. Learning dynamic embeddings for temporal knowledge graphs. *Proceedings of the* 14th ACM International Conference on Web Search and Data Mining. Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3243–3253. Association for Computational Linguistics. Farzaneh Mahdisoltani, Joanna Biega, and Fabian M. Suchanek. 2015. YAGO3: A knowledge base from multilingual wikipedias. In *Seventh Biennial Conference on Innovative Data Systems Research, CIDR* 2015, Asilomar, CA, USA, January 4-7, 2015, Online Proceedings. www.cidrdb.org. Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. 2018. Spectral normalization for generative adversarial networks. In *6th International* Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In *Proceedings of* the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 809–816. Omnipress. Zequn Qin, Pengyi Zhang, Fei Wu, and Xi Li. 2021. Fcanet: Frequency channel attention networks. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pages 763–772. IEEE. Ali Sadeghian, Mohammadreza Armandpour, Anthony Colas, and Daisy Zhe Wang. 2021. Chronor: Rotation based temporal knowledge graph embedding. In *Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI* 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 6471–6479. AAAI Press. Pengpeng Shao, Dawei Zhang, Guohua Yang, Jianhua Tao, Feihu Che, and Tong Liu. 2022. Tucker decomposition-based temporal knowledge graph completion. *Knowl. Based Syst.*, 238:107841. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Rakshit S. Trivedi, Hanjun Dai, Yichen Wang, and Le Song. 2017. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In *ICML*. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In *Proceedings of the 33nd International Conference on* Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of *JMLR Workshop and Conference Proceedings*, pages 2071–2080. JMLR.org. Denny Vrandeciˇ c and Markus Krötzsch. 2014. ´ Wikidata: A free collaborative knowledgebase. *Commun.* ACM, 57(10):78–85. Hongwei Wang, Fuzheng Zhang, Jialin Wang, Miao Zhao, Wenjie Li, Xing Xie, and Minyi Guo. 2018. Ripplenet: Propagating user preferences on the knowledge graph for recommender systems. In *Proceedings of the 27th ACM International Conference* on Information and Knowledge Management, pages 417–426. Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. 
From one point to a manifold: Knowledge graph embedding for precise link prediction. In *Proceedings of* the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 1315–1321. IJCAI/AAAI Press. Chenjin Xu, Mojtaba Nayyeri, Fouad Alkhoury, Hamed Shariat Yazdi, and Jens Lehmann. 2020. Temporal knowledge graph completion based on time series gaussian embedding. In The Semantic Web - ISWC 2020 - 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part I, volume 12506 of *Lecture Notes in* Computer Science, pages 654–671. Springer. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Zhanqiu Zhang, Jianyu Cai, and Jie Wang. 2020. Duality-induced regularizer for tensor factorization based knowledge graph completion. *CoRR*, abs/2011.05816. Cunchao Zhu, Muhao Chen, Changjun Fan, Guangquan Cheng, and Yan Zhan. 2021. Learning from history: Modeling temporal knowledge graphs with sequential copy-generation networks. In *AAAI*. ## A Parameter Complexity Table 6: Parameter complexity of different models. Here d denotes the embedding dimensionality. | DE-SimplE | d((1 + 2ρ)|E| + |R|) | |-------------------------------------------------------------------------------------------------------------------------------------------------|------------------------| | TNTComplEx 2d(|E| + |T | + 4|R|) TeLM 4d(|E| + |T | + |R| + 1) TuckERTNT d(|E| + |T | + 4|R|) + d 3 TuckER-FA d(|E| + ρ|T | + (2 − ρ)|R|) + d 3 | | When d is smaller the number of entity/relation/timestamp collection, tucker decomposition has considerable parameter advantage. Compared with TuckERTNT of the same type, our TuckER-FA gets better results with parameter advantage in modeling relation embedding. ## B Hyperparameters Table 7: Obtained best hyperparameters. | ICEWS14 | GDELT | YAGO15K | | |-----------|---------|-----------|------| | ρ | 0.75 | 0.90 | 0.75 | | λ1 | 1e-2 | 1e-1 | 1e-4 | | λ2 | 1e0 | 1e0 | 1e-2 | | λ | 1e-2 | 1e-4 | 1e-3 | We implement our model based on two previous models, TuckER and TNTcomplex. Other baseline models mentioned above have yet to provide publicly available code. During the pre-processing data phase, we convert all time intervals into two different time points and consider them independent. The time intervals in YAGO15K look like "OccursUntil/OccurSince 1994". The time interval is split into two parts, "OccursSince" or "OccursUntil" merged into a relation, and the time point transforms the timestamp. Note that quadruplet without timestamps in YAGO15K also own a unique timestamp. Although the dimensionality of entity and relation can be different, we use the same dimensionality in our experiments. For general learning settings, we set the dimensionality of entity and relation to 800, batch size to 1000, learning rate to 0.1, and dropout probability to 0.3. Each embedding initializes from 0.01 times the Standard Gaussian distribution. Moreover, the learnable parameters of the core tensor initialize from the uniform distribution from -1 to 1. The frequency attention model uses a single frequency component from f0 to f12 ![11_image_0.png](11_image_0.png) input. 
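As a rough illustration of what feeding a single DCT frequency component into the model could look like, here is a minimal NumPy sketch under FcaNet-style channel-attention assumptions; the function and variable names are illustrative, not taken from the released code:

```python
import numpy as np

def dct_basis(T: int, f: int) -> np.ndarray:
    """f-th standard DCT-II basis vector over T discrete timestamps."""
    t = np.arange(T)
    return np.cos(np.pi * f * (t + 0.5) / T)

def frequency_attention(timestamp_emb: np.ndarray, f: int) -> np.ndarray:
    """FcaNet-style channel gating with a single frequency component (illustrative)."""
    coeff = dct_basis(timestamp_emb.shape[0], f) @ timestamp_emb  # (d,) DCT coefficient per channel
    gate = 1.0 / (1.0 + np.exp(-coeff))                           # sigmoid gate per channel
    return timestamp_emb * gate                                   # re-weighted timestamp embeddings

emb = np.random.randn(365, 400)             # e.g. ICEWS14: 365 timestamps, d = 400
print(frequency_attention(emb, f=3).shape)  # (365, 400); f = 0 would be the DC component
```

With f = 0 the basis is constant and the gate only sees each channel's average over time; higher f lets the gate react to fast, recurring patterns, which is the kind of global temporal dependency the FA module targets.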
We choose ratio ρ form {0.45, 0.6, 0.75, 0.9}, embedding regularization balance term λ1 from {1e-1, 1e-2, 1e-3, 1e-4}, temporal regularization balance term λ2 from {1e0, 1e-1, 1e-2, 1e-3}, and core tensor regularization balance term λ from {1e-2, 1e-3, 1e-4, 1e-5}. Note that the timestamp embedding dimensionality should be divisible by n, the number of the selected frequency components. We repeats the experiment three times and reported the average results. ## C Visualization Of Toy Example To illustrate whether our frequency attention model captures the temporal dependencies between a relation and the entire timestamp, we visualize the scores of a selected set of facts. Figure 7 show the scores change of the toy example. The factual tail entity can consistently score high by TuckER-FA. The gray entity's score changes very little because its ground truth belongs to the test set. The ground truth of the other three color entities belongs to the train set. The facts in the same category (Tom Adeyemi, playsFor,?) change quickly, which reflects a high intrinsic frequency of global temporal dependency. The scores of ground truth are very high on timestamps where facts exist, while all entities achieve a low score on timestamps where facts do not exist. Our model can capture the fastchanging temporal dependency of facts in the same category. ![12_image_0.png](12_image_0.png) ## D The Frequency Of De-Simple DE-SimplE divides the timestamp 2014-01-31 into three numbers representing the year, month, and day. Then, these time numbers are fed into the sine function along with the frequency and phase parameters. Figure 8 shows the histogram of learned frequency parameters of DE-SimplE on the ICEWS14 dataset. These learned parameters concentrate around 0, meaning only low-frequency information about entities over time is captured. In previous work (Xu et al., 2020), the evolution process is divided into four different components: static component, periodicity component, trend component, and randomness component. In our opinion, the random component focuses on model robustness, and the static component focuses on the static entity or relation rather than the timestamp. The periodicity and trend components mean the temporal dependency of relations and timestamps, which can be captured by a periodic function such as cosine. If the period of the function is greater than the entire time span, then the cosine function captures the trend component. Similarly, if the period of the function is less than the entire time span, the cosine function captures the periodicity component. ## Frequency Attention For Image E The image uses 2-dimensional DCT as Figure 9 , while the KG uses only one-dimensional DCT. Frequency is an essential characteristic of DCT and indicates how many repetitive structures there are in the data. The main body of the image is composed of low-frequency features, while the TKG body has high-frequency features due to repeated quadruplets. ![12_image_1.png](12_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? section 7, before the References A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? 
appendix B ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? section 5.1 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? section 5.1 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? appendix B ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? section 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. section 5.1 ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? section 5.6 and appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? appendix B D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
munoz-ortiz-vilares-2023-another
Another Dead End for Morphological Tags? Perturbed Inputs and Parsing
https://aclanthology.org/2023.findings-acl.459
The usefulness of part-of-speech tags for parsing has been heavily questioned due to the success of word-contextualized parsers. Yet, most studies are limited to coarse-grained tags and high-quality written content, and we know little about their influence when it comes to models in production that face lexical errors. We expand these setups and design an adversarial attack to verify whether the use of morphological information by parsers (i) contributes to error propagation, or (ii) can instead play a role in correcting mistakes that word-only neural parsers make. The results on 14 diverse UD treebanks show that, under such attacks, morphological information makes transition- and graph-based models degrade even faster, while it is helpful for the (lower-performing) sequence labeling parsers. We also show that if morphological tags were utopically robust against lexical perturbations, they would be able to correct parsing mistakes.
# Another Dead End For Morphological Tags? Perturbed Inputs And Parsing Alberto Muñoz-Ortiz and David Vilares Universidade da Coruña, CITIC Departamento de Ciencias de la Computación y Tecnologías de la Información Campus de Elviña s/n, 15071 A Coruña, Spain {alberto.munoz.ortiz, david.vilares}@udc.es ## Abstract The usefulness of part-of-speech tags for parsing has been heavily questioned due to the success of word-contextualized parsers. Yet, most studies are limited to coarse-grained tags and high quality written content; while we know little about their influence when it comes to models in production that face lexical errors. We expand these setups and design an adversarial attack to verify if the use of morphological information by parsers: (i) contributes to error propagation or (ii) if on the other hand it can play a role to correct mistakes that word-only neural parsers make. The results on 14 diverse UD treebanks show that under such attacks, for transition- and graph-based models their use contributes to degrade the performance even faster, while for the (lower-performing) sequence labeling parsers they are helpful. We also show that if morphological tags were utopically robust against lexical perturbations, they would be able to correct parsing mistakes. ## 1 Introduction The use of morphological tags was a core component of dependency parsers to improve performance (Ballesteros and Nivre, 2012). With the rise of neural models, feeding explicit morphological information is a practice that has greatly vanished, with (often) the exception of part-of-speech (PoS) tags. In this line, Ballesteros et al. (2015) already found that character-based word vectors helped improving performance over purely word-level models, specially for rich-resource languages, for which the use of morphological information is more relevant (Dehouck and Denis, 2018). Related, Dozat et al. (2017) showed that predicted PoS tags still improved the performance of their graph-based parser, even when used together with character-based representations. Smith et al. (2018) and de Lhoneux et al. (2017) studied the impact that ignoring PoS tag vectors had on the performance of a biLSTM transition-based parser (Kiperwasser and Goldberg, 2016). They conclude that when considering PoS tags, word-level, and character-level embedddings, any two of those vectors are enough to maximize a parser performance, i.e., PoS tag vectors can be excluded when using *both* word-level and characterlevel vectors. Zhou et al. (2020) showed the utility of PoS tags when learned jointly with parsing. Recently, Anderson and Gómez-Rodríguez (2021) and Anderson et al. (2021) have explored the differences between using gold and predicted PoS tags, showing that the former are helpful to improve the results, while the latter are often not, with the exception of low-resource languages, where they obtain small but consistent improvements. Furthermore, Muñoz-Ortiz et al. (2022) showed that the efficacy of PoS tags in the context of sequence labeling parsing is greatly influenced by the chosen linearization method. However, most of such work has focused on: (i) studying the effect of the universal PoS tags (Zeman et al., 2021), and (ii) its impact on nonperturbed inputs. Yet, NLP models are very sensible and brittle against small attacks, and simple perturbations like misspellings can greatly reduce performance (Ebrahimi et al., 2018; Alzantot et al., 2018). 
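As a concrete preview of the four character-level misspellings used later in §2.1 (drop, swap, add, and QWERTY-adjacent replace), the following is a minimal sketch of such an attack; the neighbour map is truncated and language-specific keyboards are omitted, so it is illustrative only:

```python
import random

# Sketch of the four misspelling operations of §2.1 (truncated QWERTY neighbour map).
NEIGHBOURS = {"a": "qwsz", "e": "wsdr", "o": "iklp", "s": "aqwedxz"}

def perturb(word: str, rng: random.Random) -> str:
    if len(word) < 3:                      # leave very short words untouched
        return word
    op = rng.choice(["drop", "swap", "add", "replace"])
    i = rng.randrange(len(word) - 1)
    if op == "drop":
        return word[:i] + word[i + 1:]
    if op == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "add":
        return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]
    return word[:i] + rng.choice(NEIGHBOURS.get(word[i], word[i])) + word[i + 1:]

rng = random.Random(0)
print([perturb(w, rng) for w in ["morphological", "parsing", "treebank"]])
```

The full attack in §2.1 additionally targets only content words and allows at most one perturbation per word.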
This has been shown for tasks such as named-entity recognition, question answering, semantic similarity, and sentiment analysis (Moradi and Samwald, 2021). In parallel, defensive strategies have been tested to improve the robustness of NLP systems, e.g., placing a word recognition module before downstream classifiers (Pruthi et al., 2019), or using spelling checks and adversarial training (Li et al., 2019). Yet, as far as we know, no related work has been done on testing perturbed inputs for parsing and the effect, positive or negative, that using morphological information as explicit signals during inference might have in guiding the parsers.1 ## 2 Adversarial Framework Perturbed inputs occur for several reasons, such as for instance on-purpose adversarial attacks (Liang et al., 2018) or, more likely, unintended mistakes made by human writers. In any case, they have an undesirable effect on NLP tools, including parsers. Our goal is to test if under such adversarial setups, coarse- and fine-grained morphological tags: (i) could help obtaining more robust and better results in comparison to word-only parsers (going against the current trend of removing any explicit linguistic input from parsers); or (ii) if on the contrary they contribute to degrade parsing performance. Below, we describe both how we generate (i, §2.1) linguistically-inspired attacks at characterlevel, and (ii, §2.2) the tested parsers. ## 2.1 Perturbed Inputs To perturb our inputs, we use a combination of four adversarial misspellings, inspired by Pruthi et al. (2019) who designed their method relying on previous psycholinguistic studies (Davis, 2003; Rawlinson, 1976). In particular, we consider to: (i) drop one character, (ii) swap two contiguous characters, (iii) add one character, and (iv) replace a character with an adjacent character in a QWERTY keyboard. These changes will probably transform most words into out-of-vocabulary terms, although some perturbations could generate valid tokens (likely occurring in an invalid context). We only apply perturbations to a fraction of the content words of a sentence2(details in §3), as function words tend to be shorter and a perturbation could make them unrecognizable, which is not our aim. Finally, we only allow a word to suffer a single attack. Since we will be evaluating on a multilingual setup, we considered language-specific keyboards to generate the perturbations. We restrict our analysis to languages that use the Latin alphabet, but our adversarial attack would be, in principle, applicable to any alphabetic script. ## 2.2 Parsing Models Since we want a thorough picture of the impact of using morphological information on parsers, we include three models from different paradigms: 1. A left-to-right transition-based parser with pointer networks (Fernández-González and 2Those which universal PoS tags is ADJ, ADV, INTJ, PROPN, NOUN or VERB. Gómez-Rodríguez, 2019). It uses biLSTMs (Hochreiter and Schmidhuber, 1997) to contextualize the words, and the outputs are then fed to a pointer network (Vinyals et al., 2015), which keeps a stack and, in a left-to-right fashion, decides for each token its head. 2. A biaffine graph-based parser (Dozat et al., 2017). This model also uses biLSTMs to first contextualize the input sentence. Differently from Fernández-González and GómezRodríguez, the tree is predicted through a biaffine attention module, and to ensure wellformed trees it uses either the Eisner (1996) or Chu (1965); Edmonds (1968) algorithms.3 3. 
A sequence labeling parser (Strzyz et al., 2020) that uses a 2-planar bracketing encoding to linearize the trees. Like the two other parsers, it uses biLSTMs to contextualize sentences, but it does not use any mechanism on top of their outputs (such as biaffine attention or a decoder module) to predict the tree (which is rebuilt from a sequence of labels). Particularly, we use this third model to: (i) estimate how sensitive raw biLSTMs are to attacks, (ii) compare their behavior against the transitionand graph-based models and the extra mechanisms that they incorporate, (iii) and verify if such mechanisms play a role against perturbed inputs. Inputs We concatenate a word vector, a second word vector computed at character level, and (optionally) a morphological vector. This is the preferred input setup of previous work on PoS tagging plus its utility for neural UD parsing (de Lhoneux et al., 2017; Anderson and GómezRodríguez, 2021).4 Note that character-level vectors should be robust against our attacks, but it is known that in practice they are fragile (Pruthi et al., 2019). In this respect, our models use techniques to strengthen their behaviour against word variation, by using character-level dropout. This way, we inject noise during training and give all our models a lexical-level defensive mechanism to deal with misspellings. We kept this feature to keep the setup realistic, as character-level dropout is implemented 3This is true for the supar implementation that we use, although Dozat et al. relied on heuristics. 4Some authors (Zhou et al., 2020) exploit PoS tags for parsing in a multi-task learning setup instead, but the differences in the experiments are small (∼0.3 points) and they are limited to English and Chinese on non-UD treebanks. by default in most of modern parsers, and ensure stronger baselines. Training and hyperparameters We use nonperturbed training and development sets,5since our aim is to see how parsers trained in a standard way (and that may use explicit morphological features) behave in production under adversarial attacks. Alternatively, we could design additional techniques to protect the parsers against such perturbations, but this is out of the scope of this paper (and for standard defensive strategies, we already have character-level dropout). For all parsers, we use the default configuration specified in the corresponding repositories. We use 2 GeForce RTX 3090 for training the models for around 120 hours. Morphological tags To predict them, we use a sequence labeling model with the same architecture than the one used for the sequence labeling parser. We use as input a concatenation of a word embedding and a character-level LSTM vector. ## 3 Experiments We Now Describe Our Experimental Setup: Data We selected 14 UD treebanks (Zeman et al., 2021) that use the Latin alphabet and are annotated with universal PoS tags (UPOS), languagespecific PoS tags (XPOS), and morphological feats (FEATS). It is a diverse sample that considers different language families and amounts of data, whose details are shown in Table 1. For the pre-trained word vectors, we rely on Bojanowski et al. (2017).6 Also, note that we only perturb the test inputs. Thus, when the input is highly perturbed, the model will mostly depend on the character representations, and if used, the morphological tags fed to it. Treebank # Sent. 
Family #UPOS #XPOS #FEATS ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png) ![2_image_2.png](2_image_2.png) ![2_image_3.png](2_image_3.png) ![2_image_4.png](2_image_4.png) AfrikaansAfriBooms 1 315 Germanic (IE) 16 95 55 BasqueBDT 5 396 Basque 16 - 573 EnglishEWT 12 543 Germanic (IE) 18 51 153 FinnishTDT 12 217 Uralic 16 14 1 786 GermanGSD 13 814 Germanic (IE) 17 52 458 HungarianSzeged 449 Uralic 16 - 384 IndonesianGSD 4 477 Austronesian 18 45 48 IrishIDT 4 005 Celtic (IE) 17 72 653 LithuanianHSE 153 Baltic (IE) 16 30 215 MalteseMUDT 1 123 Afro-Asiatic 17 47 - PolishLFG 13 774 Slavic (IE) 15 623 1 037 SpanishAnCora 14 305 Latin (IE) 18 318 243 SwedishLinES 3 176 Germanic (IE) 17 214 171 TurkishPenn 14 851 Turkic 15 - 490 how the magnitude of the attacks affects the results. For each targeted word, one of the four proposed perturbations is applied randomly. To control for randomness, each model is tested against 10 perturbed test sets with the same level of perturbation. To check that the scores were similar across runs, we computed the average scores and the standard deviation (most of them exhibiting low values). Setup For each parser we trained four models: a word-only (word) baseline where the input is just the concatenation of a pre-trained word vector and a character-level vector, and *three* extra models that use universal PoS tags (word+UPOS), language-specific PoS tags (word+XPOS), or feats (word+FEATS). For parsing evaluation, we use labeled attachment scores (LAS). For the taggers, we report accuracy. We evaluate the models on two setups regarding the prediction of morphological tags: (i) tags predicted on the same perturbed inputs as the dependency tree, and (ii) tags predicted on non-perturbed inputs. Specifically, the aim of setup ii is to simulate the impact of using a tagger that is very robust against lexical perturbations. ## 3.1 Results Tables 2 and 3 show the average LAS results across all treebanks and models for tags predicted on perturbed and non-perturbed inputs, respectively. Figures 1, 2, and 3 display the mean LAS difference between the word and the other model configurations, using tags predicted on both perturbed and non-perturbed inputs for each parser. ## 3.1.1 Results Using Morphological Tags Predicted On Perturbed Inputs Figure ??.a shows the score differences for the transition-based parsers. 
The average difference between the baseline and all the models using morphological tags becomes more negative as the per- % Perturbed Transition-based Graph-based Sequence labeling Tagger accuracy ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) ![3_image_2.png](3_image_2.png) ![3_image_3.png](3_image_3.png) ![3_image_4.png](3_image_4.png) ![3_image_6.png](3_image_6.png) word UPOS XPOS FEATS word UPOS XPOS FEATS word UPOS XPOS FEATS UPOS XPOS FEATS 0 75.66 74.93 76.28 74.84 79.35 77.44 78.38 77.28 68.29 68.98 70.96 66.79 89.76 87.80 83.38 10 74.93 73.68 75.07 73.53 78.59 75.69 76.77 75.49 66.71 67.31 69.34 64.97 88.56 86.17 81.68 20 74.11 72.45 73.92 72.13 77.81 73.93 75320 73.73 65.18 65.61 67.76 63.16 87.38 84.59 79.94 30 73.33 71.19 72.66 70.74 76.99 72.22 73.56 71.92 63.62 63.96 66.17 61.37 86.17 82.91 78.22 40 72.52 69.86 71.45 69.33 76.10 70.36 71.88 70.06 62.09 62.24 64.59 59.55 84.93 81.30 76.50 50 71.66 68.58 70.13 67.93 75.27 68.63 70.14 68.09 60.52 60.50 62.94 57.81 83.71 79.61 74.68 60 70.78 67.26 68.75 66.46 74.37 66.72 68.37 66.09 58.94 58.91 61.36 56.10 82.48 77.90 72.92 70 69.87 65.88 67.40 64.92 73.49 64.96 66.64 66.06 57.44 57.24 59.77 54.36 81.19 76.13 71.13 80 68.96 64.50 66.03 63.46 72.48 63.05 64.80 62.27 55.90 55.61 58.17 52.65 79.93 74.42 69.37 90 67.99 63.12 64.61 61.90 71.57 61.12 62.97 60.16 54.42 53.95 56.54 50.96 78.62 72.64 67.56 100 67.04 61.74 63.16 60.34 70.59 59.23 61.14 58.13 52.92 52.30 54.97 49.23 77.30 70.85 65.74 % Perturbed Transition-based Graph-based Sequence labeling ![3_image_5.png](3_image_5.png) ![3_image_7.png](3_image_7.png) ![3_image_8.png](3_image_8.png) ![3_image_9.png](3_image_9.png) ![3_image_10.png](3_image_10.png) ![3_image_11.png](3_image_11.png) ![3_image_12.png](3_image_12.png) ![3_image_13.png](3_image_13.png) ![3_image_14.png](3_image_14.png) word UPOS XPOS FEATS word UPOS XPOS FEATS word UPOS XPOS 0 75.66 74.93 76.28 74.84 79.35 77.44 78.38 77.28 68.29 68.98 70.96 66.79 10 74.93 74.64 76.05 74.55 78.59 76.91 78.01 76.78 66.71 68.60 70.53 66.19 20 74.11 74.36 75.82 74.23 77.81 76.46 77.58 73.62 65.18 68.19 70.08 65.62 30 73.33 74.02 75.60 73.94 76.99 75.88 77.20 75.82 63.62 67.76 69.62 64.99 40 72.52 73.71 75.36 73.66 76.10 75.44 76.78 75.27 62.09 67.34 69.13 64.46 50 71.66 73.41 75.17 73.35 75.27 74.94 76.42 74.80 60.52 66.88 68.66 63.79 60 70.78 73.06 74.87 73.04 74.37 74.46 76.02 74.25 58.94 66.40 68.19 63.18 70 69.87 72.74 74.64 72.70 73.49 73.99 75.53 73.76 57.44 65.95 67.72 62.56 80 69.86 72.39 74.40 72.37 72.48 73.46 75.13 73.26 55.90 65.45 67.23 61.92 90 67.99 72.08 74.13 72.10 71.57 72.92 74.46 72.73 54.42 64.93 66.75 61.27 100 67.04 71.73 73.93 71.74 70.59 72.45 74.35 72.15 52.92 64.41 66.27 60.63 ![3_image_16.png](3_image_16.png) (a) Perturbed (b) Non-perturbed ![3_image_15.png](3_image_15.png) ![3_image_17.png](3_image_17.png) (a) Perturbed (b) Non-perturbed centage of perturbed words increases. Such difference is only positive for word+XPOS when none or a few percentage of words are perturbed. All morphological tags show a similar tendency, word+FEATS degrading the performance the most, followed by the 'coarse-grained' word+UPOS. Figure 2.a shows the results for the graph-based parsers. Again, most morphological inputs contribute to degrade the performance faster than the baseline. In this case, no model beat the baseline when predicting tags on the perturbed inputs. 
The performance of word+FEATS and word+UPOS is similar (performing word+UPOS a bit better), and the word+XPOS models improve the performance the most. Figure 3.a shows the results for the sequence labeling parsers: differences between the baseline and the models utilizing morphological information exhibit minor changes ranging from 0% to 100% of perturbed words. Also, the usefulness of the morphological information depends on the specific tags selected. While word+UPOS obtains similar results to the baseline, word+XPOS scores around 2-3 points higher for the tested percentages of pertur- ![4_image_0.png](4_image_0.png) bations, and word+FEATS harms the performance in a range between 1 and 4 points. The results show that feeding morphological tags to both graph- and transition-based parsers has a negative impact to counteract such attacks, degrading their performance faster. On the contrary, the sequence labeling parsers, that rely on biLSTMs to make the predictions, can still benefit from them. In addition, the different trends for the sequence labeling parser *versus* the transition- and graphbased parsers, which additionally include a module to output trees (a pointer network and a biaffine attention, respectively), suggest that such modules are likely to be more effective against adversarial attacks than explicit morphological signals. ## 3.1.2 Results Using Morphological Tags Predicted On Non-Perturbed Inputs As mentioned above, we use this setup to estimate whether morphological tags could have a positive impact if they were extremely robust against lexical perturbations (see also Figures 1.b, 2.b and 3.b). In the case of the transition-based parser, we observe that morphological tags predicted on non-perturbed inputs help the parser more as the inputs' perturbation grows, being word+XPOS the most helpful information, while UPOS and FEATS become useful only when sentences are perturbed over 20% (but they also become more and more helpful). The graph-based parser also benefits from the use of more precise tags: word+XPOS models beat the baseline when the perturbation is over 30%; and over 50% for word+UPOS and word+FEATS setups. Finally, for the sequence-labeling parser, morphological information from a robust tagger helps the model surpass the baseline for any percentage of perturbed words (except in the case of word+FEATS, when it only happens with perturbations over 20%). ## 3.1.3 Discussion On Slightly Perturbed Inputs Unintended typos are commonly found among users. For experiments with a small percentage of perturbed words (< 20%), transition-based parsers show improvement solely with the word+XPOS model, even when using non-robust taggers. Conversely, graph-based parsers do not benefit from morphological tags in this setup. Last, sequence labeling parsers benefit from incorporating XPOS and UPOS information, irrespective of the tagger's robustness, but not FEATS. ## 3.1.4 Differences Across Morphological Tags Averaging across languages, the language-specific XPOS tags have a better (or less bad, for setup i) behavior. These tags are specific to each language. The coarse-grained UPOS tags have a common annotation schema and tagset. This eases annotation and understanding, but offer less valuable information. For FEATS, the annotation schema is common, but in this case they might be too sparse. ## 4 Conclusion This paper explored the utility of morphological information to create stronger dependency parsers when these face adversarial attacks at characterlevel. 
Experiments over 14 diverse UD treebanks, with different percentages of perturbed inputs, show that using morphological signals help creating more robust sequence labeling parsers, but contribute to a faster degradation of the performance for transition- and graph-based parsers, in comparison to the corresponding word-only models. ## Acknowledgements This paper has received funding from grant SCANNER-UDC (PID2020-113230RB-C21) funded by MCIN/AEI/10.13039/501100011033, the European Research Council (ERC), which has supported this research under the European Union's Horizon Europe research and innovation programme (SALSA, grant agreement No 101100615), Xunta de Galicia (ED431C 2020/11), and Centro de Investigación de Galicia "CITIC", funded by Xunta de Galicia and the European Union (ERDF - Galicia 2014-2020 Program), by grant ED431G 2019/01. ## Limitations Main limitation 1 The experiments of this paper are only done in 14 languages that use the Latin alphabet, and with a high share of Indo-European languages, with up to 4 Germanic languages. This is due to two reasons: (i) the scarcity of XPOS and FEATS annotations in treebanks from other language families, and (ii) the research team involved in this work did not have access to proficient speakers of languages that use other alphabets. Hence, although we created a reasonable diverse sample of treebanks, this is not representative of all human languages. Main limitation 2 Although we follow previous work to automatically generate perturbations at character-level, and these are inspired in psycholinguistic studies, they might not be coherent with the type of mistakes that a human will make. In this work, generating human errors is not feasible due to the amount of languages involved, and the economic costs of such manual labour. Still, we think the proposed perturbations serve the main purpose: to study how morphological tags can help parsers when these face lexical errors, while the used method builds on top of most of previous work on adversarial attacks at character-level. ## References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890–2896, Brussels, Belgium. Association for Computational Linguistics. Mark Anderson, Mathieu Dehouck, and Carlos GómezRodríguez. 2021. A falta de pan, buenas son tortas: The efficacy of predicted UPOS tags for low resource UD parsing. In *Proceedings of the 17th International* Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), pages 78–83, Online. Association for Computational Linguistics. Mark Anderson and Carlos Gómez-Rodríguez. 2021. What taggers fail to learn, parsers need the most. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 309–314, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden. Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with LSTMs. In Pro- ceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 349–359, Lisbon, Portugal. Association for Computational Linguistics. Miguel Ballesteros and Joakim Nivre. 2012. MaltOptimizer: A system for MaltParser optimization. 
In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2757–2763, Istanbul, Turkey. European Language Resources Association (ELRA). Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146. Yoeng-Jin Chu. 1965. On the shortest arborescence of a directed graph. *Scientia Sinica*, 14:1396–1400. Matt Davis. 2003. Psycholinguistic evidence on scrambled letters in reading. Miryam de Lhoneux, Yan Shao, Ali Basirat, Eliyahu Kiperwasser, Sara Stymne, Yoav Goldberg, and Joakim Nivre. 2017. From raw text to Universal Dependencies - look, no tags! In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 207–217, Vancouver, Canada. Association for Computational Linguistics. Mathieu Dehouck and Pascal Denis. 2018. A framework for understanding the role of morphology in Universal Dependency parsing. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, pages 2864–2870, Brussels, Belgium. Association for Computational Linguistics. Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20–30, Vancouver, Canada. Association for Computational Linguistics. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31–36, Melbourne, Australia. Association for Computational Linguistics. Jack Edmonds. 1968. Optimum branchings. *Mathematics and the Decision Sciences, Part*, 1(335-345):25. Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics. Daniel Fernández-González and Carlos GómezRodríguez. 2019. Left-to-right dependency parsing with pointer networks. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 710–716, Minneapolis, Minnesota. Association for Computational Linguistics. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735– 1780. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313– 327. Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-word applications. In *Network and Distributed Systems Security (NDSS) Symposium 2019*. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classification can be fooled. In *Proceedings of the 27th* International Joint Conference on Artificial Intelligence, IJCAI'18, page 4208–4215. AAAI Press. Milad Moradi and Matthias Samwald. 2021. Evaluating the robustness of neural language models to input perturbations. 
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 1558–1570, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Alberto Muñoz-Ortiz, Mark Anderson, David Vilares, and Carlos Gómez-Rodríguez. 2022. Parsing linearizations appreciate PoS tags - but some are fussy about errors. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 117–127, Online only. Association for Computational Linguistics. Danish Pruthi, Bhuwan Dhingra, and Zachary C. Lipton. 2019. Combating adversarial misspellings with robust word recognition. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5582–5591, Florence, Italy. Association for Computational Linguistics. Graham Ernest Rawlinson. 1976. *The significance of* letter position in word recognition. Ph.D. thesis, University of Nottingham. Aaron Smith, Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2018. An investigation of the interactions between pre-trained word embeddings, character models and POS tags in dependency parsing. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2711–2720, Brussels, Belgium. Association for Computational Linguistics. Michalina Strzyz, David Vilares, and Carlos GómezRodríguez. 2020. Bracketing encodings for 2-planar dependency parsing. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 2472–2484, Barcelona, Spain (Online). International Committee on Computational Linguistics. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. *Advances in neural information processing systems*, 28. Daniel Zeman, Joakim Nivre, Mitchell Abrams, Elia Ackermann, Noëmi Aepli, Hamid Aghaei, Željko Agic, Amir Ahmadi, Lars Ahrenberg, Chika Kennedy ´ Ajede, Gabriele Aleksandravi ˙ ciˇ ut¯ e, Ika Alfina, Lene ˙ Antonsen, Katya Aplonova, Angelina Aquino, Carolina Aragon, Maria Jesus Aranzabe, Bilge Nas Arıcan, Hórunn Arnardóttir, Gashaw Arutie, Jessica Naraiswari Arwidarasti, Masayuki Asahara, Deniz Baran Aslan, Luma Ateyah, Furkan Atmaca, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Keerthana Balasubramani, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, Starkaður Barkarson, Rodolfo Basile, Victoria Basmov, Colin Batchelor, John Bauer, Seyyit Talha Bedir, Kepa Bengoetxea, Gözde Berk, Yevgeni Berzak, Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Agne Bielinskien ˙ e, Kristín Bjarnadóttir, ˙ Rogier Blokland, Victoria Bobicev, Loïc Boizou, Emanuel Borges Völker, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Anouck Braggaar, Kristina Brokaite, Aljoscha Bur- ˙ chardt, Marie Candito, Bernard Caron, Gauthier Caron, Lauren Cassidy, Tatiana Cavalcanti, Gül¸sen Cebiroglu Eryi ˘ git, Flavio Massimiliano Cecchini, ˘ Giuseppe G. A. Celano, Slavomír Céplö, Nesli- ˇ han Cesur, Savas Cetin, Özlem Çetinoglu, Fabri- ˘ cio Chalub, Shweta Chauhan, Ethan Chi, Taishi Chika, Yongseok Cho, Jinho Choi, Jayeol Chun, Juyeon Chung, Alessandra T. 
Cignarella, Silvie Cinková, Aurélie Collomb, Çagrı Çöltekin, Miriam ˘ Connor, Marine Courtin, Mihaela Cristescu, Philemon Daniel, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Mehmet Oguz Derin, Elvis de Souza, Arantza Diaz de Ilarraza, Carly Dickerson, Arawinda Dinakaramani, Elisa Di Nuovo, Bamba Dione, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Hanne Eckhoff, Sandra Eiche, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Olga Erina, Tomaž Erjavec, Aline Etienne, Wograine Evelyn, Sidney Facundes, Richárd Farkas, Jannatul Ferdaousi, Marília Fernanda, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Kazunori Fujita, Katarína Gajdošová, Daniel Galbraith, Marcos Garcia, Moa Gärdenfors, Sebastian Garza, Fabrício Ferraz Gerardi, Kim Gerdes, Filip Ginter, Gustavo Godoy, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta González Saavedra, Bernadeta Griciut¯ e,˙ Matias Grioni, Loïc Grobol, Normunds Gruz¯ ¯ıtis, Bruno Guillaume, Céline Guillot-Barbance, Tunga Güngör, Nizar Habash, Hinrik Hafsteinsson, Jan Hajic, Jan Haji ˇ c jr., Mika Hämäläinen, Linh Hà M ˇ y, ˜ Na-Rae Han, Muhammad Yudistira Hanifmuti, Sam Hardwick, Kim Harris, Dag Haug, Johannes Heinecke, Oliver Hellwig, Felix Hennig, Barbora Hladká, Jaroslava Hlavácová, Florinel Hociung, Petter Hohle, ˇ Eva Huber, Jena Hwang, Takumi Ikeda, Anton Karl Ingason, Radu Ion, Elena Irimia, O.lájídé Ishola, Kaoru Ito, Siratun Jannat, Tomáš Jelínek, Apoorva Jha, Anders Johannsen, Hildur Jónsdóttir, Fredrik Jørgensen, Markus Juutinen, Sarveswaran K, Hüner Ka¸sıkara, Andre Kaasen, Nadezhda Kabaeva, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Neslihan Kara, Boris Katz, Tolga Kayadelen, Jessica Kenney, Václava Kettnerová, Jesse Kirchner, Elena Klementieva, Elena Klyachko, Arne Köhn, Abdullatif Köksal, Kamil Kopacewicz, Timo Korkiakangas, Mehmet Köse, Natalia Kotsyba, Jolanta Kovalevskaite, Simon Krek, Parameswari Krishna- ˙ murthy, Sandra Kübler, Oguzhan Kuyrukçu, Aslı ˘ Kuzgun, Sookyoung Kwak, Veronika Laippala, Lucia Lam, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phuong Lê Hông, Alessandro Lenci, Saran Lertpra- ` dit, Herman Leung, Maria Levina, Cheuk Ying Li, Josie Li, Keying Li, Yuan Li, KyungTae Lim, Bruna Lima Padovani, Krister Lindén, Nikola Ljubešic,´ Olga Loginova, Stefano Lusito, Andry Luthfi, Mikko Luukko, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Menel Mahamdi, Jean Maillard, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Bü¸sra Mar¸san, Cat˘ alina M ˘ ar˘ an- ˘ duc, David Marecek, Katrin Marheinecke, Héctor ˇ Martínez Alonso, Lorena Martín-Rodríguez, André Martins, Jan Mašek, Hiroshi Matsuda, Yuji Matsumoto, Alessandro Mazzei, Ryan McDonald, Sarah McGuinness, Gustavo Mendonça, Tatiana Merzhevich, Niko Miekka, Karina Mischenkova, Margarita Misirpashayeva, Anna Missilä, Cat˘ alin ˘ Mititelu, Maria Mitrofan, Yusuke Miyao, AmirHossein Mojiri Foroushani, Judit Molnár, Amirsaeid Moloodi, Simonetta Montemagni, Amir More, Laura Moreno Romero, Giovanni Moretti, Keiko Sophie Mori, Shinsuke Mori, Tomohiko Morioka, Shigeki Moro, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Robert Munro, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Mariam Nakhlé, Juan Ignacio Navarro Horñiacek, Anna Nedoluzhko, Gunta Nešpore-Berzkalne, Manuela Nevaci, Luong Nguy˜ên Thi ., Huy`ên Nguy˜ên Thi . 
Minh, Yoshihiro Nikaido, Vitaly Nikolaev, Rattima Nitisaroj, Alireza Nourian, Hanna Nurmi, Stina Ojala, Atul Kr. Ojha, Adédayo. Olúòkun, Mai Omura, Emeka Onwuegbuzia, Petya Osenova, Robert Östling, Lilja Øvrelid, ¸Saziye Betül Özate¸s, Merve Özçelik, Arzucan Özgür, Balkız Öztürk Ba¸saran, Hyunji Hayley Park, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme PaulinoPassos, Angelika Peljak-Łapinska, Siyao Peng, ´ Cenel-Augusto Perez, Natalia Perkova, Guy Perrier, Slav Petrov, Daria Petrova, Jason Phelan, Jussi Piitulainen, Tommi A Pirinen, Emily Pitler, Barbara Plank, Thierry Poibeau, Larisa Ponomareva, Martin Popel, Lauma Pretkalnin, a, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Peng Qi, Andriela Rääbis, Alexandre Rademaker, Mizanur Rahoman, Taraka Rama, Loganathan Ramasamy, Carlos Ramisch, Fam Rashel, Mohammad Sadegh Rasooli, Vinit Ravishankar, Livy Real, Petru Rebeja, Siva Reddy, Mathilde Regnault, Georg Rehm, Ivan Riabov, Michael Rießler, Erika Rimkute, Larissa Ri- ˙ naldi, Laura Rituma, Putri Rizqiyah, Luisa Rocha, Eiríkur Rögnvaldsson, Mykhailo Romanenko, Rudolf Rosa, Valentin Ros, ca, Davide Rovati, Olga Rudina, Jack Rueter, Kristján Rúnarsson, Shoval Sadde, Pegah Safari, Benoît Sagot, Aleksi Sahala, Shadi Saleh, Alessio Salomoni, Tanja Samardžic, Stephanie ´ Samson, Manuela Sanguinetti, Ezgi Sanıyar, Dage Särg, Baiba Saul¯ıte, Yanin Sawanakunanon, Shefali Saxena, Kevin Scannell, Salvatore Scarlata, Nathan Schneider, Sebastian Schuster, Lane Schwartz, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Syeda Shahzadi, Mo Shen, Atsuko Shimada, Hiroyuki Shirasu, Yana Shishkina, Muh Shohibussirri, Dmitry Sichinava, Janine Siewert, Einar Freyr Sigurðsson, Aline Silveira, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, Mária Šimková, Kiril Simov, Maria Skachedubova, Aaron Smith, Isabela Soares-Bastos, Shafi Sourov, Carolyn Spadine, Rachele Sprugnoli, Steinhór Steingrímsson, Antonio Stella, Milan Straka, Emmett Strickland, Jana Strnadová, Alane Suhr, Yogi Lesmana Sulestio, Umut Sulubacak, Shingo Suzuki, Zsolt Szántó, Chihiro Taguchi, Dima Taji, Yuta Takahashi, Fabio Tamburini, Mary Ann C. Tan, Takaaki Tanaka, Dipta Tanaya, Samson Tella, Isabelle Tellier, Marinella Testori, Guillaume Thomas, Liisi Torga, Marsida Toska, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Utku Türk, Francis Tyers, Sumire Uematsu, Roman Untilov, Zdenka Urešová, Larraitz Uria, Hans ˇ Uszkoreit, Andrius Utka, Sowmya Vajjala, Rob van der Goot, Martine Vanhove, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Natalia Vlasova, Aya Wakasa, Joel C. Wallenberg, Lars Wallin, Abigail Walsh, Jing Xian Wang, Jonathan North Washington, Maximilan Wendt, Paul Widmer, Sri Hartati Wijono, Seyi Williams, Mats Wirén, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wróblewska, Mary Yako, Kayo Yamashita, Naoki Yamazaki, Chunxiao Yan, Koichi Yasuoka, Marat M. Yavrumyan, Arife Betül Yenice, Olcay Taner Yıldız, Zhuoran Yu, Arlisa Yuliawati, Zdenek Žabokrtský, ˇ Shorouq Zahra, Amir Zeldes, He Zhou, Hanzhi Zhu, Anna Zhuravleva, and Rayan Ziane. 2021. Universal dependencies 2.9. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University. Houquan Zhou, Yu Zhang, Zhenghua Li, and Min Zhang. 2020. Is pos tagging necessary or even helpful for neural dependency parsing? 
In *CCF International Conference on Natural Language Processing* and Chinese Computing, pages 179–191. Springer. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 5 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3. Experiments ✓ B1. Did you cite the creators of artifacts you used? 3. Experiments B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3. Experiments ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3. Experiments ## C ✓ **Did You Run Computational Experiments?** 2. Adversarial Framework ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 2. Adversarial Framework The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3. Experiments ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 3. Experiments C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
paz-argaman-etal-2023-hegel
HeGeL: A Novel Dataset for Geo-Location from Hebrew Text
https://aclanthology.org/2023.findings-acl.460
The task of textual geolocation - retrieving the coordinates of a place based on a free-form language description - calls for not only grounding but also natural language understanding and geospatial reasoning. Even though there are quite a few datasets in English used for geolocation, they are currently based on open-source data (Wikipedia and Twitter), where the location of the described place is mostly implicit, such that the location retrieval resolution is limited. Furthermore, there are no datasets available for addressing the problem of textual geolocation in morphologically rich and resource-poor languages, such as Hebrew. In this paper, we present the Hebrew Geo-Location (HeGeL) corpus, designed to collect literal place descriptions and analyze lingual geospatial reasoning. We crowdsourced 5,649 literal Hebrew place descriptions of various place types in three cities in Israel. Qualitative and empirical analysis show that the data exhibits abundant use of geospatial reasoning and requires a novel environmental representation.
## Hegel: A Novel Dataset For Geo-Location From Hebrew Text Tzuf Paz-Argaman*,a, Tal Bauman*,b, Itai Mondshinea, Itzhak Omerc, Sagi Dalyotb, and Reut Tsarfatya aBar-Ilan University, Israel, bThe Technion, Israel, cTel Aviv University, Israel, {tzuf.paz-argaman, mondshi1, reut.tsarfaty}@biu.ac.il, talbauman@campus.technion.ac.il, omery@tauex.tau.ac.il, dalyot@technion.ac.il ## Abstract The task of textual geolocation - retrieving the coordinates of a place based on a free-form language description - calls for not only grounding but also natural language understanding and geospatial reasoning. Even though there are quite a few datasets in English used for geolocation, they are currently based on open-source data (Wikipedia and Twitter), where the location of the described place is mostly implicit, such that the location retrieval resolution is limited. Furthermore, there are no datasets available for addressing the problem of textual geolocation in morphologically rich and resourcepoor languages, such as Hebrew. In this paper, we present the Hebrew Geo-Location (HeGeL) corpus, designed to collect literal place descriptions and analyze lingual geospatial reasoning. We crowdsourced 5,649 literal Hebrew place descriptions of various place types in three cities in Israel. Qualitative and empirical analysis show that the data exhibits abundant use of geospatial reasoning and requires a novel environmental representation.1 ## 1 Introduction And Background Textual Geolocation Identification, a crucial component of Geographic Information Retrieval (GIR), is the task of resolving the location, i.e., coordinates of a place, based on the reference to it in a text. It requires a combination of language and environmental knowledge. On top of the usual non-spatial linguistic challenges in Natural Language Understanding (NLU), such as named entity recognition (NER), anaphora resolution, bridging anaphora, etc., the textual geolocation task presents geospatial challenges that require multimodal processing and grounding (Ji et al., 2022; Fried et al., 2022; Misra et al., 2017; Qi et al., 2020; Paz-Argaman et al., 2020). Proper names, such as 'Rabin Square', also known as *named entities* in Natural Language Procesing (NLP), and as *rigid designators* in formal semantics (Kripke, 1972), can be easily grounded based on a Gazetteer or a simple map. However, geolocating linguistic terms that involve spatial expressions without the explicit mention of a proper name still present an open challenge. This interpretation challenge includes the understanding and resolution of (at least): (i) definite descriptions, such as 'the school' (ii) geospatial terms, such as cardinal directions; 'east of'; and (iii) geospatial numerical reasoning; 'two buildings away from the pharmacy'. To address these and other challenges, we need to both ground entity mentions to their corresponding physical entities in the environment, and to reason about geospatial relations expressed between entities - these two processes being closely intertwined. To do so, we need a corpus for the geolocation task that maps rich geospatial place descriptions to their corresponding location coordinates. 
However, current corpora for geolocation are based on naturally-occurring open-source resources, such as Wikipedia articles (Eisenstein et al., 2010; Wing and Baldridge, 2011; Han et al., 2012; Wing and Baldridge, 2014; Wallgrün et al., 2018), which are not spatially oriented, i.e., the description of locations is implicit or absent in the corresponding text. Subsequently, the accuracy of retrieval is fairly low (around 100 km). Furthermore, all geolocation datasets previously studied in NLP are in English, with a dearth of corpora for low-resource languages, in particular, for morphologically rich languages, such as Hebrew. To understand the geolocation challenges and build models that do various spatial reasoning tasks, English cannot be our sole focus (Baldridge et al., 2018). Hebrew, a Semitic morphologically rich language is notoriously difficult to parse (Tsarfaty et al., 2020, 2019). Moreover, resources that are ![1_image_0.png](1_image_0.png) available for Hebrew NLP research focus on traditional tasks, such as Part-of-speech (POS) tagging, syntactic parsing, etc; and lack corpora for understanding and reasoning in real-world situations. In this work we present HeGeL, a novel dataset for Hebrew Geo-Location, the first ever Hebrew NLU benchmark involving both grounding and geospatial reasoning. To create HeGeL, we crowdsourced 5,649 geospatially-oriented Hebrew place descriptions of various place types from three cities in Israel. We designed our task based on a realistic scenario of human place description, relying on people's memory of the world, rather than, e.g., using a map (Anderson et al., 1991; Paz-Argaman and Tsarfaty, 2019). Crucially, relying on environmental cognition results in various levels of geospatial knowledge (Siegel and White, 1975) that are manifested in the descriptions and the geospatial reasoning that is required to resolve their location (Hayward and Tarr, 1995). To avoid the much simpler task of grounding proper named entities, we explicitly restricted the use of proper names in the description of the place and adjacent landmarks. Unlike the text-based navigation task (MacMahon et al., 2006; Chen et al., 2019; Ku et al., 2020; De Vries et al., 2018; Thomason et al., 2020), which requires representing an agent's current perspective, reflecting its route knowledge, we show that the HeGeL task requires a full-environment representation, thus, capturing complex geospatial relations among multiple physical entities. Through a thorough linguistic and empirical analysis, we demonstrate the characteristics and challenges associated with Hebrew place descriptions, showing that HeGeL serves both as a challenging NLU benchmark and as a corpus for geospatial cognition research. ## 2 The Hegel Task And Dataset This work addresses the task of geolocating places on a map based on natural language (NL) geospatial descriptions that are given in a colloquial language and based on participants' memory of the environment (i.e., cognitive map). The input to the HeGeL task is as follows: (i) an NL place description of the whereabouts of the place, and (ii) a map with rich details of the environment (e.g., physical entities names, geospatial relations, and attributes). The output is a pair of coordinates (x,y) specifying the physical location of the place described in the text. Figure 1 shows an example of a place description from HeGeL translated from Hebrew. 
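In concrete terms, each HeGeL instance pairs one such free-form description (plus the map of the city) with the gold coordinates of the described place. The sketch below illustrates this input/output contract only; the field and function names are hypothetical and do not reflect the dataset's actual schema or any released code.

```python
# Minimal, hypothetical representation of a HeGeL example and of the task
# interface (description + map representation -> predicted coordinates).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class HeGeLExample:
    description: str                     # free-form Hebrew place description
    city: str                            # "Tel Aviv", "Haifa", or "Jerusalem"
    gold_location: Tuple[float, float]   # coordinates of the described place

def geolocate(description: str, environment) -> Tuple[float, float]:
    """Map a place description and a map/graph representation of the city
    (`environment` stands in for the OSM-based map the task provides) to
    predicted coordinates; concrete models are discussed in Section 4."""
    raise NotImplementedError
```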
To simplify the crowdsourcing task and encourage participants' engagement, we frame the data crowdsourcing process as the well-known game, the *treasure hunt* task (Kniestedt et al., 2022), in which the *instructor-participant* is required to describe in writing the location of the treasure, a known place in the city, to a different *followerparticipant* who then needs to locate it on a map. Thus, the online assignment is divided into two tasks: the instructor's *writing of place descriptions* and the follower's *validation*. To avoid preconceived notions as to the 'correct' way to describe a place, we first presented the participants with the task of writing a place description, and once completed, the validation task was given.2 2Appendix A includes additional data collection details. We hereby provide the details of the two UI tasks: (i) Task 1. Writing a place description In this task we requested participants to describe in a freeform text the location of a place known to them, to a third party who might not be familiar with the whereabouts of that place. To collect place descriptions based solely on people's memory, we did not visualize the area of the place, e.g., on a map. Instead, we ensured that the participants are well familiarized with the place by asking them to state how familiar they are with the place on a scale of 1-5. If this score was 1 or 2, we presented the participant with a different place to describe. To ensure diverse human-generated textual descriptions, places were chosen based on their type, position/location in the city (places were spread across the city), geometry, size, and context. To avoid the use of proper names, we developed a rule-based methodology to make sure that the explicit name of the goal (place) or of the nearby landmarks (< 100 meters) will not appear explicitly in the description. The original description was saved, and the participants were asked to input another description without the above names. (i) Task 2. Place description validation To verify that a person who reads the text description will understand where the treasure is hidden, i.e., geolocate the place, we developed a map-based retrieval task. The participant in the follower role was asked to read the crowdsourced textual description and mark its location on the map, i.e., where the treasure is hidden. For marking the location, we implemented an interactive online map based on OpenStreetMap (OSM), 3 which allows the participants to move and zoom-in to precisely pin the described place on the map. The map supports the cognitive process needed to ground mentioned entities to physical entities, reason about the geospatial relations, and locate the described place. To familiarize participants with the interactive map tool and task, they had to first pass a simple map marking test, and only then they could start task 2 of reading place descriptions (given by other participants), marking place locations on the map, and rate the clarity of the textual description on a scale of 1-5. Target Selection and Retrieval Errors The treasure-hunt task we devised included 167 places in the three largest cities in Israel: Tel Aviv, Haifa, and Jerusalem. These three cities are differently shaped, and show different physical, morphological and topographic features, which potentially affect the legibility and imageability of urban components, and therefore also on place descriptions. 
These differences can be expressed in the use of various physical features and prepositions, e.g., frequent use of the physical object 'landmark' and the prepositions 'above' or 'below' in hilly terrains that characterize Haifa and Jerusalem. To assess the quality and interpretability of the place descriptions, we calculate the shortest Euclidean distance between the coordinates of the goal's (physical element) shape (polygon, line or point), and the location marked by the 'follower' on the map (task 2); we term this distance as *retrieval* error. To determine the agreement rate among human participants, each textual place description is validated by at least two participants. To ensure that we work with descriptions that can be geolocated, we set a hard distance threshold of 300 meters, based on analysis of the descriptions' clarity score that we had conducted on a prior (held-out) development corpus we collected for the task. ## 3 Data Statistics And Analysis The resulting HeGeL dataset contains 5,649 validated descriptions paired with their coordinates on a map. The locations are divided among three cities: 2,142 in Tel Aviv, 1,442 in Haifa, and 2,065 in Jerusalem. 1,833 participants completed the writing task, inserting in total 10,946 place descriptions, and 2,050 participants completed 12,655 validation tasks. The dataset is balanced, with about 33 descriptions per place. Figure 2 shows a Venn diagram representing the relation of the three sets of city-based vocabularies (formed from unique lemmas produced by More et al. (2019) lemmatization tool). The intersection of the three cities contains only 15.07% of the entire vocabulary (the union of the three cities' vocabularies). The shared language is not focused on city-specific terms, such as 'Knesset'. Instead, it includes rich spatial terms, such as 'between', modified prepositions such as 'next to', and nondefinite entities, such as 'street'. From the Venn diagram we also conclude that almost half of the lemmas of the three vocabularies, corresponding Phenomenon c µ **Example from HeGeL (translated into English)** | Spatial knowledge (Siegel and White, 1975) Type of elements in a city (Lynch, 1960) | |---------------------------------------------------------------------------------------| Edge 36% 0.6 "when reaching Yafo, one should go toward the sea. . . " Node 40% 0.44 ". . . a few minutes walk from the **HaShaon square**. . . " Landmark 60% 1.08 ". . . near **Levinski market**" District 36% 0.4 "**South part of the city** next to. . . " Path 68% 0.76 "On **Carlebach street**. . . " Spatial knowledge (Siegel and White, 1975) Landmarks 32% n/a "Next to the sea in Tel Aviv-Yafo" Route 20% n/a "**Passing Azrieli on Menachem Begin and then turn right. . .** " Survey 48% n/a "**South part of the city near Levinski market**" Reference to unique entity 100% 2.32 ". . . in the middle of **Dizengoff street**" Cardinal direction 44% 0.76 "**South** of Sharona. . . " Coreference 16% 0.16 ". . . continue a bit west and it. . . " Table 1: Linguistic qualitative analysis of 25 randomly sampled descriptions in HeGeL. c is the percentage of ![3_image_0.png](3_image_0.png) descriptions containing at least one example of the phenomenon, and µ is the mean number of times the phenomenon appears in each description. to the three cities, contain city-specific lemmas: 48.6%, 40.65%, and 49.3% for Tel Aviv, Haifa, and Jerusalem, respectively. 
As such, HeGeL enables a city-split setup, training on one city and testing on a different unseen city, where city-reserved named entities present an out-of-vocabulary (OOV) challenge for models trained on another city. Table 1 shows an analysis of the linguistic phenomena manifested in the HeGeL dataset, demonstrating the spatial knowledge and reasoning skills required for solving the HeGeL task. We analyzed the frequency of the five types of elements in a city defined by Lynch (1960), along with the three types of spatial knowledge defined in Siegel and White (1975), and other spatial properties. The frequent use of cardinal directions, as well as the use of sur- Table 2: Quantitative analysis of HeGeL. Table 3: Correlations between place types and linguistic and features. | Feature | Avg. per description | Unique in corpus | |--------------------------|------------------------|--------------------| | Number of lemmas | 12.93 | 6,663 | | Number of tokens | 11.50 | 9,207 | | Number of named entities | 0.55 | 3,490 | | Number of prepositions | 2.39 | 14,256 | | Number of verbs | 0.53 | 3,152 | ![3_image_1.png](3_image_1.png) vey knowledge, suggests that any NLP model built to deal with the HeGeL task should not only represent a local view of the goal, or possible routes, but also take into consideration the full region, and mimic people's map-like view of the environment. Therefore, unlike navigation tasks where only the agent's current perspective is represented in the model, this task requires full representation of the environment. We further perform a quantitative analysis of word tokens and lemmas that appear in HeGeL, depicted in Table 2. Overall, the HeGeL dataset contains a large vocabulary of 9,207 unique tokens and 6,663 unique lemmas. There are mentions of physical entities, but as we limited the mentions of named-entities of the described place and landmarks adjacent to it; these are relatively rare, and are mostly references to prominent city landmarks. Also, as most place descriptions are not route-based descriptions, there are only few verbs used in the descriptions. Prepositions, on the other hand, are abundant. | Feature | p-value FDR corrected p-value | | | |---------------------------------------|---------------------------------|--------|--------| | Number of Words | 0.8306 | 0.8306 | | | Task 1: | Number of named entities 0.0000 | 0.0000 | | | Place description | Number of prepositions | 0.0145 | 0.0217 | | (linguistic features) Number of verbs | 0.3400 | 0.4080 | | | Task 2: | Retrieval error | 0.0000 | 0.0000 | | Human verification Clearness score | 0.0000 | 0.0000 | | In Table 3, using a one-way analysis of variance (ANOVA) test, we found a significantly (p<0.05) different distribution between place type descriptions and the following features: number of named entities, number of verbs, human verification retrieval error, and clarity score. ## 4 Experiments We create a zero-shot (ZS) city-based split, such that we train on one city and test on another. The train, development, and test sets correspond to the descriptions collected in Tel Aviv, Haifa, and Jerusalem, respectively. We evaluate different baseline models for the geolocation task on the HeGeL dataset. We use three evaluation metrics based on retrieval error: mean, median, and task completion (TC) accuracy - the percentage of place descriptions located within the 300 meters threshold. We provide three baselines for the HeGeL task. 
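Before turning to the baselines, the sketch below illustrates how the retrieval error (Section 2) and the three metrics above can be computed. It assumes goal geometries and marked points are expressed in a metric projection (e.g., UTM) so that shapely distances come out in meters; it is not the evaluation code used for the paper.

```python
import statistics
from shapely.geometry import Point, Polygon

def retrieval_error(goal_geometry, marked_xy):
    """Shortest Euclidean distance (in meters, given a metric projection)
    from the follower's marked point to the goal's shape
    (point, line, or polygon)."""
    return goal_geometry.distance(Point(marked_xy))

def evaluate(errors_m, threshold_m=300.0):
    """Aggregate per-description retrieval errors into mean, median, and
    task-completion (TC) accuracy, i.e., the share of descriptions resolved
    within the threshold."""
    return {
        "mean": statistics.mean(errors_m),
        "median": statistics.median(errors_m),
        "tc": 100.0 * sum(e <= threshold_m for e in errors_m) / len(errors_m),
    }

# Toy example: a 40 x 50 m building footprint and a mark 60 m east of it.
goal = Polygon([(0, 0), (0, 50), (40, 50), (40, 0)])
errors = [retrieval_error(goal, (100, 25)), 120.0, 80.0, 2600.0]
print(evaluate(errors))  # {'mean': 715.0, 'median': 100.0, 'tc': 75.0}
```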
We first assess a brute-force NER approach; i.e., we test whether recognizing named entities in the text and retrieving their corresponding coordinates is sufficient for solving the HeGeL task of geolocation. To this end, we used Google Maps API and produced two baseline models: (i) Google Maps API Query - we queried the API with the full raw text descriptions as input, with no prepossessing; and (ii) Oracle NER - we queried all 1-5 n-grams against Google Maps API and retrieved the closest geolocation to the goal. In our second approach, we employ a dualencoder model. One encoder encodes the text using a Hebrew Monolingual pre-trained encoder, AlephBERT (Seker et al., 2022), which produces a 768dimension vector representation of the text. The other encoder processes the environment, which is represented as a graph based on OSM data. Each point of interest in the graph is connected to an S2Cell4, which contains its geometry and is based on S2-geometry. These S2Cells are encoded using a random-walk algorithm to produce a 64dimensional vector for each cell. These vectors are then passed through a linear layer to produce 768-dimensional vectors. We calculate the cosine similarity score between the text and environment vectors and use it to align the respective representations via maximization of the cosine similarity score with a cross-entropy loss over the scores. | Split | Model | Mean | Median | TC | |-----------------------|-------------|-------------|------------|---------| | Google Maps API Query | 2,811 | 849 | 27.66 | | | Oracle NER* | 2,373 | 496 | 37.79 | | | ZS | HUMAN | 553 | 151 | 70.81** | | ZS | 2,727(1684) | 2,612(1930) | 2.37(1.5) | | | FS 20% | 1717(35) | 1583(49) | 3.43(0.09) | | | Dual-encoder | | | | | | FS 80% | 983(23) | 632(13) | 15.7(0.38) | | Performing an ANOVA test, we found a significantly (p<0.05) different distribution between place type descriptions and the retrieval error of the Oracle NER. The mean retrieval error of the Path and Node place types were the lowest in both human verification and Oracle NER. This suggests that both of these place types are easier for humans to geolocate. The results in Table 4 show that our task is not solvable with adequate resolution by the Google Maps API. The human performance provides an upper bound for the HeGeL task performance, while the simple Google Maps API Query provides a lower bound. The Google API model's low performance suggests that NER and the Gazetteerbased methods in and of themselves are insufficient to handle the HeGeL task successfully, and that geospatial reasoning is necessary. The Dualencoder's low performance on the ZS split suggests that OOV is a major challenge. The few-shot (FS) split shows an improvement of the model after finetuning on additional samples from the test-region (FS 20% and 80%). This suggests that a possible solution for the city-split setup might be dataaugmentation via generating grounded descriptions for the tested region - an approach we reserve for future research. ## 5 Conclusion The contribution of this paper is threefold. First, we present the first geolocation benchmark with Hebrew place descriptions. Second, to the best of our knowledge, this is the only *crowdsourced* geolocation dataset, thus, eliciting explicit geospatial descriptions, allowing for better retrieval resolution. Finally, our analysis shows that the dataset presents complex spatial reasoning challenges which require novel environmental model representation. 
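As a rough illustration of the dual-encoder baseline described in Section 4 (and not the authors' implementation), the sketch below aligns 768-dimensional text embeddings with projected S2-cell embeddings via cosine similarity and a cross-entropy loss over the scores. The use of in-batch candidates, the module names, and the batching details are assumptions; in practice a temperature term is often applied to the cosine scores before the softmax.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Simplified sketch: a text encoder (e.g., a Hebrew PLM producing 768-d
    vectors) is aligned with 64-d random-walk S2-cell vectors projected to
    768 dimensions, using cosine similarity and cross-entropy."""
    def __init__(self, text_encoder, cell_dim=64, hidden_dim=768):
        super().__init__()
        self.text_encoder = text_encoder
        self.cell_proj = nn.Linear(cell_dim, hidden_dim)

    def forward(self, text_inputs, cell_vectors):
        t = self.text_encoder(text_inputs)     # (batch, 768)
        c = self.cell_proj(cell_vectors)       # (batch, 768)
        # Cosine similarity between every text and every cell in the batch.
        scores = F.cosine_similarity(t.unsqueeze(1), c.unsqueeze(0), dim=-1)
        targets = torch.arange(scores.size(0), device=scores.device)
        return F.cross_entropy(scores, targets)
```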
## Limitations While we aim for our HeGeL crowdsourcing methodology to be applicable to other languages, and in particular low-resource languages, the UI design and our analyses require knowledge of the intended language, as well as familiarity with the regions where it is spoken. Moreover, as our methodology relies on people's familiarity with the places, it limits the cities chosen for the task and the participants that could take part, restricting the demographics of the participants accordingly. In addition, relying on people's memory of the environment causes many of the descriptions to be too vague for humans to geolocate, thus, many of the descriptions were disqualified during the validation process as they could not have been resolved. The relatively low percentage of place descriptions that were successfully validated, raises the costs of collecting such a dataset. ## Acknowledgements This research is funded by a grant from the European Research Council, ERC-StG grant number 677352, and a grant by the Israeli Ministry of Science and Technology (MOST), grant number 317992, for which we are grateful. ## References Anne H Anderson, Miles Bader, Ellen Gurman Bard, Elizabeth Boyle, Gwyneth Doherty, Simon Garrod, Stephen Isard, Jacqueline Kowtko, Jan McAllister, Jim Miller, et al. 1991. The hcrc map task corpus. Language and speech, 34(4):351–366. Jason Baldridge, Tania Bedrax-Weiss, Daphne Luong, Srini Narayanan, Bo Pang, Fernando Pereira, Radu Soricut, Michael Tseng, and Yuan Zhang. 2018. Points, paths, and playscapes: Large-scale spatial language understanding tasks set in the real world. In Proceedings of the First International Workshop on Spatial Language Understanding, pages 46–52. Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12538–12547. Harm De Vries, Kurt Shuster, Dhruv Batra, Devi Parikh, Jason Weston, and Douwe Kiela. 2018. Talk the walk: Navigating new york city through grounded dialogue. *arXiv preprint arXiv:1807.03367*. Jacob Eisenstein, Brendan O'Connor, Noah A Smith, and Eric Xing. 2010. A latent variable model for geographic lexical variation. In *Proceedings of the* 2010 conference on empirical methods in natural language processing, pages 1277–1287. Daniel Fried, Nicholas Tomlin, Jennifer Hu, Roma Patel, and Aida Nematzadeh. 2022. Pragmatics in grounded language learning: Phenomena, tasks, and modeling approaches. *arXiv preprint arXiv:2211.08371*. Bo Han, Paul Cook, and Timothy Baldwin. 2012. Geolocation prediction in social media data by finding location indicative words. In *Proceedings of COLING 2012*, pages 1045–1062. William G Hayward and Michael J Tarr. 1995. Spatial language and spatial representation. *Cognition*, 55(1):39–84. David Hilbert. 1935. Über die stetige abbildung einer linie auf ein flächenstück. In *Dritter Band: Analysis·* Grundlagen der Mathematik· *Physik Verschiedenes*, pages 1–2. Springer. Anya Ji, Noriyuki Kojima, Noah Rush, Alane Suhr, Wai Keen Vong, Robert D Hawkins, and Yoav Artzi. 2022. Abstract visual reasoning with tangram shapes. arXiv preprint arXiv:2211.16492. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. 
Isabelle Kniestedt, Iulia Lefter, Stephan Lukosch, and Frances M Brazier. 2022. Re-framing engagement for applied games: A conceptual framework. *Entertainment Computing*, 41:100475. Saul A Kripke. 1972. Naming and necessity. In *Semantics of natural language*, pages 253–355. Springer. Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, and Jason Baldridge. 2020. Room-across-room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. *arXiv preprint* arXiv:2010.07954. Kevin Lynch. 1960. The image of the environment. The image of the city, 11:1–13. Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: Connecting language, knowledge, and action in route instructions. Def, 2(6):4. Dipendra Misra, John Langford, and Yoav Artzi. 2017. Mapping instructions and visual observations to actions with reinforcement learning. arXiv preprint arXiv:1704.08795. Amir More, Amit Seker, Victoria Basmova, and Reut Tsarfaty. 2019. Joint transition-based models for morpho-syntactic parsing: Parsing strategies for mrls and a case study from modern hebrew. *Transactions of the Association for Computational Linguistics*, 7:33–48. Tzuf Paz-Argaman, Yuval Atzmon, Gal Chechik, and Reut Tsarfaty. 2020. Zest: Zero-shot learning from text descriptions using textual similarity and visual summarization. *arXiv preprint arXiv:2010.03276*. Tzuf Paz-Argaman and Reut Tsarfaty. 2019. Run through the streets: A new dataset and baseline models for realistic urban navigation. *arXiv preprint* arXiv:1909.08970. Yuankai Qi, Qi Wu, Peter Anderson, Xin Wang, William Yang Wang, Chunhua Shen, and Anton van den Hengel. 2020. Reverie: Remote embodied visual referring expression in real indoor environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9982–9991. Amit Seker, Elron Bandel, Dan Bareket, Idan Brusilovsky, Refael Greenfeld, and Reut Tsarfaty. 2022. Alephbert: Language model pre-training and evaluation from sub-word to sentence level. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 46–56. Alexander W Siegel and Sheldon H White. 1975. The development of spatial representations of large-scale environments. Advances in child development and behavior, 10:9–55. Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2020. Vision-and-dialog navigation. In *Conference on Robot Learning*, pages 394–406. PMLR. Reut Tsarfaty, Dan Bareket, Stav Klein, and Amit Seker. 2020. From spmrl to nmrl: What did we learn (and unlearn) in a decade of parsing morphologically-rich languages (mrls)? *arXiv preprint arXiv:2005.01330*. Reut Tsarfaty, Amit Seker, Shoval Sadde, and Stav Klein. 2019. What's wrong with hebrew nlp? and how to make it right. arXiv preprint arXiv:1908.05453. Jan Oliver Wallgrün, Morteza Karimzadeh, Alan M MacEachren, and Scott Pezanowski. 2018. Geocorpora: building a corpus to test and train microblog geoparsers. *International Journal of Geographical* Information Science, 32(1):1–29. Benjamin Wing and Jason Baldridge. 2011. Simple supervised document geolocation with geodesic grids. In *Proceedings of the 49th annual meeting of the* association for computational linguistics: Human language technologies, pages 955–964. Benjamin Wing and Jason Baldridge. 2014. Hierarchical discriminative classification for text-based geolocation. In *Proceedings of the 2014 conference on* empirical methods in natural language processing (EMNLP), pages 336–348. 
## A Data Collection Details B Participant Interface C Experimental Setup Details We used the services of an Israeli surveying company to distribute the assignment to native Hebrewspeakers participants in Israel only. The survey company was charged with distributing the assignments to a balanced set of participants in terms of their demographic and geographic characteristics (e.g., an equal number of males and females). All participants were given full payment, nonrespective of whether they correctly completed the task. The first page the participants viewed contains a disclosure about the assignments being part of academic research and the purpose of the assignments. The assignment protocol was approved by a behavioral review board. This approval was also presented to the participants on the initial screen. Also, the participants were required to read an informed consent form and sign an agreement box. The tasks are performed via an online assignment application, depicted in Figures 3-5. The cross-entropy loss function was optimized with Adam optimizer (Kingma and Ba, 2015). The hyperparameter tuning is based on the average results run with three different seeds. The Learning rate was searched in [1e-5, 1e-4, 1e-3] and a 1e-5 was chosen. The S2cell level was searched in [13, 15, 17] and 13 was chosen. Number-of-epochs for early stopping was based on their average learning curve. ## Play "Treasure Hunt"! Your task is to describe to a friend (who has a map) where in the city the treasure is hidden. As your friend does not know well the city (Tel-Aviv), you must provide a detailed geographic description where the place is located, so the friend can find the place once he arrives to the area Example of a good description: V - Located in North Herzl Street, East of Ayalon River, in the Emek hospital Example of a bad description: X - In a big building with stores Rules of the game: Do not mention the exact address. Do not mention the name of the place. Describe where it is, not what it is. Descriptions must contain at least 6 words. Attention! Descriptions will be checked – descriptions not done by the rules will not be eligible for points Figure 3: Participant Interface translated from Hebrew: instructions for the writing task. ![7_image_0.png](7_image_0.png) In the following page you will be presented with 20 location descriptions in Tel Aviv Below each description there is an interactive map that you can pan/move, zoom in/out - .etc If you recognize the first place described, mark it on the map in maximum zoom, and write how clear the description was. Otherwise enter 0 To find the places mentioned in the description you can use the search button g Here is an example - mark the following location Your destination is the Azrieli Center. It is East of Menachem Begin Rd., and West of ![8_image_0.png](8_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? limitation section ✗ A2. Did you discuss any potential risks of your work? The data doesn't contain private or sensitive information ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. 
Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 4 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? The model is a standard encoder-decoder run for around 10 minutes with early stopping on 1 GPU. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? in the appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 2 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? in the appendix ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? in the appendix ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? in the appendix ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? in the appendix ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? in the appendix
fang-etal-2023-modeling
Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making
https://aclanthology.org/2023.findings-acl.461
Pre-trained language models (PLMs) have been widely used to underpin various downstream tasks. However, the adversarial attack task has found that PLMs are vulnerable to small perturbations. Mainstream methods adopt a detached two-stage framework to attack without considering the subsequent influence of substitution at each step. In this paper, we formally model the adversarial attack task on PLMs as a sequential decision-making problem, where the whole attack process is sequential with two decision-making problems, i.e., word finder and word substitution. Considering the attack process can only receive the final state without any direct intermediate signals, we propose to use reinforcement learning to find an appropriate sequential attack path to generate adversaries, named SDM-ATTACK. Our experimental results show that SDM-ATTACK achieves the highest attack success rate with a comparable modification rate and semantic similarity to attack fine-tuned BERT. Furthermore, our analyses demonstrate the generalization and transferability of SDM-ATTACK. Resources of this work will be released after this paper's publication.
## Modeling Adversarial Attack On Pre-Trained Language Models As Sequential Decision Making Xuanjie Fang1∗ , Sijie Cheng1, 2, 3, 4∗ , Yang Liu2, 3, 4, 5**, Wei Wang**1† 1School of Computer Science, Fudan University, Shanghai, China 2Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China 3Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China 4Beijing National Research Center for Information Science and Technology, Beijing, China 5Shanghai Artificial Intelligence Laboratory, Shanghai, China {xjfang20,sjcheng20,weiwang1}@fudan.edu.cn, liuyang2011@tsinghua.edu.cn ## Abstract Pre-trained language models (PLMs) have been widely used to underpin various downstream tasks. However, the *adversarial attack* task has found that PLMs are vulnerable to small perturbations. Mainstream methods adopt a detached two-stage framework to attack without considering the subsequent influence of substitution at each step. In this paper, we formally model the adversarial attack task on PLMs as a *sequential decision-making* problem, where the whole attack process is sequential with two decision-making problems, i.e., word finder and word substitution. Considering the attack process can only receive the final state without any direct intermediate signals, we propose to use reinforcement learning to find an appropriate sequential attack path to generate adversaries, named SDM-ATTACK. Extensive experimental results show that SDM-ATTACK achieves the highest attack success rate with a comparable modification rate and semantic similarity to attack fine-tuned BERT. Furthermore, our analyses demonstrate the generalization and transferability of SDM-ATTACK. The code is available at https://github. com/fduxuan/SDM-Attack. ## 1 Introduction Nowadays, pre-trained language models (PLMs) have shown strong potential in various downstream tasks (Devlin et al., 2018; Brown et al., 2020). However, a series of studies about *adversarial attack* (Jin et al., 2020; Li et al., 2020a,b) have found that PLMs are vulnerable to some small perturbations based on the original inputs. The adversarial attack is essential to develop trustworthy and robust PLMs in Artificial Intelligence (AI) community (Thiebes et al., 2021; Marcus, 2020). Despite the adversarial attack achieving success in both image and speech domains (Chakraborty ∗Equal Contribution †Corresponding Author ![0_image_0.png](0_image_0.png) et al., 2018; Kurakin et al., 2018; Carlini and Wagner, 2018), it is still far from perfect in the natural language processing (NLP) field due to the discrete nature of language (Studdert-Kennedy, 2005; Armstrong et al., 1995). The main problem is to find an appropriate search algorithm that can make perturbations to mislead the victim models (i.e., PLMs) successfully (Morris et al., 2020; Yoo and Qi, 2021). As mentioned in recent studies (Jin et al., 2020), the challenges are preserving the following properties: 1) *human prediction consistency*, misleading the PLMs while keeping human judges unchanged; 2) *semantic similarity*, keeping the semantics of the original inputs; 3) *language fluency*, ensuring the correctness of grammar. Mainstream solutions are typically a detached two-stage framework. Specifically, they first rank the importance scores of all tokens according to the original input and then orderly substitute these tokens via heuristic rules. 
Previous studies propose different strategies to rank the editing order of tokens, such as temporal-based algorithm (Gao et al., 2018), probability-weighted saliency (Ren et al., 2019; Li et al., 2020b,a; Jin et al., 2020), and gradient-based ranking (Yoo and Qi, 2021). However, these methods face two limitations. On the one hand, they use a threshold to filter the unsatisfactory substitutions at last, but neglect to integrally consider the properties during computing importance scores. On the other hand, their editing order only depends on the original input without considering the subsequent influence of substitution, as computing the importance score at each step is computationally burdensome in practice. To solve the issues mentioned above, in this paper, we formally propose to transform the adversarial attack problem into a **sequential decisionmaking** task as shown in Figure 1. Rather than computing the importance scores all at once based on the original input, we regard the entire attack process as a sequence, where scores in the next step are influenced by the editing results in the current step. Furthermore, there are two types of decisionmaking problems during each step in the attack sequential process: 1) *word finder*, choosing the appropriate token to edit; 2) *word substitution*, replacing the token with a suitable substitution. Meanwhile, selecting edited tokens at each step should take the attack success rate and crucial properties, such as fluency, into account. As a sequential decision-making task without a direct signal in each step, we naturally leverage reinforcement learning (RL) to find an appropriate sequential attack path to generate adversaries. In this paper, we propose a model-agnostic method based on policy-based RL for modeling the adversarial attack into Sequential Decision Making, entitled SDM-ATTACK. Given the victim model as the environment with designed reward functions and the original input text as the initial state, the reinforced agent needs to decide on tokens to edit and synonyms to replace sequentially, until it attacks successfully. The experimental results show that SDM-ATTACK achieves the highest attack success rate with a comparable modification rate and semantic similarity to attack fine-tuned BERT against state-of-the-art baselines. Furthermore, we also demonstrate the effectiveness, generalizability, and transferability of SDM-ATTACK in our analysis. The main contributions of this work are summarized as the following: - To the best of our knowledge, we are the first to model the adversarial attack on PLMs into a sequential decision-making problem, where the whole attack process is sequential with two decision-making problems, i.e., word finder and word substitution. - Considering the sequential attack process can receive the final state without any direct intermediate signals, we propose SDM-ATTACK to use reinforcement learning to ask the agent to find an appropriate attack path based on our designed indirect reward signals yielded by the environment. 
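Schematically, the sequential view underlying these contributions can be summarized by the loop below. This is a conceptual sketch rather than the exact SDM-ATTACK procedure; the concrete policies and reward signals are specified in Section 3.

```python
def sequential_attack(victim, x, finder_policy, substitution_policy, max_steps):
    """Schematic view of the two interleaved decisions made at every step:
    which word to edit (word finder) and what to replace it with (word
    substitution), repeated until the victim's prediction flips."""
    original_pred = victim(x)
    for _ in range(max_steps):
        position = finder_policy(x)            # decision 1: which token to edit
        x = substitution_policy(x, position)   # decision 2: how to replace it
        if victim(x) != original_pred:
            return x                           # successful adversarial example
    return None                                # attack failed within the budget
```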
## 2 Preliminaries As for NLP tasks, given a corpus of N input texts X = {x1, x2, x3, *· · ·* , xN } and an output space Y = {y1, y2, y3, · · · , yK} containing K labels, the language model F learns a mapping f : x → y , which learns to classify each input sample x ∈ X to the ground-truth label ygold ∈ Y: $$\operatorname{F}(\mathbf{x})={\underset{y_{i}\in Y}{\operatorname{arg\,max}}}\,P(y_{i}|\mathbf{x})\qquad\qquad{\mathrm{(1)}}$$ The adversary of text x ∈ X can be formulated as xadv = x + ϵ, where ϵ is a slight perturbation to the input x. The goal is to mislead the victim model F within a certain constraint C(xadv): $$\mathrm{F}(\mathbf{x}_{\mathrm{adv}})=\arg\max_{y_{i}\in Y}P(y_{i}|\mathbf{x}_{\mathrm{adv}})\neq\mathrm{F}(\mathbf{x}),\tag{2}$$ and $C(\mathbf{x}_{\mathrm{adv}},\mathbf{x})\geq\lambda$ where λ is the coefficient, and C(xadv, x) usually calculates the semantic or syntactic similarity (Cer et al., 2018; Oliva et al., 2011) between the input x and its corresponding adversary xadv. Recently, the adversarial attack task has been framed as a combinatorial optimization problem. However, previous studies (Gao et al., 2018; Ren et al., 2019; Yoo and Qi, 2021) address this problem without considering the subsequent influence of substitution at each step, making attack far from the most effective. In this paper, we formally define the adversarial attack as a sequential decisionmaking task, where the decisions in the next step are influenced by the results in the current step. ![2_image_0.png](2_image_0.png) ## 3 Methodology In this section, we model the adversarial attack on PLMs problem as a sequential decision-making task as shown in Figure 1, where the entire attack process is a sequence with two decision-making problems. Considering the lack of direct signal in each step during the attack process, we propose a model-agnostic method, named SDM-ATTACK, based on policy-based reinforcement learning. The illustration is shown in Figure 2. During each step in the attack process, the reinforced agent needs to take two actions: 1) *word finder*, choosing the appropriate token to edit; and 2) *word substitution*, replacing the token with a suitable substitution. Through an attack sequence toward the input, we obtain its adversary until the attack is successful. ## 3.1 Environment And Rewards We regard the victim models (i.e., PLMs) as the whole environment. Intuitively, the agent needs to generate adversaries against the environment and achieve as high a reward as possible. The t-step environment state is our intermediate generation x t = [wt1 , wt2 , ..., wtn] containing n words, where the initial state x 0is the original input. Considering the lack of direct signal in each step, our reward consists of a final discriminant signal rd to present the state of termination and an instant reward rt on every step. As for the final signal rd, once the model prediction of t-step state is different from the initial state, the environment will terminate this episode and yield a *success* signal. However, if the model prediction does not change when all the tokens are replaced or the maximum number of steps is reached, a *failure* signal will be given. Overall, the final signal rd is denoted as: $$r_{d}={\begin{cases}1,&successful{cases}\\ -1,&f a i l u r e\end{cases}}\quad(3)$$ As for the instant reward ri for each step, we hope that the t-step state x tcan not only mislead the victim model but also ensure semantics similarity and fluency. 
Firstly, we design one instant reward to evaluate attack success rates: $$r_{t}^{att}=\begin{cases}r_{d},&\text{{terminated}}\\ P(y_{\text{gold}}|\mathbf{x}^{t-1})-P(y_{\text{gold}}|\mathbf{x}^{t}),&\text{{survive}}\end{cases}\tag{4}$$ where $r_{d}$ is the final reward if the current episode is terminated. Secondly, we define a punishment by using an auto-regressive language model (LM) to measure fluency: $$r_{t}^{f l u}=\sum_{i}{\frac{1}{|\mathbf{x}^{t}|}}{\mathrm{(LM}}(x_{i}|\mathbf{x}^{t})-{\mathrm{LM}}(x_{i}|\mathbf{x}^{t-1})){\mathrm{~}}(5)$$ $$(6)$$ where LM(xi|x t) is the cross-entropy loss of the token xiin sentence x t. Thirdly, we also add semantic similarity constraints as another punishment: $$r_{t}^{sim}=\mbox{Sim}(\mathbf{x},\mathbf{x}^{t-1})-\mbox{Sim}(\mathbf{x},\mathbf{x}^{t})\tag{6}$$ Finally, our overall instant reward $r_{t}$ is defined as: ly, our overall instant reward $r_t$ is defined $$r_t=\beta_1r_t^{att}-\beta_2r_t^{flu}-\beta_3r_t^{sim}$$ **Decision Making** . $$\quad(7)$$ t(7) During each step in the whole attack process, there are two types of decision-making problems. The first is choosing the appropriate token to edit, while the second is replacing the token with a suitable substitution. In RL, the agent needs to determine the decisions according to the yielded rewards. Word Finder To find the appropriate token to edit, we first employ the masked language models (MLM) as an encoder to represent the state x t. Due to the setup of the sub-word tokenizer in MLM, the encoder first converts x tto a token sequence x t token = [o t1 , o t2 , ..., o tm]. We reverse the conversion mapping ϕ : x t token → x tto recover tokens into words in need. Then we obtain the hidden states h t = [h t 1, h t 2*, ...,* h t m], where h t i ∈ R dis the hidden state of token o t i with d dimensions. Furthermore, we maintain a word set W to restore the words of x that have been already modified as well as stop words and punctuation. We then adopt a simple binary representation b taccording to the word set W: $$\mathbf{b}_{i}^{t}={\begin{cases}\mathbf{0}\in\mathbb{R}^{d}&\phi(\mathbf{o}_{i}^{t})\in\mathbb{W}\\ \mathbf{1}\in\mathbb{R}^{d}&\phi(\mathbf{o}_{i}^{t})\notin\mathbb{W}\end{cases}}\quad{\mathrm{(8)}}$$ Then, we fuse both the hidden states h t i and the binary representation b t i to obtain the final representation e t i of the environment states: $$e_{i}^{t}=[\hbar_{i}^{t};b_{i}^{t}]$$ i] (9) where [; ] denotes the concatenation operation. During the process of training, we first adopt a simple linear layer to obtain the probability and further normalize it into a distribution. The probability distribution p(o t i|x t) of each token at t-step can be calculated as follows: $$p(\mathbf{o}_{i}^{t}|\mathbf{x}^{t})=\mathrm{softmax}(W\cdot\mathbf{e}_{i}^{t}+b)\qquad(10)$$ where *W, b* are the weight matrix and the bias vector, respectively. Then the agent samples the word wtto substitute according to the distribution and ensures the sampled word is not in the word set W. During the evaluation, the agent will directly select the token with the maximum probability at each step, which is formulated as follows: $$\mathbf{w}^{t}=\arg\max p(\mathbf{o}_{i}^{t}|\mathbf{x}^{t}),\phi(\mathbf{o}_{i}^{t})\notin\mathbb{W}\tag{11}$$ If the selected token wtis a sub-word, we reverse the sub-word into a complete word via the conversion mapping ϕ as the newly selected word. Word Substitution Following Jin et al. 
(2020), we adopt synonym substitution as our strategy after obtaining selected word wtin t-step. Firstly, we gather a synonym set Swt for wtthat contains top-k candidates from the external vocabulary, computing via cosine similarity (Mrkšic´ et al., 2016). Then, for each s ∈ Swt , we replace wtp with s in the sentence x tto get a substitution x ts = [w1, ..., wp−1, s, wp+1*, ...*wn]. Finally, according to the instant reward rtin the Equation 4, we select the substitution with the highest reward as the final adversaries x t adv. Meanwhile, the environment states further updates as follows: $$(12)$$ $\begin{cases}\color{blue}x^{t+1}=x^t_{\text{adv}}\\ \color{red}\mathbb{W}=\mathbb{W}\cup\{\color{blue}w^t_p\}\end{cases}$ #### Training. $$\mathbf{Agent\;Training}$$ 3.3 Agent Training The training target is to maximize the total return G(τ ), which is an accumulated reward based on the instant reward rt, defined in Equation 7, with a discount factor γ ∈ [0, 1): $$G(\tau)=\sum_{t=1}^{T}\gamma^{t}r_{t}$$ $$(13)$$ $\eqref{eq:walpha}$. trt (13) The expected return of the decision trajectory, i.e., attack path, is defined as follows: $$J(\theta)=\mathbb{E}[G(\tau)]$$ J(θ) = E[G(τ )] (14) $$(9)$$ Furthermore, we regard the agent as πθ with parameters θ and the attack path as τ = [(a f 1 , as1 ), *· · ·* ,(a f T , asT )], where a f tand a s trepresent actions of *word finder* and *substitution* in t-th step, respectively. The probability of this attack path is calculated as πθ(τ ) = QT t=1 πθ((a f t , as t)|st), where πθ((a f t , as t)|st) is the probability of actions in step t based on current environment state st. Meanwhile, we consider a s t a prior knowledge so that this probability can be simplified. The gradient is calculated by REINFORCE algorithm (Kaelbling et al., 1996): $$\nabla J(\theta)=\nabla\mathbb{E}[\log\pi_{\theta}(\tau)\cdot G(\tau)]$$ $\left(15\right)^{2}$ Detailed information of reinforce training is shown in appendix B. ## 4 Experiments 4.1 Experimental Setups Tasks and Datasets Following Li et al. (2020b); Jin et al. (2020), we evaluate the effectiveness of SDM-ATTACK mainly on two standard NLP tasks, text classification and textual entailment. 
As for text classification, we use diverse datasets from different aspects, including news topic classification (AG's News; Zhang et al., 2015), sentence-level sentiment analysis (MR; Pang and Lee, 2005) and document-level sentiment analysis (IMDB1and 1https://datasets.imdbws.com/ | Dataset | Method | A-rate↑ | Mod↓ | Sim↑ | Dataset | Method | A-rate ↑ | Mod ↓ | Sim↑ | |-------------|----------|-----------|--------|-------------|-----------|----------|------------|---------|--------| | A2T | 88.3 | 8.1 | 0.68 | A2T | 89.9 | 4.4 | 0.79 | | | | TextFooler | 90.5 | 9.0 | 0.69 | TextFooler | 88.7 | 7.6 | 0.76 | | | | IMDB | | | | | | | | | | | BERT-Attack | 89.8 | 12.4 | 0.66 | BERT-Attack | 88.2 | 5.3 | 0.78 | | | | SDM-ATTACK | 95.8 | 8.2 | 0.71 | SDM-ATTACK | 91.4 | 4.1 | 0.82 | | | | A2T | 53.7 | 13.5 | 0.57 | A2T | 58.5 | 12.6 | 0.55 | | | | TextFooler | 66.2 | 18.4 | 0.52 | TextFooler | 80.5 | 15.8 | 0.50 | | | | MR | | | | | | | | | | | BERT-Attack | 74.6 | 15.6 | 0.52 | BERT-Attack | 83.2 | 12.8 | 0.52 | | | | SDM-ATTACK | 77.9 | 15.3 | 0.53 | SDM-ATTACK | 85.6 | 12.3 | 0.57 | | | | A2T | 70.8 | 17.2 | 0.35 | A2T | 66.0 | 14.4 | 0.45 | | | | TextFooler | 84.3 | 17.2 | 0.38 | TextFooler | 76.5 | 15.0 | 0.45 | | | | MNLI | | | | | | | | | | | BERT-Attack | 81.9 | 16.5 | 0.38 | BERT-Attack | 78.1 | 14.0 | 0.46 | | | | SDM-ATTACK | 85.5 | 15.9 | 0.43 | SDM-ATTACK | 78.7 | 13.8 | 0.49 | | | Yelp Polarity; Zhang et al., 2015). As for textual entailment, we use a dataset of sentence pairs (SNLI; Bowman et al., 2015) and a dataset with multi-genre (MultiNLI; Williams et al., 2017). The statistics of datasets and more details can be found in Appendix A. Following Jin et al. (2020); Alzantot et al. (2018), we attack 1k samples randomly selected from the test set of each task. Baselines We compare SDM-ATTACK with recent state-of-the-art studies: 1) TextFooler (Jin et al., 2020): find important words via probability weighted word saliency and then apply substitution with counter-fitted word embeddings. 2) BERT-Attack (Li et al., 2020b): use mask-predict approach to generate adversaries. 3) A2T (Yoo and Qi, 2021): adopt faster search with gradientbased word importance ranking algorithm. We use open-source codes provided by the authors and TextAttack tools (Morris et al., 2020) to implement these baselines. Furthermore, to ensure fairness in comparing baselines and SDM-ATTACK, we apply constraints to all methods following Morris et al. (2020) in Appendix C. Victim Models We conduct the main experiments on a standard pre-trained language model BERT following (Jin et al., 2020; Li et al., 2020b). To detect the generalization of SDM-ATTACK, we explore the effects on more typical models as discussed in Section 5.1. All victim models are pretrained from TextAttack (Morris et al., 2020). Implementation Details We adopt BERT as the MLM model in word finder and GPT-2 (Radford et al., 2019) to measure fluency when computing rewards. To keep instant reward and punishment in a similar range, we set the hyper-parameters β1 to be 1, β2 to be 1 and β3 to be 0.2. Moreover, the discount factor γ is set to be 0.9 to achieve a trade-off between instant reward and long-term return. We set the episode number as M = 200 and the learning rate as α = 3e−6 with Adam as the optimizer. In word substitution, the parameter K of the synonyms number is 50. Our experiments are conducted on a single NVIDIA 2080ti. 
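With these hyper-parameters, the per-step reward of Equation (7) and the discounted return of Equation (13) reduce to a short computation, sketched below with illustrative reward values rather than real model outputs.

```python
# Combine the per-step reward terms (Eq. 7) and the discounted return (Eq. 13)
# with the hyper-parameters reported above; following Eq. 13, the discount
# starts at gamma**1 for the first step.
BETA1, BETA2, BETA3, GAMMA = 1.0, 1.0, 0.2, 0.9

def step_reward(r_att, r_flu, r_sim):
    return BETA1 * r_att - BETA2 * r_flu - BETA3 * r_sim

def discounted_return(step_rewards):
    return sum(GAMMA ** t * r for t, r in enumerate(step_rewards, start=1))

rewards = [step_reward(0.3, 0.05, 0.1), step_reward(1.0, 0.02, 0.05)]
print(discounted_return(rewards))  # 0.9*0.23 + 0.81*0.97, about 0.9927
```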
Automatic Evaluation Metrics Following previous studies (Jin et al., 2020; Morris et al., 2020), we use the following metrics as the evaluation criteria. 1) Attack success rate (A-rate): the degraded performance after attacking target model. 2) Modification rate (Mod): the percentage of modified words comparing to original text. 3) Semantic similarity (Sim): the cosine similarity between the original text and its adversary, computing via the universal sentence encoder (USE; Cer et al., 2018). Manual Evaluation Metrics We further manually validate the quality of the adversaries from three challenging properties. 1) Human prediction consistency (Con): the rate of human judgement which is consistent with ground-truth label; 2) Language fluency (Flu): the fluency score of the sentence, measured on a Likert scale of 1 to 5 from ungrammatical to coherent (Gagnon-Marchand et al., 2019); 3) Semantic similarity (Simhum): the semantic consistency between each input-adversary pair, where 1 means *unanimous*, 0.5 means *ambiguous*, 0 means *inconsistent*. | Dataset | Con↑ | Flu↑ | Simhum ↑ | | |------------|----------|--------|------------|------| | IMDB | Original | 0.95 | 4.5 | 0.95 | | SDM-ATTACK | 0.90 | 4.3 | | | | MNLI | Original | 0.88 | 4.0 | 0.83 | | SDM-ATTACK | 0.79 | 3.7 | | | ## 4.2 Results Automatic Evaluation As shown in Table 1, SDM-ATTACK consistently achieves the highest attack success rate to attack BERT in both text classification and textual entailment tasks, which indicates the effectiveness of SDM-ATTACK. Furthermore, SDM-ATTACK mostly obtains the best performance of modification and similarity metrics, except for AG's News, where SDM-ATTACK achieves the second-best. For instance, our framework only perturbs 4.1% of the words on the IMDB datasets, while the attack success rate is improved to 91.4% with a semantic similarity of 0.82. Although A2T performs better in modification and similarity metrics in Yelp and AG's News, their attack success rate is always much lower than SDMATTACK, even other baselines. Because the modification and similarity metrics only consider the successful adversaries, we conjecture that A2T can only solve the inputs which are simpler to attack. In general, our method can simultaneously satisfy the high attack success rate with a lower modification rate and higher similarity. Furthermore, We find that the attack success rate on document-level datasets, i.e., Yelp and IMDB, are higher than the other sentence-level datasets, which indicates that it is easier to mislead models when the input text is longer. The possible reason is the victim model tends to use surface clues rather than understand them to make predictions when the context is long. Manual evaluation In manual evaluation, we first randomly select 100 samples from successful adversaries in IMDB and MNLI datasets and then ask three crowd-workers to evaluate the quality of the original inputs and our generated adversaries. The results are shown in Table 2. As for the human prediction consistency, we regard the original inputs as a baseline. 
Taking IMDB as an example, humans can correctly judge 95% of the original inputs while they can maintain 90% accuracy to our gen- | Dataset | Model | A-rate↑ | Mod↓ | Sim↑ | |-----------|-----------|-----------|--------|--------| | RoBERTa | 84.4 | 13.9 | 0.52 | | | MR | WordCNN | 72.1 | 10.3 | 0.48 | | WordLSTM | 80.7 | 8.9 | 0.56 | | | RoBERTa | 88.3 | 8.3 | 0.70 | | | IMDB | WordCNN | 89.2 | 3.3 | 0.85 | | WordLSTM | 89.8 | 5.4 | 0.75 | | | SNLI | InferSent | 78.7 | 17.0 | 0.42 | | ESIM | 79.0 | 17.2 | 0.41 | | Table 3: Attack results against other models. | Dataset | Method | A-rate↑ | Mod↓ | Sim↑ | |----------------|-------------|-----------|--------|--------| | AG's News | BERT-Attack | 74.6 | 15.6 | 0.52 | | SDM-ATTACK-mlm | 76.2 | 15.0 | 0.51 | | | MR | BERT-Attack | 83.2 | 12.8 | 0.52 | | SDM-ATTACK-mlm | 84.3 | 11.5 | 0.53 | | erated adversaries, which indicates SDM-ATTACK can mislead the PLMs while keeping human judges unchanged. The language fluency scores of adversaries are close to the original inputs, where the gap scores are within 0.3 on both datasets. Furthermore, the semantic similarity scores between the original inputs and our generated adversaries are 0.95 and 0.83 in IMDB and MNLI, respectively. In general, SDM-ATTACK can satisfy the challenging demand of preserving the three aforementioned properties. Detailed design of manual evaluation and more results are shown in appendix E. ## 5 Analyses 5.1 Generalization We detect the generalization of SDM-ATTACK in two aspects, 1) attack more language models and 2) adapt to more substitution strategies. Firstly, we apply SDM-ATTACK to attack extensive victim models, such as traditional language models (e.g., WordCNN) and other state-of-the-art PLMs (e.g., RoBERTa; Liu et al., 2019). The results of text classification tasks in table 3 show that SDM-ATTACK not only has better attack effects against WordCNN and WordLSTM, but also misleads RoBERTa, which is a more robust model. For example, on the IMDB datasets, the attack success rate is up to 89.2% against WordCNN with a modification rate of only about 3.3% and a high semantic similarity of 0.85. As for the textual entailment task, SDM-ATTACK can also achieve remarkable attack ![6_image_0.png](6_image_0.png) success rates against InferSent and ESIM. Secondly, although we directly adopt the word substitution strategy in Textfooler, this strategy can actually be replaced by other methods. To demonstrate this assumption, we further replace our word substitution strategy with the mask-fill way in BERT-attack, named SDM-ATTACK-mlm. As shown in Table 4, SDM-ATTACK-mlm completely beat BERT-Attack, indicating the part of word substitution of SDM-ATTACK has generalization ability to extend to different types of strategies and archives high performance. More results are displayed in appendix E. ## 5.2 Efficiency In this section, we probe the efficiency according to varying sentence lengths in the IMDB dataset as shown in Figure 3. The time cost of SDMATTACK is surprisingly mostly better than A2T, which mainly targets obtaining cheaper computation costs with lower attack success rates in Table 1. Meanwhile, SDM-ATTACK can obviously beat BERT-attack and TextFooler, which need to conduct a model forward process for each token. Furthermore, with the increase of sentence lengths, SDM-ATTACK and A2T maintain a stable time cost, while the time cost of BERT-attack and TextFooler is exploding. These phenomena show the efficiency advantage of SDM-ATTACK, especially in dealing with long texts. 
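As a concrete illustration of the substitution-strategy swap behind SDM-ATTACK-mlm in Section 5.1, the sketch below shows how a mask-fill candidate generator could stand in for the counter-fitted synonym set. This is only a hedged sketch assuming the HuggingFace fill-mask pipeline; the additional filters (POS matching, antonym removal) described in Appendix C are indicated but not fully implemented here.

```python
from transformers import pipeline

# A BERT masked-language-model proposes in-context substitution candidates.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def mlm_candidates(tokens, position, k=50):
    """Mask the word at `position` and return up to k MLM-proposed fillers."""
    masked = list(tokens)
    masked[position] = fill_mask.tokenizer.mask_token
    proposals = fill_mask(" ".join(masked), top_k=k)
    # Keep alphabetic, non-identical proposals; further constraints such as
    # POS matching and antonym filtering would be applied as in Appendix C.
    return [p["token_str"].strip() for p in proposals
            if p["token_str"].strip().isalpha()
            and p["token_str"].strip().lower() != tokens[position].lower()]
```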
## 5.3 Transferability We evaluate the transferability of SDM-ATTACK to detect whether the SDM-ATTACK trained on one dataset can perform well on other datasets. We conduct experiments on a series of text classification tasks and use the randomly initialized BERT as a | Yelp | IMDB | MR | AG's News | | |-----------|--------|------|-------------|------| | Yelp | 87.6 | 85.8 | 40.5 | 43.6 | | IMDB | 82.9 | 89.3 | 51.4 | 43.4 | | MR | 81.8 | 88.2 | 66.5 | 39.6 | | AG's News | 62.4 | 59.2 | 29.9 | 53.2 | | Random | 58.9 | 56.1 | 27.8 | 38.3 | | Dataset | Acc↑ | A-rate↑ | Mod↓ | Sim↑ | |------------|--------|-----------|--------|--------| | Yelp | 97.4 | 95.8 | 8.2 | 0.71 | | +Adv Train | 97.0 | 82.5 | 13.5 | 0.63 | | IMDB | 91.6 | 91.4 | 4.1 | 0.82 | | +Adv Train | 90.5 | 79.2 | 8.5 | 0.74 | | SNLI | 89.1 | 85.5 | 15.9 | 0.43 | | +Adv Train | 88.2 | 78.6 | 17.1 | 0.42 | baseline. As shown in Table 5, SDM-ATTACK has high transferability scores across different datasets, which are consistently higher than random. In detail, the performances among Yelp, IMDB and MR, which all belong to sentiment analysis, are higher than AG's News. Moreover, IMDB and MR are corpora about movies where SDM-ATTACK tends to learn a general attack strategy in this field and can transfer well to each other. ## 5.4 Adversarial Training We further investigate to improve the robustness of victim models via adversarial training. Specifically, we fine-tune the victim model with both original training datasets and our generated adversaries, and evaluate it on the same test set. As shown in Table 6, compared to the results with the original training datasets, adversarial training with our generated adversaries can maintain close accuracy, while improving performance on attack success rates, modification rates, and semantic similarity. The victim models with adversarial training are more difficult to attack, which indicates that our generated adversaries have the potential to serve as supplementary corpora to enhance the robustness of victim models. | Method | Text (MR; Negative) | Result | Mod↓ | Sim↑ | Flu↑ | |-------------|---------------------------------------------------------------------------------------------------------|----------|--------|--------|--------| | Original | Davis is so enamored of her own creation that she can not see how insufferable the character is. | - | - | - | 5 | | A2T | Davis is so enamored of her own institution that she can not behold how unforgivable the hallmark is. | Failure | 22.2 | 0.16 | 3 | | TextFooler | Davis is well enamored of her own infancy that she could not admire how infernal the idiosyncrasies is. | Success | 33.3 | 0.23 | 3 | | BERT-Attack | Davis is often enamoted of her own generation that she can not see how insuffoure the queen is. | Failure | 27.8 | 0.09 | 2 | | SDM-ATTACK | Davis is so captivated of her own creation that she can't see how indefensible the character is. | Success | 11.1 | 0.57 | 5 | ## 5.5 Case Study Table 7 shows adversaries produced by SDMATTACK and the baselines. Overall, the performance of SDM-ATTACK is significantly better than other methods. For this sample from the MR dataset, only TextFooler and SDM-ATTACK successfully mislead the victim model, i.e., changing the prediction from negative to *positive*. However, TextFooler modifies twice as many words as SDMATTACK, demonstrating our work has found a more suitable modification path. Adversaries generated by A2T and BERT-Attack are failed samples due to the low semantic similarity. 
BERT-Attack even generates an invalid word "*enamoted*" due to its subword combination algorithm. We also ask crowdworkers to give a fluency evaluation. Results show SDM-ATTACK obtains the highest score of 5 as the original sentence, while other adversaries are considered difficult to understand, indicating SDMATTACK can generate more natural sentences. ## 6 Related Work Adversarial attack has been well-studied in image and speech domains (Szegedy et al., 2013; Chakraborty et al., 2018; Kurakin et al., 2018; Carlini and Wagner, 2018). However, due to the discrete nature of language, the adversarial attack against pre-trained language models is much more difficult. Earlier works mainly focus on designing heuristic rules to generate adversaries, including swapping words (Wei and Zou, 2019), transforming syntactic structure (Coulombe, 2018), and paraphrasing by back-translation (Ribeiro et al., 2018; Xie et al., 2020). However, these rule-based methods are label-intensive and difficult to scale. Recently, adversarial attack in NLP is framed as a combinatorial optimization problem. Mainstream studies design a series of search algorithms with two detached stages In the first stage, they iteratively search for modification positions, including saliency-based ranking (Liang et al., 2017; Ren et al., 2019; Jin et al., 2020; Garg and Ramakrishnan, 2020), gradient-based descent algorithm (Sato et al., 2018; Yoo and Qi, 2021), and temporal-based searcher (Gao et al., 2018). In the second stage, a series of studies designs different substitution strategies, including dictionary method (Ren et al., 2019), word embeddings (Kuleshov et al., 2018; Jin et al., 2020) or language models (Li et al., 2020b; Garg and Ramakrishnan, 2020; Li et al., 2020a). In this paper, we formally propose to define the adversarial attack task as a sequential decision-making problem, further considering that scores in the next step are influenced by the editing results in the current step. The other line of recent studies is samplingbased methods. Alzantot et al. (2018) and Wang et al. (2019) apply genetic-based algorithm, Zang et al. (2019) propose a particle swarm optimizationbased method, and Guo et al. (2021) generate adversaries via distribution approximate sampling. However, their execution time is much more expensive due to the properties of sampling, so it is unlikely to generate large-scale adversarial samples. In addition, Zou et al. (2019) conducts reinforcement learning on attacking the neural machine translation task, but their search path is fixed from left to right. In this paper, SDM-ATTACK can determine any search order to find the appropriate attack path. ## 7 Conclusion In this paper, we formally define the adversarial attack task as a sequential decision-making problem, considering the entire attack process as sequence with two types of decision-making problems, i.e., word finder and substitution. To solve this problem without any direct signals of intermediate steps, we propose to use policy-based RL to find an appropriate attack path, entitled SDM-ATTACK. Our experimental results show that SDM-ATTACK achieves the highest attack success rate. In this paper, we use our designed rewards as instant signals to solve these two decision-making problems approximately. We will further try to adopt hierarchical RL to optimize the solution. ## 8 Limitations We define the adversarial attack task as a sequential decision-making problem and apply policy-based reinforcement learning to model it. 
This work must follow this assumption: the decision process conforms to Markov decision process (MDP) that the conditional probability distribution of the future state depends only on the current state. Meanwhile, reinforcement learning training requires additional time costs and the results may be unstable. We only conduct the experiments on two NLP tasks with six selected datasets, which are all English corpus. Furthermore, our experimental results are mainly for BERT, with RoBERTa supplemented in the analysis. Thus, we lack the evaluation of other novel pre-trained language models, such as ELECTRA (Clark et al., 2020) and XLNET (Yang et al., 2019). Therefore, our work lacks multi-task, multi-model and multilingual verification in terms of generalization and transferability. ## 9 Ethics Statement We declare that this article is in accordance with the ethical standards of *ACL Code of Ethics*. Any third party tools used in this work are licensed from their authors. All crowd-workers participating in the experiments are paid according to the local hourly wages. ## 10 Acknowledgment We would like to thank anonymous reviewers for their insightful and constructive feedback. We appreciate Peng Li and Shuo Wang for their valuable discussions. We thank Qianlin Liu, Yanqi Jiang and Yiwen Xu for the crowdsourced work. This work is supported by the National Key R&D Program of China (2022ZD0160502) and the National Natural Science Foundation of China (No. 61925601, 62276152, 62236011). ## References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. *arXiv preprint arXiv:1804.07998*. David F Armstrong, William C Stokoe, and Sherman E Wilcox. 1995. *Gesture and the nature of language*. Cambridge University Press. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. *arXiv* preprint arXiv:1508.05326. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. *CoRR*, abs/2005.14165. Nicholas Carlini and David Wagner. 2018. Audio adversarial examples: Targeted attacks on speech-to-text. In *2018 IEEE security and privacy workshops (SPW)*, pages 1–7. IEEE. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. *arXiv* preprint arXiv:1803.11175. Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. Claude Coulombe. 2018. Text data augmentation made simple by leveraging nlp cloud apis. *arXiv preprint* arXiv:1812.04718. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Jules Gagnon-Marchand, Hamed Sadeghi, Md Haidar, Mehdi Rezagholizadeh, et al. 2019. Salsa-text: self attentive latent space based adversarial text generation. In *Canadian Conference on Artificial Intelligence*, pages 119–131. Springer. Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In *2018* IEEE Security and Privacy Workshops (SPW), pages 50–56. IEEE. Siddhant Garg and Goutham Ramakrishnan. 2020. Bae: Bert-based adversarial examples for text classification. *arXiv preprint arXiv:2004.01970*. Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, and Douwe Kiela. 2021. Gradient-based adversarial attacks against text transformers. arXiv preprint arXiv:2104.13733. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pages 8018–8025. Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. 1996. Reinforcement learning: A survey. *Journal of artificial intelligence research*, 4:237–285. Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. 2018. Adversarial examples for natural language classification problems. Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, et al. 2018. Adversarial attacks and defences competition. In The NIPS'17 Competition: Building Intelligent Systems, pages 195–231. Springer. Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2020a. Contextualized perturbation for textual adversarial attack. *arXiv preprint arXiv:2009.07502*. Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020b. Bert-attack: Adversarial attack against bert using bert. arXiv preprint arXiv:2004.09984. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2017. Deep text classification can be fooled. *arXiv preprint* arXiv:1704.08006. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Gary Marcus. 2020. The next decade in ai: four steps towards robust artificial intelligence. arXiv preprint arXiv:2002.06177. John X Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp. arXiv preprint arXiv:2005.05909. Nikola Mrkšic, Diarmuid O Séaghdha, Blaise Thom- ´ son, Milica Gašic, Lina Rojas-Barahona, Pei-Hao Su, ´ David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. *arXiv preprint arXiv:1603.00892*. Jesús Oliva, José Ignacio Serrano, María Dolores Del Castillo, and Ángel Iglesias. 2011. Symss: A syntax-based measure for short-text semantic similarity. *Data & Knowledge Engineering*, 70(4):390–405. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. *arXiv preprint cs/0506075*. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. 
Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In *Proceedings of the 57th annual meeting of the association for computational linguistics*, pages 1085– 1097. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. In Annual Meeting of the Association for Computational Linguistics (ACL). Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable adversarial perturbation in input embedding space for text. arXiv preprint arXiv:1805.02917. Michael Studdert-Kennedy. 2005. How did language go discrete. *Language origins: Perspectives on evolution, ed. M. Tallerman*, pages 48–67. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*. Scott Thiebes, Sebastian Lins, and Ali Sunyaev. 2021. Trustworthy artificial intelligence. *Electronic Markets*, 31(2):447–464. X Wang, H Jin, and K He. 2019. Natural language adversarial attacks and defenses in word level. *arXiv* preprint arXiv:1909.06723. Jason Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. *arXiv preprint arXiv:1901.11196*. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. *Advances in Neural* Information Processing Systems, 33:6256–6268. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in neural information processing systems*, 32. Jin Yong Yoo and Yanjun Qi. 2021. Towards improving adversarial training of nlp models. *arXiv preprint* arXiv:2109.00544. Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2019. Word-level textual adversarial attacking as combinatorial optimization. arXiv preprint arXiv:1910.12196. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. *Advances in neural information processing* systems, 28. Wei Zou, Shujian Huang, Jun Xie, Xinyu Dai, and Jiajun Chen. 2019. A reinforced generation of adversarial examples for neural machine translation. arXiv preprint arXiv:1911.03677. ## A Datasets We conduct experiments on the following datasets of two NLP tasks and detailed statistics are displayed in Table 8: - **Text Classification**: (1) Yelp (Zhang et al., 2015): A dataset for binary sentiment classification on reviews, constructed by considering stars 1 and 2 negative, and 3 and 4 positive. (2) IMDB: A document-level movie review dataset for binary sentiment analysis. (3) MR (Pang and Lee, 2005): A sentencelevel binary classification dataset collected from Rotten Tomatoes movie reviews. (4) AG's News (Zhang et al., 2015): A collection of news articles. There are four topics in this dataset: World, Sports, Business, and Science/Technology. 
- **Textual Entailment**: (1) SNLI (Bowman et al., 2015): A dataset of human-written English sentence pairs and manually annotated labels of entailment, neutral and contradiction. (2) MNLI (Williams et al., 2017): Another crowd-sourced collection of sentence pairs labeled with textual entailment information. Compare to SNLI, it includes more complex sentences, e.g, enres of spoken and written text. ## B Training Algorithm The training process is shown in Algorithm 1. Since a f tis chosen through a probability distribution, the agent is encouraged to explore more possible paths. The instant reward rtis obtained from environment after performing both two actions actions. Once the termination signal is raised, the environment will terminate this current episode and update the agent's parameters via a policy gradient approach. The expected return of decision trajectory is defined as follows: J(θ) = E[G(τ )] (16) Thus the gradient is calculated by REINFORCE algorithm (Kaelbling et al., 1996): ∇J(θ) = ∇E[log πθ(τ ) · G(τ )] (17) Then the expectation over the whole sequence is approximated by Monte Carlo simulations and can be expressed as follows: $$\nabla J(\theta)=\frac{1}{M}\sum_{m=1}^{M}\nabla\log\pi_{\theta}(\tau^{(m)})G(\tau^{(m)})\tag{18}$$ | Dataset | Train | Test | Avg Len | Classes | |-----------|---------|--------|-----------|-----------| | Yelp | 560k | 38k | 152 | 2 | | IMDB | 25k | 25k | 215 | 2 | | AG's News | 120k | 7.6k | 73 | 4 | | MR | 9k | 1k | 20 | 2 | | SNLI | 570k | 3k | 8 | 3 | | MNLI | 433k | 10k | 11 | 3 | Table 8: Overall statistics of datasets. ## Algorithm 1 Reinforce Training 1: Initialization: agent πθ with parameters θ, episode number M 2: for i ← 1 to M do 3: initialize t ← 1 4: **while** not receive termination signal do 5: get environment state st 6: compute πθ((a f t , as t)|st) ∽ πθ(a f t|st) 7: sample a f t based on probability 8: select a s t from prior knowledge 9: compute reward rt 10: update t ← t + 1 11: **end while** 12: initialize G(τ ) ← 0 13: for j ← T to 1 do 14: G(τ ) ← γG(τ ) + rj 15: accumulate Jj (θ) 16: **end for** 17: update θ ← θ + α∇J(θ) 18: **end for** where [τ (1), τ (2)*, ..., τ* (M)] are M samples of trajectories. The discount factor γ enables both longterm and immediate effects to be taken into account and trajectories with shorter lengths are encouraged. We randomly select 2500 items from the training corpus for training the agent of each dataset. The average convergence time is approximately between 2-16 hours, related to the length of the input. When attacking large batches of samples, the impact of training cost is negligible compared to the cumulative attack time cost. During training, We adopt random strategies and short-sighted strategies in the initial stage for early exploration and to obtain better seeds. 
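As a complement to Algorithm 1, the following is a minimal PyTorch-style sketch of the Monte Carlo policy-gradient update in Equation 18, accumulating discounted returns backwards over each sampled attack trajectory. The agent interface is a placeholder assumption: each step is only required to provide a differentiable log-probability of the chosen word-finder action together with the instant reward.

```python
import torch

def reinforce_update(optimizer, episodes, gamma=0.9):
    """One REINFORCE step over M sampled attack trajectories (Equation 18).

    `episodes` is a list of trajectories; each trajectory is a list of
    (log_prob, reward) pairs collected while attacking one input, where
    log_prob is a differentiable tensor (the log-probability the agent
    assigned to the chosen word-finder action) and reward is r_t.
    """
    losses = []
    for trajectory in episodes:
        # Discounted return computed backwards over the episode, as in Algorithm 1.
        returns, g = [], 0.0
        for _, r in reversed(trajectory):
            g = gamma * g + r
            returns.append(g)
        returns.reverse()
        for (log_prob, _), g_t in zip(trajectory, returns):
            # Negative sign: minimizing this loss performs gradient ascent
            # on E[log pi_theta(tau) * G(tau)].
            losses.append(-log_prob * g_t)
    optimizer.zero_grad()
    torch.stack(losses).sum().backward()
    optimizer.step()
```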
## C Implementation Constraint In order to make the comparison fairer, we set the following constraints for SDM-ATTACK as well as all baselines: (1) **Max modification rate**: To better | Dataset | Acc↑ | A-rate↑ | Mod↓ | Sim↑ | |------------|--------|-----------|--------|--------| | Yelp | 97.4 | 95.8 | 8.2 | 0.71 | | +Adv Train | 97.0 | 82.5 | 13.5 | 0.63 | | IMDB | 91.6 | 91.4 | 4.1 | 0.82 | | +Adv Train | 90.5 | 79.2 | 8.5 | 0.74 | | AG's News | 94.6 | 77.9 | 15.3 | 0.53 | | +Adv Train | 91.8 | 50.6 | 23.3 | 0.50 | | MR | 96.9 | 85.6 | 12.3 | 0.57 | | +Adv Train | 92.4 | 72.0 | 16.7 | 0.57 | | SNLI | 89.1 | 85.5 | 15.9 | 0.43 | | +Adv Train | 88.2 | 78.6 | 17.1 | 0.42 | | MNLI | 84.5 | 78.7 | 13.8 | 0.49 | | +Adv Train | 76.8 | 58.6 | 15.2 | 0.49 | Table 9: Adversarial training results. maintain semantic consistency, we only keep adversarial samples with less than 40% of the words to be perturbed. (2) **Part-of-speech (POS)**: To generate grammatical and fluent sentences, we use NLTK tools2to filter candidates that have a different POS from the target word. This constraint is not employed on BERT-Attack. (3) **Stop words** preservation: the modification of stop words is disallowed and this constraint helps avoid grammatical errors. (4) **Word embedding distance**: For Textfooler, A2T and SDM-ATTACK, we only keep candidates with word embedding cosine similarity higher than 0.5 from synonyms dictionaries (Mrkšic et al. ´ , 2016). For *mask-fill* methods, following BERT-Attack, we filter out antonyms (Li et al., 2020b) via the same synonym dictionaries for sentiment classification tasks and textual entailment tasks. ## D Tuning With Adversaries Table 9 displays adversarial training results of all datasets. Overall, after fine-turned with both original training datasets and adversaries, victim model is more difficult to attack. Compared to original results, accuracy of all datasets is barely affected, while attack success rate meets an obvious decline. Meanwhile, attacking model with adversarial training leads to higher modification rate, further demonstrating adversarial training may help improve robustness of victim models. ## E Supplementary Results At the beginning of manual evaluation, we provided some data to allow crowdsourcing workers to unify 2https://www.nltk.org/ | Dataset | Con↑ | Flu↑ | Simhum ↑ | | |-------------|------------|--------|------------|------| | Original | 0.95 | 4.5 | | | | IMDB | TextFooler | 0.84 | 4.0 | 0.88 | | Bert-Attack | 0.83 | 4.2 | 0.90 | | | SDM-ATTACK | 0.90 | 4.3 | 0.95 | | | MNLI | Original | 0.88 | 4.0 | | | TextFooler | 0.77 | 3.5 | 0.80 | | | Bert-Attack | 0.77 | 3.6 | 0.81 | | | SDM-ATTACK | 0.79 | 3.7 | 0.83 | | Dataset Method A-rate↑ Mod↓ Sim↑ Yelp BERT-Attack 89.8 12.4 0.66 SDM-ATTACK-mlm 90.0 10.6 0.65 IMDB BERT-Attack 88.2 5.3 0.78 SDM-ATTACK-mlm 88.5 5.1 0.78 AG's News BERT-Attack 74.6 15.6 0.52 SDM-ATTACK-mlm 76.2 15.0 0.51 MR BERT-Attack 83.2 12.8 0.52 SDM-ATTACK-mlm 84.3 11.5 0.53 the evaluation standards. We also remove the data with large differences when calculating the average value to ensure the reliability and accuracy of the evaluation results. More manual evaluation results are shown in Table 10. Table 11 displays the generalization ability of SDM-ATTACK with mask-fill strategy. However, the improvement effect is not particularly obvious. The mask-fill method makes the current candidate synonyms also affected by the sequence states. 
Compared to a fixed synonym dictionary, it has a larger prior knowledge and changing action space, which makes it harder to train the agent. Only increasing the size of the training corpus is not very effective. We will try adopting hierarchical RL to further solve this problem in the future. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 8 ✗ A2. Did you discuss any potential risks of your work? There is no risks in our work. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? We write this paper ourselves. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 4,9 ✓ B1. Did you cite the creators of artifacts you used? 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 9 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Our experiments are performed on public datasets. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 4, appendix ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We provide statistics of datasets in appendix and detail information about baselines in section 4. C ✓ **Did you run computational experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 4 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 4 ✓ D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? 9 ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? We have used publicly available datasets. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Our work does not involve collecting ethically relevant information. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 4
chen-etal-2023-towards-robust
Towards Robust Personalized Dialogue Generation via Order-Insensitive Representation Regularization
https://aclanthology.org/2023.findings-acl.462
Generating persona consistent dialogue response is important for developing an intelligent conversational agent. Recent works typically fine-tune large-scale pre-trained models on this task by concatenating persona texts and dialogue history as a single input sequence to generate the target response. While simple and effective, our analysis shows that this popular practice is seriously affected by order sensitivity where different input orders of persona sentences significantly impact the quality and consistency of generated response, resulting in severe performance fluctuations (i.e., 29.4% on GPT2 and 83.2% on BART). To mitigate the order sensitivity problem, we propose a model-agnostic framework, ORder Insensitive Generation (ORIG), which enables dialogue models to learn robust representation under different persona orders and improve the consistency of response generation. Experiments on the Persona-Chat dataset justify the effectiveness and superiority of our method with two dominant pre-trained models (GPT2 and BART).
# Towards Robust Personalized Dialogue Generation Via Order-Insensitive Representation Regularization Liang Chen, Hongru Wang, Yang Deng, Wai-Chung Kwan, Zezhong Wang, Kam-Fai Wong The Chinese University of Hong Kong MoE Key Laboratory of High Confidence Software Technologies {lchen,hrwang,wckwan,zzwang,kfwong}@se.cuhk.edu.hk {dengyang17dydy}@gmail.com ## Abstract ![0_Image_0.Png](0_Image_0.Png) Generating persona consistent dialogue response is important for developing an intelligent conversational agent. Recent works typically fine-tune large-scale pre-trained models on this task by concatenating persona texts and dialogue history as a single input sequence to generate the target response. While simple and effective, our analysis shows that this popular practice is seriously affected by *Order Sensitivity* where different input orders of persona sentences significantly impact the quality and consistency of generated response, resulting in severe performance fluctuations (i.e., 29.4% on GPT2 and 83.2% on BART). To mitigate the order sensitivity problem, we propose a model-agnostic framework, ORder Insensitive Generation (**ORIG**), which enables dialogue models to learn robust representation under different persona orders and improve the consistency of response generation. Experiments on Persona-Chat dataset justify the effectiveness and superiority of our method with two dominant pre-trained models (GPT2 and BART).1 ## 1 Introduction Developing a persona-consistent dialogue model has been one of the key issues and crucial problems in open-domain dialogue systems (Huang et al., 2020). Zhang et al. (2018a) define the problem of personalized dialogue generation, which aims to generate personalized responses based on textually described persona profiles. Many efforts have been made on developing dialogue models that generate responses consistent with the provided persona profile (Song et al., 2019, 2020a,b; Wu et al., 2020a). The recent development in transformer-based pre-trained models (Vaswani et al., 2017; Devlin et al., 2018; Liu et al., 2019; Chen, 2020) has led to great successes in dialogue systems (Wolf et al., 1The code is available at https://github.com/ ChanLiang/ORIG. Figure 1: A dialog extract from Persona-Chat showing different orderings of the same persona can lead to different and even inconsistent responses. 2019; Wu et al., 2020b; Ham et al., 2020; Kulhánek et al., 2021; Cao et al., 2022; Deng et al., 2022b,c, 2023). Inspired by these successes, previous works incorporate those pre-trained models in persona-based response generation by concatenating the dialogue history and persona as input to generate the response in an auto-regressive manner (Song et al., 2021; Liu et al., 2022). However, a fine-tuned model can generate a high-quality and persona-consistent response in a certain ordering of personas, while varying this order may lead to a generic and even inconsistent response as illustrated by the example in Figure 1. We empirically show that the worst ordering of persona can lead to a 29.4% decline in BLEU score compared with the best ordering. Ideally, a well-trained dialogue generation model should be able to generate a persona-consistent response regardless of the ordering of personas in the input. We perform experiments and analyses to identify the cause of the ordering sensitivity. We find that the ordering of persona in the input leads to different representations of context and response. 
We also show that the model can attend to the appropriate persona and generate high-quality responses under some representations but not under others. This leads to instability in response generation. Motivated by the above findings, we propose ORder Insensitive Generation (**ORIG**), which is a simple and effective framework that helps models learn more robust and better representations for different persona orders. More specifically, we formulate ORIG as a constrained optimization problem, which optimizes a persona response generation objective under the constraint: given different orderings of persona, the response representations of the model are the same. Then we optimize it through a stochastic optimization approach. Experimental results on the Persona-Chat dataset show that ORIG significantly improves the robustness of pre-trained models (GPT2 (Radford et al., 2019) and BART (Lewis et al., 2020)) under different orderings of input persona, as well as advances their generation performance. In summary, our contributions are threefold: (1) We identify the order sensitivity problem in persona dialogue generation and conduct an empirical analysis to reveal its underlying reasons. (2) We propose a model-agnostic framework, ORIG, that helps different persona dialogue models learn robust representations while achieving better performance. (3) We perform extensive experiments on the Persona-Chat dataset, showing that ORIG outperforms previous models and is more robust and less sensitive to different persona orderings. ## 2 Related Work Maintaining a consistent persona is essential for building a human-like dialogue system, where most works regard persona as a set of sentences along with each dialog (Zhang et al., 2018a; Gu et al., 2019; Song et al., 2019; Wu et al., 2021; Cao et al., 2022; Deng et al., 2022a). Song et al. (2021) disentangled the task of persona-based dialogue generation into two sub-tasks: consistency understanding and dialogue generation while Cao et al. (2022) aims to alleviate the problem of limited data by data manipulation methods. Despite satisfactory performance in previous work, the impacts of different orders of personas are still under-explored, resulting in unstable and inconsistent responses. Our work is also related to work on order sensitivity in prompt-based few-shot learning (Zhao et al., 2021; Lu et al., 2022). Zhao et al. (2021) found that the different order of training examples in the prompt can cause accuracy to vary from near chance to state-of-the-art in the few-shot clas- | Model | BLEU-1 | BLEU-2 | ROUGE | CIDEr | |-------------|----------|----------|---------|---------| | GPT2-best | 16.79 | 9.25 | 18.44 | 17.56 | | GPT2-worst | 11.85 | 5.83 | 11.79 | 5.51 | | BART-best. | 28.17 | 18.29 | 31.07 | 46.53 | | BART-worst. | 4.73 | 1.99 | 4.37 | 1.34 | sification setting. Similarly, order sensitivity for In-context Learning also exists regardless of model size and the prompt format (Lu et al., 2022). Distinguishing from them, we focus on order sensitivity in the language generation task in finetuning setting, especially the impacts of persona orderings to generate persona-consistent responses. ## 3 **Order Sensitivity Problem And Analysis** In this section, we first illustrate the seriousness of the order sensitivity problem by showing a huge performance fluctuation in persona dialogue models when fed the same personas in the best and worst orders. Then we analyse why their performance is volatile to different persona orderings. 
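As a concrete illustration of the best-/worst-case probe described in the next paragraph, the sketch below enumerates all persona orderings of one test sample and records the extreme scores. The `generate_and_score` function, which would concatenate the ordered persona with the dialogue context, generate a response with the fine-tuned model (GPT2 or BART), and score it against the reference (e.g., with BLEU), is an assumed placeholder rather than part of the released implementation.

```python
from itertools import permutations

def order_sensitivity_probe(personas, context, reference, generate_and_score):
    """Return (best, worst) scores over all persona orderings of one sample."""
    scores = [
        generate_and_score(list(order), context, reference)
        for order in permutations(personas)  # e.g., 5 persona sentences -> 5! = 120 orderings
    ]
    return max(scores), min(scores)
```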
To illustrate the problem, we finetune PLMs on the Persona-Chat by concatenating the persona and dialogue context together to predict the target response, including GPT2 and BART. After the training converges, we test them on two settings: (1) the best case: for each test sample, we feed the models all possible permutations of persona sentences and keep the maximum score for each sample as the final score; (2) the worst case: perform the same process as (1), but take the minimum score.

Table 1 shows the results for the two models. Surprisingly, we find the ordering of the input persona has a big impact on the models' performance: GPT2's worst case is 29.4% lower than its best case, while BART's is 83.2% lower.

Moreover, we find that the huge fluctuation in the models' performance is closely related to the response representation changes caused by different orderings of the input persona sentences. Concretely, we measure the similarity of the response representations of the same test sample under different input orders of persona. We show their token-level similarity in Table 2 (persona and context are omitted for brevity), where the bidirectional KL function is employed as the distance function. Ideally, models should have a consistent response representation: the KL distance between the same response should be zero. However, their distances are significantly higher than zero. It reveals that the models behave more like a left-to-right language model whose representation is prone to the different orderings of the previous input (e.g., persona sentences). That is highly undesirable for a robust personalized dialogue model. Thus, regularization of the representation of the response tokens is necessary to help personalized dialogue models capture an order-invariant representation.

BART: Great (0.185) And (0.105) How (0.312) Was (0.289) Your (0.124) Day (0.304) ?

Table 2: The token-level representation of the same response can be very different when the ordering of input persona changes. The value denotes the KL distance of the same token's representation returned by the models fed with two different orderings of persona.

Figure 2: Our proposed framework ORIG

## 4 Method

We introduce the proposed framework, named ORIG: ORder Insensitive Generation (**ORIG**). As shown in Figure 2, we transform the persona order-sensitivity problem into a constrained optimization problem that optimises a persona dialogue model under the uncertainty of the input persona order.

## 4.1 Problem Formulation

Given the dialogue context $C = \{u_1, \ldots, u_m\}$ and a set of persona descriptions $P = \{p_1, \ldots, p_n\}$, the goal is to generate a personalized response $r$. Formally, the generation problem can be formulated as the following chain rule:

$$P(r|C,P;\theta)=\prod\nolimits_{t=1}^{T}P\left(r_{t}\mid r_{1:t-1},C,P;\theta\right)\tag{1}$$

where $\theta$ is the parameters of the dialogue model.

## 4.2 ORIG Framework

According to the analysis in Section 3, the observation reveals that varying the order of input personas leads to different representations of the dialogue response, thus resulting in fluctuations in performance. To learn more robust and consistent representations, we propose the ORIG framework that complements the response generation process with a constraint: given the different orderings of a persona, the model's response representations need to be the same.
Then the order-insensitive personalized dialogue generation problem is modelled as the following constrained optimization problem $$\begin{array}{c}{{\operatorname*{min}_{\theta}[-\log P(r|C,P;\theta)]}}\\ {{\mathrm{s.t.}\quad{\mathcal{D}}[P(r|C,P;\theta),P(r|C,{\hat{P}};\theta))]=0}}\\ {{\qquad\qquad\qquad(P,C,r)\sim D}}\\ {{\qquad\qquad\qquad\qquad{\hat{P}}\sim\mathrm{Shuffle}(P)}}\end{array}$$ where P(r|*C, P*; θ) are the model's predictions over the dialogue response, D denotes the dialogue corpus, and the function D is KL divergence to measure the difference between two distributions, and the Shuffle operator samples each persona ordering uniformly from the full permutation of P. ## 4.3 Optimization As for optimization, we first apply the Lagrange multipliers strategy to convert the constrained problem into an unconstrained problem $$\begin{array}{c}{{{\mathcal{L}}_{\theta}=-\log P(r|C,P;\theta)}}\\ {{\qquad+\gamma\cdot{\mathcal{D}}[P(r|C,P;\theta),P(r|C,\hat{P};\theta)]}}\end{array}\quad(6)$$ where γ is the multiplier corresponding to the equality constraints (3). Then we can update the parameters θ of dialogue models by stochastic gradient descent. ## 5 Experiments 5.1 Experimental Setups Datasets We evaluate the models on the PersonaChat dataset (Zhang et al., 2018a), where each dialogue session has at least 6 turns of interactions. And each interaction is conditioned on a persona that is described with 5 profile sentences. Baselines To verify the generality of our framework across different architectures, we perform experiments on the two most popular pre-trained architectures: Transformer encoder-decoder (BART) and Transformer decoder (GPT2). Implementation Details We choose GPT2 base (117M) and BART base (139M) as the base models and compare the base models finetuned with classical max likelihood estimation (MLE) and our | Automatic Evaluations | Human Evaluations | | | | | | | | | |-------------------------|---------------------|--------|-------|---------|-------|---------|-------|-----------|------------| | Model | BLEU-1 | BLEU-2 | ROUGE | Entropy | CIDEr | C-score | Flu. | Con. Coh. | Per. Cons. | | GPT2 | 13.95 | 7.22 | 14.82 | 6.53 | 10.10 | 0.718 | 1.531 | 1.281 | 1.719 | | GPT2-ORIG | 14.61 | 7.43 | 14.94 | 6.54 | 10.60 | 0.733 | 1.726 | 1.512 | 1.719 | | BART | 14.19 | 7.61 | 15.05 | 6.67 | 11.07 | 0.443 | 1.906 | 1.312 | 1.438 | | BART-ORIG | 14.64 | 7.90 | 15.20 | 6.41 | 13.27 | 0.446 | 1.938 | 1.332 | 1.457 | proposed ORIG. Our implementation was based on HuggingFace's Transformers library (Wolf et al., 2020). During training, the learning rate is set as 2 × 10−5, and the batch size for GPT2 and BART is set as 64 and 32, respectively. We trained both models for 10 epochs with Adam (Kingma and Ba, 2015) optimizer until they converged. During decoding, We employ a top-p (p=0.9) (Holtzman et al., 2020) plus top-k (k=50) sampling strategy, which is used to avoid sampling from the unreliable tail of the distribution (only consider a subset of vocabulary composed of k words with the highest probability or some most probable words whose sum of probabilities equals p at each decoding step). The random seed for all experiments is set to 42. Evaluation Metrics We perform both automatic and human evaluations. (1) Automatic metrics: We adopt BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), Entropy(Zhang et al., 2018b) and CIDEr (Vedantam et al., 2015) for lexicalbased measurement. 
Following previous work, we also adopt the C-score (Madotto et al., 2019) to indicate the consistency between the generated response and the input personas. C-score is calculated by the entailment score of a RoBERTa model finetuned on the DialogueNLI dataset. (2) Human evaluation: We randomly sampled 200 samples from the test set and ask 3 crowdworkers to rate the generated responses in the following three aspects: response fluency, context coherence and persona consistency. The scores {0, 1, 2} indicate unacceptable, acceptable and excellent, respectively. The degree of agreement during human evaluation is measured by Fleiss' kappa (Fleiss, 1971). ## 5.2 Experimental Results Improves performance in the original test set Table 3 shows different models' performance in the original test set without any modifications (for ORIG, "Shuffle" is used during training but is optional during testing. The Table 3 caption signifies ![3_image_0.png](3_image_0.png) Model mean variance best worst GPT2 14.78 0.00193 **16.79** 11.85 GPT2-ORIG **14.93 0.00016** 14.95 **14.25** BART 15.01 0.01123 **28.17** 4.73 BART-ORIG **15.18 0.00532** 26.44 **5.80** the absence of "Shuffle" during testing. This is to evaluate if ORIG performs well in the normal setting). From automatic metrics, we can see base models trained with our ORIG framework outperform the baselines. It justifies that our framework can be applied to different models to improve their performance. From human evaluation results, models with ORIG are superior to others on almost all metircs, especially on GPT2. This is consistent with the results of automatic metrics. The average kappa value of the annotation is 0.632, indicating good agreement during human evaluation. Reduces variance and improves mean and worstcase performance Figure 3 shows that aside from reducing the variance, ORIG also improves mean and worst-case performance (detailed results in Table 4) across two models consistently, especially in GPT2 (the worst case performance is very close to the best case). We reduce the variance on GPT2 and BART by 91.6% and 51.8%, respectively. Meanwhile, we improve worst-case performance by 20.3% and 22.6% on GPT2 and BART respectively. The only drop is the best case. This is because our distance function D is unidirectional, which pulls in the two representations in Equation 3 indiscriminately, causing the best case to go down and the worst to go up. We leave more complicated and directional distance constraints for future studies. ## 6 Conclusion We show that the current practice of applying pretrained models to the personalized dialogue generation task is volatile across different input orders of personas. Through the analysis, we find that the problem arises from the representation changes induced by the input changes. Motivated by these, we propose our ORIG, a model-agnostic framework for finetuning the persona dialogue model such that it obtains a persona order-invariant representation. Experiments on two dominant pre-trained dialogue models show that our framework improves performance and reduces order volatility. ## Limitations In this section, we discuss the limitations of this work. First, on the problems side, it's non-trivial to consider the order of all kinds of grounding knowledge, but we have only explored Persona-Chat. We hope to apply our method to more grounded generation tasks such as knowledge-grounded and document-grounded dialogue in the future. 
Second, on the methods side, our framework is trainingbased, but we hope more lightweight techniques could be developed to improve the model's robustness even without training the model. ## Acknowledgements We would like to thank Professor Helen Meng and Xixin Wu for their helpful discussion and feedback on the course SEEM5640. We also thank anonymous reviewers for their constructive comments. Thanks to Dr Honshan HO for his support. This research work is partially supported by CUHK under Project No. 3230366. ## References Yu Cao, Wei Bi, Meng Fang, Shuming Shi, and Dacheng Tao. 2022. A model-agnostic data manipulation method for persona-based dialogue generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7984–8002, Dublin, Ireland. Association for Computational Linguistics. Liang Chen. 2020. Variance-reduced language pretraining via a mask proposal network. Yang Deng, Yaliang Li, Wenxuan Zhang, Bolin Ding, and Wai Lam. 2022a. Toward personalized answer generation in e-commerce via multi-perspective preference modeling. *ACM Trans. Inf. Syst.*, 40(4):87:1– 87:28. Yang Deng, Wenxuan Zhang, Wai Lam, Hong Cheng, and Helen Meng. 2022b. User satisfaction estimation with sequential dialogue act modeling in goaloriented conversational systems. In *WWW '22: The* ACM Web Conference 2022, pages 2998–3008. Yang Deng, Wenxuan Zhang, Weiwen Xu, Wenqiang Lei, Tat-Seng Chua, and Wai Lam. 2022c. A unified multi-task learning framework for multigoal conversational recommender systems. *CoRR*, abs/2204.06923. Yang Deng, Wenxuan Zhang, Yifei Yuan, and Wai Lam. 2023. Knowledge-enhanced mixed-initiative dialogue system for emotional support conversations. arXiv preprint arXiv:2305.10172. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychological bulletin*, 76(5):378. Jia-Chen Gu, Zhen-Hua Ling, Xiaodan Zhu, and Quan Liu. 2019. Dually interactive matching network for personalized response selection in retrieval-based chatbots. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1845–1854, Hong Kong, China. Association for Computational Linguistics. Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-end neural pipeline for goal-oriented dialogue systems using GPT-2. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 583–592, Online. Association for Computational Linguistics. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dialog systems. ACM Transactions on Information Systems (TOIS), 38(3):1–32. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *ICLR*. Jonáš Kulhánek, Vojtech Hude ˇ cek, Tomáš Nekvinda, ˇ and Ondˇrej Dušek. 2021. AuGPT: Auxiliary tasks and data augmentation for end-to-end dialogue with pre-trained language models. In *Proceedings of the* 3rd Workshop on Natural Language Processing for Conversational AI, pages 198–210, Online. Association for Computational Linguistics. 
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yifan Liu, Wei Wei, Jiayi Liu, Xianling Mao, Rui Fang, and Dangyang Chen. 2022. Improving Personality Consistency in Conversation by Persona Extending. In *Proceedings of the 31st ACM International Conference on Information & Knowledge Management*, pages 1350–1359. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland. Association for Computational Linguistics. Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, and Pascale Fung. 2019. Personalizing dialogue agents via meta-learning. In *Proceedings of ACL 2019*, pages 5454–5459. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Haoyu Song, Yan Wang, Kaiyan Zhang, Wei-Nan Zhang, and Ting Liu. 2021. BoB: BERT over BERT for training persona-based dialogue models from limited personalized data. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167–177, Online. Association for Computational Linguistics. Haoyu Song, Yan Wang, Weinan Zhang, Xiaojiang Liu, and Ting Liu. 2020a. Generate, delete and rewrite: A three-stage framework for improving persona consistency of dialogue generation. In *Proceedings of* the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5821–5831. Haoyu Song, Wei-Nan Zhang, Yiming Cui, Dong Wang, and Ting Liu. 2019. Exploiting Persona Information for Diverse Generation of Conversational Responses. In *Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence*, pages 5190–5196, Macao, China. International Joint Conferences on Artificial Intelligence Organization. Haoyu Song, Wei-Nan Zhang, Jingwen Hu, and Ting Liu. 2020b. Generating Persona Consistent Dialogues by Exploiting Natural Language Inference. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8878–8885. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. R. Vedantam, C. Zitnick, and D. Parikh. 2015. Cider: Consensus-based image description evaluation. In CVPR, pages 4566–4575, Los Alamitos, CA, USA. IEEE Computer Society. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents. *arXiv:1901.08149 [cs]*. Bowen Wu, Mengyuan Li, Zongsheng Wang, Yifu Chen, Derek F. Wong, Qihang Feng, Junhong Huang, and Baoxun Wang. 2020a. Guiding variational response generator to exploit persona. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 53–65. Chen Henry Wu, Yinhe Zheng, Xiaoxi Mao, and Minlie Huang. 2021. Transferable persona-grounded dialogues via grounded minimal edits. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2368–2382, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Chien-Sheng Wu, Steven C.H. Hoi, Richard Socher, and Caiming Xiong. 2020b. TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 917–929, Online. Association for Computational Linguistics. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing Dialogue Agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018b. Generating informative and diverse conversational responses via adversarial information maximization. In *NIPS 2018*, pages 1810–1820. Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In *International Conference on Machine Learning*, pages 12697–12706. PMLR. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitation Section ✗ A2. Did you discuss any potential risks of your work? The risk has not yet been identified. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 5.1 ✓ B1. Did you cite the creators of artifacts you used? 5.1 B2. 
Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. 5 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. It's a public dataset. ## C ✓ **Did You Run Computational Experiments?** 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 5.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5.2 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5.1 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 5.1 ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? We provide a simple version in 5.1 ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? All annotators are students. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
dasgupta-etal-2023-cost
Cost-effective Distillation of Large Language Models
https://aclanthology.org/2023.findings-acl.463
Knowledge distillation (KD) involves training a small "student" model to replicate the strong performance of a high-capacity "teacher" model, enabling efficient deployment in resource-constrained settings. Top-performing methods tend to be task- or architecture-specific and lack generalizability. Several existing approaches require pretraining of the teacher on task-specific datasets, which can be costly for large and unstable for small datasets. Here we propose an approach for improving KD through a novel distillation loss agnostic to the task and model architecture. We successfully apply our method to the distillation of the BERT-base and achieve highly competitive results from the distilled student across a range of GLUE tasks, especially for tasks with smaller datasets.
# Cost-Effective Distillation Of Large Language Models Sayantan Dasgupta, Trevor Cohn∗and **Timothy Baldwin** School of Computing & Information Systems University of Melbourne sayandg@umich.edu, {trevor.cohn, tbaldwin}@unimelb.edu.au ## Abstract Knowledge distillation (KD) involves training a small "student" model to replicate the strong performance of a high-capacity "teacher" model, enabling efficient deployment in resource-constrained settings. Topperforming methods tend to be task- or architecture-specific and lack generalizability. Several existing approaches require pretraining of the teacher on task-specific datasets, which can be costly for large and unstable for small datasets. Here we propose an approach for improving KD through a novel distillation loss agnostic to the task and model architecture. We successfully apply our method to the distillation of the BERT-base and achieve highly competitive results from the distilled student across a range of GLUE tasks, especially for tasks with smaller datasets.1 ## 1 Introduction An unfortunate problem affecting large language models, such as BERT (Devlin et al., 2018) or GPT (Radford et al., 2019), is their high compute costs, as a consequence of their complex architectures and vast numbers of parameters. This is particularly apparent in initial (pre)training, but also impacts the cost of fine-tuning to specific tasks, and the practicality of their deployment on resource-constrained edge devices (Sun et al., 2020). *Knowledge distillation* (KD; Hinton et al. (2014)) attempts to mitigate these concerns through learning a small "student" model to replicate the behaviour of a larger, unwieldy "teacher". The idea is that much of the performance of the teacher can be captured by the student, despite it having many fewer parameters, and thereby better portability. Several distillation methods have been proposed for large language models, including DistilBert (Sanh et al., 2019), which distills the 12-layer ∗Now at Google DeepMind. 1Code available at https://github.com/Sayan21/MAKD BERT transformer (Devlin et al., 2018) into a 6 layer student model with only a small loss in the performance on downstream tasks. Broadly, existing KD approaches are either architecturespecific or agnostic. The former group includes Jiao et al. (2020) and Sun et al. (2019a) which incorporate a loss term to encourage matching hidden between teacher and student, and thus requiring aligned teacher and student architectures. Approaches like Turc et al. (2019), on the other hand are architecture-agnostic, treating the teacher model as a black box using only the logits from language modelling heads for distillation it into a smaller LM. There are numerous advantages to the architecture-agnostic approach: (1) it is possible to distill a teacher model into a different student architecture, e.g. Tang et al. (2019) distills the BERT transformer into a simple single-layer Bi-LSTM; and (2) it frees the student to use different inference techniques, e.g., to better handle long sequences (Xiong et al., 2021; Vyas et al., 2020). While the training of large language models incurs substantial compute resources - for instance the training cost of GPT3 (Brown et al., 2020) was estimated at $4.6 million using Nvidia Tesla V100 GPUs (Sharir et al., 2020). the cost of pretraining a given model is incurred only once. 
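For concreteness, the architecture-agnostic (logit-only) distillation discussed above can be sketched as follows. Because only logits cross the interface, the student's architecture is free to differ from the teacher's. This is a minimal illustration of the vanilla objective of Hinton et al. (2014), not the implementation of any particular cited system; the temperature, weighting, and toy shapes are illustrative assumptions (the paper later fixes the temperature to 1).

```python
import torch
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, labels, tau=1.0, alpha=0.5):
    """Black-box KD: the student only sees the teacher's output logits,
    so teacher and student architectures may differ."""
    log_f = F.log_softmax(student_logits / tau, dim=-1)   # student log-probabilities
    p_hat = F.softmax(teacher_logits / tau, dim=-1)        # softened teacher probabilities
    # tau**2 rescaling is the usual convention; it is a no-op for tau = 1 as in the paper.
    kd = F.kl_div(log_f, p_hat, reduction="batchmean") * tau ** 2
    ce = F.cross_entropy(student_logits, labels)           # supervised loss on hard labels
    return alpha * ce + (1 - alpha) * kd

# Toy usage: a batch of 4 examples over 3 classes with random logits.
student_logits = torch.randn(4, 3, requires_grad=True)
teacher_logits = torch.randn(4, 3)
loss = vanilla_kd_loss(student_logits, teacher_logits, torch.tensor([0, 2, 1, 1]))
loss.backward()
```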
On the other hand, practitioners apply models to specific tasks, often involving fine-tuning of the LLM on their task-specific datasets, after which fine-tuned LLMs are then distilled into smaller LLMs for faster inference on real-time applications. This process incurs more modest compute costs, however, given the myriad of different applications, the process is repeated many times, meaning the aggregate cost can be significant, rivaling the cost of pre-training.2 If we consider the per-instance training cost, finetuning is as costly as pre-training. Arguably this 2Witness the explosion of BERT fine-tuning papers in the literature, and OpenAI's claim that GPT3 is being used in 300 applications: https://openai.com/blog/gpt-3-apps. ![1_image_0.png](1_image_0.png) ![1_image_1.png](1_image_1.png) ![1_image_2.png](1_image_2.png) (b) Our Approach is less of an issue for small datasets, as the finetuning costs will be also be small, however in this setting fine-tuning can be unstable (Zhang et al., 2020) because there are not enough data points to reliably tune the parameters. In this paper, we propose an architectureagnostic approach for LLM distillation to eliminate the fine-tuning step. The standard KD for an LLM is shown in Figure 1(a), whereas our approach corresponds to Figure 1(b). The boxes with red-font stand for computationally expensive steps. The boxes in the dotted line are replicated by the practitioners and contribute to the major cost, whereas the boxes outside represent a one-off cost and can be ignored as such. We show the derivation of our approach along with its convergence properties, and then we describe our training strategy. We finally demonstrate the effectiveness of our approach based on distilling BERT models evaluated against the GLUE benchmark (Wang et al., 2018). ## 2 Methodology We follow the empirical risk management framework for deriving our KD approach. For simplicity, we assume *temperature* τ = 1 from the original definition in (Hinton et al., 2014). Let us assume that for a problem over a domain *X, Y* , the Bayesian optimal probabilities are p0(x) = E[Y |X = x]. Then the ideal KD loss is a loss between the student probabilities f(X) and p0(X) is l(*f, p*0), and the optimal student is $$f_{0}=\arg\operatorname*{min}_{f\in{\mathcal{F}}}{\mathbf{E}}_{X}[l(f(X),p_{0}(X))]\,,\quad(1)$$ note we use *f, p* and f(x), p(x) interchangeably. Given that we do not know p0, the best we can do is to train a teacher from a function class F over some loss to find an estimate pˆ. We replace p0(X) in the loss by the empirical distribution, pˆ(X), to arrive at the KD loss EX[l(f(X), pˆ(X))]. This is the KD loss defined over the entire population of *X, Y* . Given a training set D of n data points {xi, yi} n i=1, we can estimate it as $$\mathbf{E}_{\mathcal{D}}[l(f(X),{\hat{p}}(X))]={\frac{1}{n}}\sum_{i=1}^{N}l(f(x_{i}),{\hat{p}}(x_{i}))\,.$$ This is the typical KD loss used in Hinton et al. (2014), also known as Vanilla KD. The loss l(f, pˆ) is usually Kullbach-Liebler (KL) divergence DKL(ˆp∥f) for τ = 1, or the squared difference of the logits. If the student obtained from optimizing the KD loss is ˆf = arg minf∈F PN i=1 l(f(xi), pˆ(xi)), then with high probability (Dao et al., 2020) it satisfies $$\|\hat{f}-f_{0}\|_{n}^{2}=O\left(\frac{1}{n}+\|\hat{p}-p_{0}\|_{n}^{2}+\delta_{n}({\mathcal{F}},p_{0})^{2}\right)\,,\eqno(2)$$ where *∥ · ∥* stands for the L2 norm of the difference between the parameters of the two classification functions. 
δn(F, p0) 2is the local Rademacher radius of the class of function F, and is usually convex when F is the family of neural network or kernel functions (Dao et al., 2020). It is specific to the classification function class of the teacher and is a constant when the teacher is fixed. The student error ∥f − f0∥n thus depends on the second order teacher error ∥pˆ − p0∥ 2n = 1 n Pn i=1 ∥pˆ(xi) − p0(xi)∥ 22 . ## 2.1 Taylor Series Expansion Of The Loss Let us first start with a scalar p ∈ [0, 1]. If L(p) is a convex loss on p, then the following inequality holds (Böhning and Lindsay, 1988), $$\mathcal{L}(p)\leq\mathcal{L}(\hat{p})+(p-\hat{p})\frac{d\mathcal{L}(p)}{d p}\Big{|}_{p=\hat{p}}+\frac{1}{2}(p-\hat{p})^{2}C\tag{3}$$ where C = arg maxp d 2L(p) dp2 is the maximum curvature of the loss w.r.t. the entire domain of p. For example, for a binary cross entropy loss | Method | Pre-training (DLM) | Task-specific (DT ) | Architecture-agnostic | |---------------------------------|--------------------------------|---------------------------|-------------------------| | DistilBERT (Sanh et al., 2019) | BERT-base (truncated) + KD | Fine-tuning | No | | Patient-KD (Sun et al., 2019a) | BERT-base (truncated) | Patient-KD | No | | StudentBERT (Turc et al., 2019) | LM pretraining | Vanilla KD | Yes | | TinyBERT4 (Jiao et al., 2020) | KD with loss between attention | KD with data augmentation | No | | matrices & hidden layers | w.r.t fine-tuned BERT-base | | | | MobileBERT (Sun et al., 2020) | KD with layer transfer loss | Fine-tuning | No | | Enhanced KD (ours) | LM pretraining | KD with Taylor series | Yes | Table 1: Detail of the two stages performed during KD under different approaches $${\cal L}(p)=-y\log(p)-(1-y)\log(1-p),$$ $$C=\arg\max_{p}\left(\frac{y}{p^{2}}+\frac{1-y}{(1-p)^{2}}\right)\,.\tag{4}$$ Observe that $C\to\infty$ as $p\to0$ or $p\to1$. Now, when p ∈ [0, 1]K is a vector of probabilities for K classes, we can extend the result to $${\mathcal{L}}(p)\leq{\mathcal{L}}({\hat{p}})+{\Big\langle}p-{\hat{p}},\,{\frac{d{\mathcal{L}}(p)}{d p}}{\Big|}_{p={\hat{p}}}{\Big\rangle}+{\frac{1}{2}}\|p-{\hat{p}}\|_{2}^{2}C$$ with C now being the maximum value of the determinant of the Hessian, which is equivalent to the curvature of the loss. This is also similar to the inequalities for a β-smooth convex function (Bubeck et al., 2015, §3.2). However, the constant β is not really informative, unlike our case where we can connect C to the curvature of the loss, $$C=\arg\operatorname*{max}_{p}\operatorname*{det}\left|{\frac{d^{2}{\mathcal{L}}(p)}{d p^{2}}}\right|.\qquad\qquad(5)$$ Coming back to KD, if we assume the teacher probabilities are p ∈ [0, 1]K and the student probabilities are f ∈ [0, 1]K, then the vanilla KD loss is defined as l(*f, p*). As long as l(*f, p*) is convex w.r.t. to p, the following inequality holds, $$\begin{array}{c}{{l(f,p_{0})\leq l(f,\hat{p})+\langle p_{0}-\hat{p},\nabla_{\hat{p}}l(f,\hat{p})\rangle}}\\ {{+\frac{1}{2}\|p_{0}-\hat{p}\|_{2}^{2}C(f)}}\end{array}$$ Now we replace the derivatives with the partial derivatives as ∇pˆl(f, pˆ) = ∂l(p) ∂p p=ˆp . The maximum curvature will be a function of the student probabilities f, $$C(f)=\arg\operatorname*{max}_{p}\operatorname*{det}\left|{\frac{\partial^{2}l(f,p)}{\partial p^{2}}}\right|.\qquad(6)$$ Recall that l(*f, p*0) is the ideal KD loss, as defined in Equation (1). Although we cannot estimate it, we can now obtain an upper bound on it and minimize this upper bound in our algorithm. 
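As a sanity check on the bound just described, the short script below verifies the scalar inequality (3) numerically for the binary cross-entropy case of Equation (4). Here C is the maximum *value* of the curvature (written as an arg max in Equation (4)), and since it diverges as p → 0 or p → 1 the maximum is taken over a clipped domain. The clipping range and the probed probabilities are arbitrary assumptions for illustration only.

```python
import numpy as np

def bce(p, y):           # binary cross-entropy loss L(p) for a fixed label y
    return -y * np.log(p) - (1 - y) * np.log(1 - p)

def bce_grad(p, y):      # first derivative dL/dp
    return -y / p + (1 - y) / (1 - p)

def max_curvature(y, eps=0.05):
    # max_p of y/p^2 + (1 - y)/(1 - p)^2 over the clipped domain [eps, 1 - eps]
    grid = np.linspace(eps, 1 - eps, 10_000)
    return np.max(y / grid**2 + (1 - y) / (1 - grid) ** 2)

y, p_hat = 1.0, 0.7                  # label and a (noisy) teacher estimate
C = max_curvature(y)
for p0 in (0.55, 0.8, 0.9):          # hypothetical Bayes-optimal probabilities
    lhs = bce(p0, y)
    rhs = bce(p_hat, y) + (p0 - p_hat) * bce_grad(p_hat, y) + 0.5 * (p0 - p_hat) ** 2 * C
    assert lhs <= rhs + 1e-9         # the quadratic upper bound of inequality (3) holds
```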
The most common KD loss used in the literature is the KL divergence between the student and the teacher probabilities DKL(ˆp∥f), when we keep τ = 1. For KL divergence l(f, pˆ) = Ppˆlog(ˆp/f) the first order derivative is, $$\nabla_{\hat{p}}l(f,{\hat{p}})=1+\log{\hat{p}}-\log f$$ and C(f) = arg maxp ∇2pˆ l(f, pˆ) will not contain any term involving f. This means we can exclude this term from KD. Removing the constant terms, the loss function becomes, $$l(f,p_{0})\leq l(f,{\hat{p}})+\langle p_{0}-{\hat{p}},-\log(f)\rangle\quad(7)$$ As we do not have knowledge of p0, we cannot compute the loss directly. But we can take an unbiased estimate of p0 as y from the training data D, enabling the computation of the Taylor series term. As such, our KD loss is, $$\begin{array}{l}{{{\mathcal L}_{K D}={\bf E}_{x,y\sim{\mathcal D}}[l(f,{\hat{p}})]+\langle y-{\hat{p}},-\log(f)\rangle}}\\ {{\qquad\qquad\geq{\bf E}_{x,y\sim{\mathcal D}}[l(f,p_{0})]\qquad\qquad\qquad(8)}}\end{array}$$ Following Mackey et al. (2018), an O(n−1/(2k+2)) estimate of the teacher p with k Neyman orthogonal factors gives us an O(1/ √n) estimation of the student f. For Vanilla KD (i.e. k = 0), we see in Equation (2) that an O(1/ √n) estimation of the student must have a O(1/ √n) estimation of the teacher p, which is a more conservative requirement. The Taylor series term satisfies the condition of the first-order orthogonal term (k = 1). That means now a O(1/n1/4) estimate of teacher error ∥p − p0∥n is enough to give us an O(1/ √n) bound of the student error ∥f − f0∥n. O(1/n1/4) is a weaker convergence guarantee than O(1/ √n). This simply means now we can train a good student even from a weaker estimate of the teacher. Finally, combining this with the explicit classification loss L*class* for the student, the overall loss function for some λ ∈ [0, 1] is $${\mathcal{L}}=\lambda{\mathcal{L}}_{c d a s s}+(1-\lambda){\mathcal{L}}_{K D}$$ ## 3 Training Strategy Existing methods generally rely on a two-stage approach: (1) pre-train the student model on the entire or a truncated part of the same dataset as the teacher (DLM ), and (2) perform fine-tuning or KD on a task-specific dataset (DT ). This avoids the costly fine-tuning of BERT on task-specific datasets. For example, Turc et al. (2019) and Sun et al. (2019a) perform simple pretraining of the student model on DLM, while Sanh et al. (2019) and Jiao et al. (2020) perform KD on DLM . While Sanh et al. (2019) and Sun et al. (2020) only perform output layer fine-tuning on the task-specific dataset, others perform KD on DT . The details of the different stages of training are summarized in Table 1. To test our method, we choose to perform KD on BERT language models (DLM ) from Huggingface (Wolf et al., 2020) and perform KD using only the task-specific dataset DT . We do not use a fine-tuned teacher on the task-specific dataset. Fine-tuning of BERT is not only expensive but may be unstable for small datasets (Zhang et al., 2020). While the teachers without fine-tuning will be weak, as described in Section 2.1, our proposed approach is designed to be robust to this. ## 4 Experiments We use datasets from GLUE (Wang et al., 2018) for our experiments, specifically: SST-2 (Socher et al., 2013) for sentiment classification; MRPC (Dolan and Brockett, 2005), QQP, and STS-B for paraphrase similarity matching (Conneau and Kiela, 2018); and MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), and RTE (Wang et al., 2018) for natural language inference. 
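Concretely, the objective trained in these experiments, i.e. Equations (8) and (9): KL distillation from the (non-fine-tuned) teacher plus the first-order Taylor correction term ⟨y − p̂, −log f⟩, combined with the explicit classification loss, can be sketched in PyTorch as below. This is a minimal sketch rather than the authors' released implementation; the λ value, variable names, and toy shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def enhanced_kd_loss(student_logits, teacher_logits, labels, lam=0.5):
    """Eq. (8)-(9): D_KL(p_hat || f) plus the first-order term <y - p_hat, -log f>,
    combined with the student's own classification loss via lambda."""
    log_f = F.log_softmax(student_logits, dim=-1)              # log of student probs f (tau = 1)
    p_hat = F.softmax(teacher_logits, dim=-1)                  # teacher probs p_hat
    y = F.one_hot(labels, num_classes=log_f.size(-1)).float()  # labels as unbiased estimate of p0

    kl_term = F.kl_div(log_f, p_hat, reduction="batchmean")    # l(f, p_hat) = D_KL(p_hat || f)
    taylor_term = ((y - p_hat) * (-log_f)).sum(dim=-1).mean()  # <y - p_hat, -log f>
    l_kd = kl_term + taylor_term

    l_class = F.cross_entropy(student_logits, labels)          # L_class in Eq. (9)
    return lam * l_class + (1 - lam) * l_kd

# Toy usage with random logits: batch of 4 examples, 3 classes.
s = torch.randn(4, 3, requires_grad=True)
t = torch.randn(4, 3)
enhanced_kd_loss(s, t, torch.tensor([0, 2, 1, 0])).backward()
```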
We use KL divergence loss and the first Taylor series term (see Equation 8). For datasets with real-valued outputs, we can use Platt scaling (Platt et al., 1999) with a sigmoid function centered at the mean to convert it to a probability. For example, for STSB the output is a real number between 0 and 5, which we convert the target t into a probability via Platt Scaling p = 1/(1 + exp(−(t − 2.5))). $$({\mathfrak{g}})$$ The teacher model is BERT-base (Devlin et al., 2018), with 109 million parameters across 12 layers, and 768d hidden states. We conduct experiments for three student models as listed in Table 2. We take our baseline results for Vanilla KD from the corresponding student model in Turc et al. (2019). We present results for our method based on: (a) a 4-layer student model, which we compare with the 4-layer TinyBERT model (Jiao et al., 2020) and MobileBERT (Sun et al., 2020);3 and (b) a 6-layer student model, which we similarly compare against 6-layer TinyBert and DistilBERT (Sanh et al., 2019) models. We constrain all experiments to run on a single RTX-3090 GPU with 24GB RAM. The benchmark TinyBERT, MobileBERT, and Distilbert models were downloaded from the Huggingface repository (Wolf et al., 2020) and used without further modification. We present the results of 6-layer TinyBERT from Zhou et al. (2022). The only hyper-parameter we optimize with our method is λ, in the range [0, 1] at a step-size of 0.1, with a fixed temperature of τ = 1 and learning rate of η = 5 × 10−5(for the Adam optimizer). In the results in Table 2, we register improvements in the GLUE metrics using the modified loss for all our student architectures against the baseline of Vanilla KD (Turc et al., 2019). Relative to the other KD methods, we get consistently better results for smaller datasets like MRPC, RTE, and STSB, but are slightly below the best KD models for the larger datasets, noting that these are all architecture-specific and rely on additional finetuning or data augmentation. The effect of dataset size follows from the theory in Equation (2), which shows that the teacher error typically follows the sample complexity ∥p0 − pˆ∥n ∈ O(n−1/(2k+2)), with k = 0 being the best case (Mackey et al., 2018). The difference between p0 and pˆ is large for smaller n, and this teacher error in turn reflects in the student error in Vanilla KD. I.e., our technique for expanding the loss makes a large difference for smaller n. TinyBERT is overall the strongest performer for larger datasets (> 10K samples) but achieves this using expensive task-specific fine-tuning and data augmentation. Data augmentation helps singlesentence tasks more than paired tasks because it is difficult to align the extra data in a pair according to 3MobileBERT uses a 6-layer architecture, but has similar \#parameters as our 4-layer model. 
| Task | # of P(M) | QQP | MNLI (m/mm) | SST-2 | QNLI | MRPC | RTE | STSB | |------------------------------|-------------|-------|---------------|---------|--------|--------|-------|--------| | # of Training Samples (in K) | 363.8 | 392.7 | 67.3 | 104.7 | 3.7 | 2.5 | 5.7 | | | BERT base | 109 | 87.9 | 84.6/84.9 | 93.0 | 91.2 | 90.4 | 71.4 | 89.8 | | Vanilla KD (2 x 128) | 4 | 62.2 | 70.2/70.3 | 83.2 | 81.5 | 71.1 | 57.2 | 73.6 | | Our method (2 x 128) | 4 | 64.4 | 71.7/70.5 | 83.4 | 81.6 | 72.1 | 62.1 | 76.2 | | Vanilla KD (4 x 312) | 15 | 66.5 | 75.4/74.9 | 87.6 | 84.8 | 83.2 | 62.6 | 77.1 | | MobileBERTTINY | 15 | 68.9 | 81.5/81.6 | 91.7 | 89.5 | 87.9 | 65.1 | 80.1 | | TinyBERT† 4 (4 x 312) | 15 | 71.3 | 82.5/81.8 | 91.9 | 87.7 | 86.4 | 66.6 | 80.4 | | Our method (4 x 312) | 15 | 68.8 | 80.6/80.1 | 89.9 | 86.5 | 88.1 | 66.7 | 82.2 | | Vanilla KD (6 x 768) | 66 | 70.7 | 82.8/82.2 | 91.0 | 88.9 | 86.8 | 65.3 | 81.0 | | DistilBERT (6 x 768) | 66 | 70.1 | 82.6/81.3 | 92.5 | 88.9 | 86.9 | 58.4 | 81.3 | | TinyBERT† 6 (6 x 368) | 66 | 71.6 | 84.6/83.2 | 93.1 | 90.4 | 87.3 | 66.8 | 83.7 | | Our method (6 x 768) | 66 | 71.4 | 82.8/82.5 | 91.6 | 89.3 | 89.0 | 67.5 | 84.0 | the task. This is why TinyBert performs better than even BERT-base for SST2. We achieved the best results over tasks with small datasets, which is where task-specific KD is more difficult. The simplicity of our approach also makes it compatible with KD for more complex tasks like machine translation (Wang et al., 2021). A fairer comparison would be against the results of TinyBert without data augmentation, but those results were not reported in their publication. ## 5 Conclusion We have proposed a general approach to improve KD on language models. We constrain the experiments on BERT mainly due lack of benchmarks on other LLMs as well as resource limitations. But any LLM distillation will show a similar trend. Existing KD methods are highly customized to the specifics of the teacher model, and require additional pretraining, fine-tuning, or data augmentation. Our approach is much simpler and agnostic to both architecture and task. We ran our experiments on an RTX3090 GPU with 24GB RAM which cost only $0.11 an hour, which is considerably cheap compared to other approaches that include teacher fine-tuning. We showed that our method is particularly effective on small datasets, and competitive with other KD methods which are much more computationally intensive and tailored to the teacher. A possible reason could be since the fine-tuning of BERT on small datasets like MRPC, STSB, or RTE can be unstable (Zhang et al., 2020), eliminating it makes the KD more robust and improves the results. All other methods such as TinyBert (Jiao et al., 2020) or PatientKD (Sun et al., 2019b) use fine-tuned teachers. DistilBert (Sanh et al., 2019) does not use a fine-tuned teacher, but it is only limited to students with a hidden state of 784 due to the cosine loss it uses and lacks generalization across architectures. ## 6 Ethical Issues As we distill the knowledge from an existing model (here BERT-base), our approach does not introduce any extra ethical concerns during knowledge distillation. However, if a bias is already present in the teacher model, it might get transferred to the student model (Hooker et al., 2020). This is not specific to our algorithm but is a common risk for all types of knowledge distillation. ## 7 Limitations A key limitation of our experiments is that we only consider English corpora. 
The exclusive use of English datasets is unlikely to have a substantive effect on distillation performance, and we would expect the results to transfer to other languages and datasets, however, languages with rich morphology may present modeling challenges arising from tokenization, that is, with many small word-pieces, language modeling (and its distillation) is likely to be a considerably harder task. As it stands, our work follows the standard evaluation protocols in peer benchmarks e.g., Jiao et al. (2020), Sanh et al. (2019), and Turc et al. (2019). We only use BERT-base (Devlin et al., 2018) as our teacher model and benchmark against students that use it as a teacher model. For larger teacher models such as BERT-large or GPT2 (Radford et al., 2019), the inference time as well as memory requirement would be much higher, and would necessitate larger GPU clusters. This is a consequence of the cost of the forward pass with the teacher model, rather than our distillation algorithm, which has a much lighter footprint. We argue that the result from one transformer-based pretrained language model should generalize well to other transformer-based pre-trained models. Thus our results are representative, despite our smallerscale evaluation protocol. Another shortcoming of transformer models, in general, is their scalability to long text. In this setting, model-agnostic knowledge distillation, like our technique, enjoys a distinctive advantage. We can incorporate techniques like Beltagy et al. (2020) or Xiong et al. (2021) to speed up attention in the student model enabling it to scale to long texts, even when paired with a different architecture for the teacher. Jiao et al. (2020) and Sanh et al. (2019) rely on specific model internals during distillation, and therefore the student model has to be similar to the teacher. ## References Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. *arXiv* preprint arXiv:2004.05150. Dankmar Böhning and Bruce G Lindsay. 1988. Monotonicity of quadratic-approximation algorithms. *Annals of the Institute of Statistical Mathematics*, 40(4):641–663. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Sébastien Bubeck et al. 2015. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning, 8(3-4):231–357. Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Tri Dao, Govinda M Kamath, Vasilis Syrgkanis, and Lester Mackey. 2020. Knowledge distillation as semiparametric inference. In *International Conference on* Learning Representations. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Third International Workshop on Paraphrasing (IWP2005). Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2014. Distilling the knowledge in a neural network. In NIPS 2014 Deep Learning Workshop. 
Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, and Emily Denton. 2020. Characterising bias in compressed models. *arXiv preprint* arXiv:2010.03058. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020. Lester Mackey, Vasilis Syrgkanis, and Ilias Zadik. 2018. Orthogonal machine learning: Power and limitations. In *International Conference on Machine Learning*, pages 3375–3383. PMLR. John Platt et al. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. *Advances in large margin classifiers*, 10(3):61–74. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. *OpenAI* Blog, 1(8). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. *CoRR*, abs/1910.01108. Or Sharir, Barak Peleg, and Yoav Shoham. 2020. The cost of training nlp models: A concise overview. arXiv preprint arXiv:2004.08900. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019a. Patient knowledge distillation for BERT model compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4323–4332, Hong Kong, China. Association for Computational Linguistics. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019b. Patient knowledge distillation for bert model compression. arXiv arXiv:1908.09355. Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a compact task-agnostic BERT for resource-limited devices. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 2158–2170, Online. Association for Computational Linguistics. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling taskspecific knowledge from BERT into simple neural networks. *arXiv preprint arXiv:1903.12136*. Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962. Apoorv Vyas, Angelos Katharopoulos, and François Fleuret. 2020. Fast transformers with clustered attention. *Advances in Neural Information Processing* Systems, 33:21665–21674. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. 
Association for Computational Linguistics. Fusheng Wang, Jianhao Yan, Fandong Meng, and Jie Zhou. 2021. Selective knowledge distillation for neural machine translation. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6456–6466, Online. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. 2021. Nyströmformer: A nyström-based algorithm for approximating self-attention. In *Proceedings of* the AAAI Conference on Artificial Intelligence, pages 14138–14148. Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2020. Revisiting few-sample BERT fine-tuning. *arXiv preprint arXiv:2006.05987*. Wangchunshu Zhou, Canwen Xu, and Julian McAuley. 2022. BERT learns to teach: Knowledge distillation with meta learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7037– 7049, Dublin, Ireland. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 (Limitations) ✓ A2. Did you discuss any potential risks of your work? Section 7 (Ethics) ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3, 4 ✓ B1. Did you cite the creators of artifacts you used? Section 3, 4 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We only used open-source artifacts. We will open-source our own code and models too using MIT license. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Our distilled encoder-only models for general-purpose NLP do not violate the intended use of the artifacts. B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. We only used standard anonymized datasets. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 ## C ✓ **Did You Run Computational Experiments?** Section 3, 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Only the single-run result is provided because the experiments are too computationally intensive. It will waste energy and cause unnecessary CO2 emissions. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3, 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
bang-etal-2023-task
Task-Optimized Adapters for an End-to-End Task-Oriented Dialogue System
https://aclanthology.org/2023.findings-acl.464
Task-Oriented Dialogue (TOD) systems are designed to carry out specific tasks by tracking dialogue states and generating appropriate responses that help users achieve defined goals. Recently, end-to-end dialogue models pre-trained on large datasets have shown promising performance in conversational systems. However, they share the same parameters across the sub-tasks of the dialogue system (NLU, DST, NLG), so debugging each task is challenging. They also require substantial effort to fine-tune their large parameter sets when building a task-oriented chatbot, making them difficult for non-experts to handle. We therefore aim to train models that are relatively lightweight and fast compared to full PLM fine-tuning. In this paper, we propose an End-to-end TOD system with Task-Optimized Adapters that are trained independently per task, adding only a small number of parameters after the frozen layers of the pre-trained network. We further enhance the DST and NLG modules through reinforcement learning, closing the performance gap introduced by adapter-only training and enabling natural, consistent response generation that is appropriate for the goal. Our method is model-agnostic and requires no prompt-tuning, taking only the input data without a prompt. Experimental results show that our method achieves competitive performance on the MultiWOZ benchmark compared to existing end-to-end models. In particular, we attain state-of-the-art performance on the DST task of the 2.2 dataset.
# Task-Optimized Adapters For An End-To-End Task-Oriented Dialogue System Namo Bang∗ **Jeehyun Lee**∗ Department of Artificial Intelligence, Sogang University, Korea {namo950815, jhlee22, mwkoo}@sogang.ac.kr Myoung-Wan Koo ## Abstract Task-Oriented Dialogue (TOD) systems are designed to carry out specific tasks by tracking dialogue states and generating appropriate responses to help users achieve defined goals. Recently, end-to-end dialogue models pre-trained based on large datasets have shown promising performance in the conversational system. However, they share the same parameters to train tasks of the dialogue system (NLU, DST, NLG), so debugging each task is challenging. Also, they require a lot of effort to fine-tune large parameters to create a task-oriented chatbot, making it difficult for non-experts to handle. Therefore, we intend to train relatively lightweight and fast models compared to PLM. In this paper, we propose an End-to-end TOD system with Task-Optimized Adapters which learn independently per task, adding only small number of parameters after fixed layers of pretrained network. We also enhance the performance of the DST and NLG modules through reinforcement learning, overcoming the learning curve that has lacked at the adapter learning and enabling the natural and consistent response generation that is appropriate for the goal. Our method is a model-agnostic approach and does not require prompt-tuning as only input data without a prompt. As results of the experiment, our method shows competitive performance on the MultiWOZ benchmark compared to the existing end-to-end models. In particular, we attain state-of-the-art performance on the DST task of 2.2 dataset.1 ## 1 Introduction Task-oriented dialogue systems are trained to achieve specific goal to enhance efficiency and convenience in various fields such as customer service centers and healthcare information retrieval. Task-oriented dialogue systems are divided into key components: understanding the user's intent (NLU), tracking the current dialogue states (DST), and generating responses based on previous sessions (NLG). Pipeline-based systems separately train each component, so they have the advantage of optimizing each module and raising the performance of a given task. User feedback is, however, difficult to propagate to each module, and inputs to the component is dependent on the result of the previous module (Chen et al., 2017). Recently, dialogue systems have been trained in an end-toend manner with transfer learning or pre-training networks with large dialogue corpora. However, building efficient end-to-end TOD systems requires a large amount of data and has some limitations due to parameter sharing. End-to-end models backpropagate to transfer the gradients of the output and end back to the entire neural network. They pose an issue of parameter efficiency as updating all parameters for every downstream scenario. Also, it is challenging to debug each task and take task-flow characteristics into account. Therefore, we propose a simple structure of adding adapters to the core modules (NLU, DST, NLG) of the TOD system as shown in Figure 1. By using the adapters, it is possible to optimize each task with only a small amount of training parameters, remaining the pre-trained model's parameters fixed. Additionally, it is safe from the catastrophic forgetting problem (French, 1999), which causes pre-trained models to lose important skills acquired during the pre-training process. 
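To make the idea concrete, a bottleneck adapter of the kind used here (down-projection, ReLU, up-projection, residual connection, and layer normalization; see Section 3.1) can be sketched as follows, with the backbone kept frozen and only the per-task adapters trained. In the paper the adapter sits after the feed-forward sub-layer inside each transformer block; for brevity this sketch applies it to the output of a stand-in backbone layer, and the hidden sizes and plain-PyTorch wiring are illustrative assumptions rather than the released TOATOD code.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: LN(W_up * ReLU(W_down * H) + H), as described in Section 3.1."""
    def __init__(self, d_model: int, bottleneck: int):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.norm(self.up(torch.relu(self.down(h))) + h)  # residual connection

# One adapter per task, sharing a frozen pre-trained backbone (a stand-in layer here).
backbone = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
for p in backbone.parameters():
    p.requires_grad = False                      # original PLM parameters stay fixed

adapters = nn.ModuleDict({task: Adapter(512, 64) for task in ("nlu", "dst", "nlg")})

x = torch.randn(2, 10, 512)                      # (batch, sequence, hidden)
out = adapters["dst"](backbone(x))               # only the DST adapter receives gradients

trainable = sum(p.numel() for p in adapters["dst"].parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable per task: {trainable:,} of {total:,}")
```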
The key is to apply a transfer learning strategy that yields compact and extensible downstream models in the dialogue system (Houlsby et al., 2019). This makes it easy for people to train large-scale end-to-end TOD models. Also, by applying REINFORCE (Sutton et al., 1999), we attempt to reduce the expected score gap caused by the small parameters of adapter compared to the full fine-tuning. Specifically, we use ![1_image_0.png](1_image_0.png) Joint Goal Accuracy, and weighted sum of BLEU and Success rate as rewards for training DST and NLG adapter. To best of our knowledge, this is first work that use Joint Goal Accuracy as a reward for E2E TOD system. To address the aforementioned problems, we propose a Task-Optimized Adapter for an end-toend Task-Oriented Dialogue system (**TOATOD**) applying reinforcement learning to DST and NLG tasks. In summary, our key contributions are as follows: - We present a new architecture that can debug per task of the end-to-end model using the separated adapters. - Without updating the original parameters of PLM, we train end-to-end TOD system efficiently with a few trainable parameters. - It is a novel approach to design a reward function not just for NLG, but also for DST task with metric-aware reinforcement learning. - The performance of the proposed approach outperforms on the DST task of MultiWOZ 2.2 and shows comparable results to full finetuning on the NLU and NLG tasks. ## 2 Background Pipeline-based Task-Oriented Dialogue System Conventional task-oriented dialogue systems usually based on the pipeline method, consisted of language understanding (NLU), dialogue state tracking (DST), policy learning (POL), and language generation (NLG). This kind of modularization allows for each component to be optimized independently, making it easier to update and understand how the model is working. Pipeline-based systems, however, have several limitations. Each of the modules train sequentially, so it is hard to align modules to the common optimization targets (Liu and Lane, 2018). This makes the system more complex and harder to backpropagate cumulated errors. The performance of the previous components affects the next modules, so if upper modules perform poorly, errors that occurred earlier may propagate and be amplified in downstream components (Liu and Lane, 2018). End-to-end Task-Oriented Dialogue System Endto-end task-oriented dialogue systems, on the other hand, are easier to optimize and train to directly map the input to output in a single model. They can leverage large amounts of data for robust learning and the entire system can be optimized under end-to-end settings, which leads to better performance. A general approach for building end-to-end systems is to fine-tune pre-trained language models (Budzianowski and Vulic´, 2019). This approach utilizes the strength of pre-trained networks, which help the models to leverage the pre-trained knowledge while also adapting to task-specific data. For example, SimpleTOD (Hosseini-Asl et al., 2020) solved task-oriented dialogue as causal language modeling task using several versions of GPT (Radford and Narasimhan, 2018; Radford et al.). Pre-training of Dialogue Language Model Recently, methods with pre-training dialogue language model (Wu et al., 2020; Zhang et al., 2020b; Peng et al., 2021; Su et al., 2022; He et al., 2022a), instead of fine-tuning pre-trained networks have outperformed the previous baselines on the benchmark. 
For instance, SPACE-3 (He et al., 2022a) captures the contextualized knowledge from largescale dialogue corpora by pre-training the unified language model. However, there are still some issues with these methods. A large amount of param- ![2_image_0.png](2_image_0.png) eters is required for training backbone models like BERT, T5 (Raffel et al., 2020), GPT, and UniLM (Dong et al., 2019). As shown in Table 1, T5*base* and T5*small* require the trainable parameters over 220M and 60M respectively. And they disregard the task-flow features of task-oriented dialogue systems. Also, it's still hard to debug per module because the model parameters are shared and jointly optimized. PPTOD (Su et al., 2022) integrated modules into a unified model with task-specific prompts and alleviated the error accumulation in plug-and-play way. Still, this method is not completely free from the interference among tasks due to fully shared parameters. Adapter tuning for NLP Since the pre-trained models for NLP tasks have become mainstream, they are mainly used for transfer learning downstream tasks. However, a parameter efficiency issue has been raised because updating all PLM parameters is expensive for every downstream scenario. To address the issue, the adapter module is proposed to transfer PLM like BERT with parameterefficient tuning (Houlsby et al., 2019) and shows comparable performance to full fine-tuning. The main idea behind the adapter is to train the network to the downstream task with task-specific parameters while maintaining the original pre-trained parameters. The module is composed of two feedforward layers and a non-linear layer, which can be inserted into the transformer blocks of an end-toend model. It projects d-dimensional input features into a smaller dimension m, and then projects back into the original dimension, so the total of parameters add per layer with biases is 2md + d + m. The number of additional parameters per task can be restricted by setting *m < d* (Li et al., 2021). In most of the previous research, the adapter was used for efficient learning. But in this approach, we adapt the adapter modules not just for efficient learning (Stickland and Murray, 2019), but also task-optimized learning as recent studies that have mainly focused on separating parameters (Lin et al., 2021; Feng et al., 2022; Bapna and Firat, 2019). Reinforcement learning for Text Generation Although token-based supervised learning is a widely adopted training method in text generation tasks, as highlighted by Ranzato et al., 2016, there are two major problems associated with this approach. The first problem is the exposure bias problem, where during training, the model is exposed to the ground-truth outputs, thus allowing it to learn to generate text that is similar to the training data. However, during evaluation, the model is not exposed to the ground-truth outputs and instead, generates texts based on the previous words generated by the model itself. This can lead to errors and a significant deviation in the generated text from the training data. The second problem is the tokenlevel loss problem, where the training objective function is based solely on individual words' prediction, neglecting the predicted token sequence's overall coherence and fluency. 
To address these problems, researchers have applied reinforcement learning to text generation tasks (Ranzato et al., 2016; Li et al., 2016; Paulus et al., 2018; Chen et al., 2020; Wang et al., 2021; Ye et al., 2022) using metrics such as BLEU (Papineni et al., 2002) or ROUGE (Lin, 2004) as rewards for sequence-level training. In our study, we also use the REINFORCE method and task-oriented dialogue metrics to continually train our model after supervised learning, not only to address these challenges but also to mitigate performance degradation caused by the use of an Adapter. ## 3 Methodology 3.1 Adapter For Each Task (Nlu, Dst, Nlg) The Adapter module is designed to adapt the pretrained network for each of the tasks (NLU, DST, | Pre-trained | Trainable per task | | |---------------------|---------------------------------------|------------| | Model | T5base T5small TOATODbase TOATODsmall | | | # of Prams 220M 60M | 36M (14%) | 7.9M (12%) | Table 1: This table shows the size of the pre-trained T5 model (frozen shared parameters) and trainable Adapter per task in our models. We do not experiment with a large model, because the trainable parameter size of TOATOD*large* is bigger than T5*base*. NLG) in a task-oriented dialogue system. As illustrated in Figure 2, the adapters are inserted after the feed-forward layer following the multi-head attention layer of the transformer blocks of the end-to-end model. It enables the model to learn task-specific representations while preserving the shared parameters learned during pre-training. As described in Table 1, our model consists of largesize frozen shared parameters per task and small size of trainable parameters that account for about 14% of the entire network. While original network's parameters are frozen, the j th adapter of task i ∈ {NLU, DST, NLG}, Aij computes as below: $$A_{i j}=L N\;(W_{u p}\;*\;R e L U\;\left(W_{d o w n}\;*\;H_{j}\right)+\;H_{j})$$ The output of the j th feed-forward layer with residual connection in the transformer block is represented as Hj ∈ R n×d, where n is the input dimension, and d is the hidden dimension. As shown in Figure 2, the overall architecture of the adapter module includes multiple feed-forward projections, referred to as down-projection and up-projection, followed by layer normalization (LN). The downprojection with W*down* ∈ R d×h projects the input Hj , which is passed by the ReLU activation function, and then the up-projection with Wup ∈ R h×d projects the output back to the original dimension. The bottleneck dimension h is a hyperparameter to project the original input to a smaller dimension. And each adapter has a residual connection to avoid vanishing gradient (Rebuffi et al., 2017; He et al., 2016). ## 3.2 Metric-Aware Reinforcement Learning For Dst & Nlg Module The overall loss function for the metric-aware reinforcement learning is given by the equation (2): $$J\left(\theta\right)=\alpha\times J_{p o l i c y}\left(\theta\right)+\left(1-\alpha\right)\times C E\left(y,{\hat{y}}\right)\quad\mathrm{(2)}$$ The equation describes how the network updates the parameters in order to maximize both the likelihood of the generated response (token loss) and the quality of the response (policy loss). The token loss can be described as a categorical cross-entropy loss. yˆ denotes the predicted probability of ground truth and y is the target probability. 
The token loss $\mathrm{CE}(y,\hat{y})$ measures how well the predicted token probabilities match the target tokens for a given dialogue context during reinforcement learning. By applying the REINFORCE method, the network can update its weights in the direction that yields higher rewards, even when the reward function is non-differentiable.

$$J_{policy}(\theta)=-\log P(\hat{y})\times\mathrm{Reward}(y,\hat{y})\qquad(3)$$

The policy loss $J_{policy}(\theta)$ is introduced to measure how well the model can generate a token sequence with high probabilities that results in high rewards. $\hat{y}$ denotes the token sequence predicted by the model. The policy loss in equation (3) is calculated as the negative log probability of the highest-probability token sequence multiplied by the reward. The parameter α in the overall loss (2) is a hyperparameter, a scalar value between 0 and 1 that weighs the relative importance of the two losses. In this way, the model is trained both to predict the correct labels (categorical cross-entropy loss) and to make decisions that result in high rewards (policy loss). We define the reward functions of the DST and NLG modules as follows:

$$\mathrm{Reward}_{DST}=\mathrm{JGA}(y,\hat{y})+1\qquad(4)$$

$\mathrm{Reward}_{DST}$ is calculated as the sum of the Joint Goal Accuracy (JGA) and a constant value of 1. JGA measures how well the model predicts the values for every slot across the dialogue turns. Using JGA as a reward, the model is encouraged to accurately track the state of the dialogue, which is crucial for generating appropriate responses and improving the performance of task-oriented dialogue systems.

$$\mathrm{Reward}_{NLG}=(1-\beta)\times\mathbb{E}[\mathrm{BLEU}(y_{u},\hat{y}_{u})]+\beta\times\mathrm{Success}(y,\hat{y})+1\qquad(5)$$

In equation (5), the BLEU score and the Success rate are used as rewards to guide the learning process of the NLG module. $y_u$ denotes the ground-truth token sequence of each utterance, and $\hat{y}_u$ denotes the predicted token sequence. To calculate the Success rate, we form batches at the session level rather than the utterance level; y and $\hat{y}$ without the subscript u denote the session-level ground truth and prediction. The weighting factor β is adjusted to balance these two metrics. β has to be chosen carefully because there is a trade-off between the two metrics in the RL setting (Wu et al., 2021), where increasing one metric may come at the cost of decreasing the other. We describe our choices of α and β in Sections 6.1.3 and 6.1.4.

## 4 Experimental Setup

## 4.1 Datasets

We evaluate our method on the dialogue state tracking (DST) and end-to-end response generation (NLG) tasks using the MultiWOZ 2.1 and 2.2 datasets, and on the intent prediction (NLU) task using the Banking77, CLINC150, and HWU64 datasets.

MultiWOZ 2.1, 2.2 The MultiWOZ dataset (Budzianowski et al., 2018) has been widely used to evaluate the performance of TOD systems. It consists of 8,438, 1,000, and 1,000 multi-turn dialogues for the training, dev, and test sets, collected through a Wizard-of-Oz (WOZ) setup. The dialogues cover a wide range of domains and topics. MultiWOZ 2.2 (Zang et al., 2020) is an improved version of MultiWOZ 2.1 (Eric et al., 2020) that corrects annotation errors, inconsistencies, and ontology issues, and adds span annotations for standardization.

Banking77 (Casanueva et al., 2020) This dataset is a collection of real-life customer banking service queries covering 77 intents. It consists of 13,083 utterances.
Each query is labeled with a single intent, however, it is hard to differentiate because they correspond to very similar tasks. CLINC150 (Larson et al., 2019) This dataset is multi-domain dataset which contains 23,700 utterances that cover 150 intent classes over 10 domains. HWU64 (Liu et al., 2019) This dataset consists of 25,716 examples. It maps user utterances to structured, but mode abstract. The data provides annotation with the 64 intents from 21 different domains. | Model | Backbone Model (Trainable Prams) | MultiWOZ 2.1 MultiWOZ 2.2 | | |--------------------|------------------------------------|-----------------------------|--------| | TRADE | - | 46.0 | 45.4 | | DS-DST | BERTbase (110M) | 51.2 | 51.7 | | DST-Picklist | BERTbase (110M) | 53.3 | | | TripPy | BERTbase (110M) | 55.3 | | | ConvBERT +DG+Multi | BERTbase (110M) | 58.7 | | | Trippy +SaCLog | BERTbase (110M) | 60.61 | | | SimpleTOD | DistilGPT-2 (82M) | 56.45 | | | UniLM | UniLM (340M) | 54.25* | 54.25* | | AG-DST | PLATO-2 (310M) | 57.26 | 57.26 | | SPACE-3 | UniLM (340M) | 57.50 | 57.50 | | D3STbase | T5base (220M) | 54.2 | 56.1 | | D3STlarge | T5large (770M) | 54.5 | 54.2 | | SDP-DST | T5base (220M) | 56.66 | 57.60 | | PPTODbase | T5base (220M) | 57.10 | | | PPTODlarge | T5large (770M) | 57.45 | | | D3STXXL | T5XXL (11B) | 57.80 | 58.7 | | TOATODsmall | T5small (7.9M) | 53.02 | 61.92 | | TOATODbase | T5base (36M) | 54.97 | 63.79 | ## 4.2 Baselines & Settings For the DST task, we compare our models with other strong baselines including TRADE (Wu et al., 2019), DS-DST (Zhang et al., 2020a), DST-Picklist (Zhang et al., 2020a), TripPy (Heck et al., 2020), ConvBERT+DG+Multi (Mehri et al., 2020a), TripPy+SaCLog (Dai et al., 2021), SimpleTOD (Hosseini-Asl et al., 2020), AG-DST (Tian et al., 2021), UniLM, SPACE-3, D3ST (Zhao et al., 2022), SDP-DST (Lee et al., 2021), and PPTOD. On the NLG task, we choose models trained on PLM in an end-to-end setting such as DoTS (Jeon and Lee, 2021), PPTOD, UBAR (Yang et al., 2021), MTTOD (Lee, 2021), RSTOD (Cholakov and Kolev, 2022), GALAXY (He et al., 2022b), MinTL (Lin et al., 2020), SOLOLIST, BORT (Sun et al., 2022a) and Mars (Sun et al., 2022b). For the NLU task, we compare TOATOD with existing baselines of each dataset. We use T5*small* and T5*base* as backbone models initialized with PPTOD (Su et al., 2022)'s pretrained weights, which are trained on a large dialogue dataset. 
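To make the adapter setup of Section 3.1 concrete, the sketch below shows a minimal PyTorch implementation of the bottleneck adapter from equation (1), together with freezing of the pre-trained backbone so that only the adapters remain trainable. This is an illustrative re-implementation based solely on the description in this paper: the class name `TaskAdapter`, the use of the public `t5-base` checkpoint instead of the PPTOD-initialized weights, and the way the adapters are collected are our own simplifications, and wiring the adapters into the T5 forward pass is omitted.

```python
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration

class TaskAdapter(nn.Module):
    """Bottleneck adapter: A_ij = LN(W_up * ReLU(W_down * H_j) + H_j)."""

    def __init__(self, d_model: int, bottleneck: int):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)  # W_down: d -> h
        self.up = nn.Linear(bottleneck, d_model)    # W_up:   h -> d
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Residual connection around the bottleneck avoids vanishing gradients.
        return self.norm(self.up(torch.relu(self.down(hidden))) + hidden)

# Backbone (the paper initializes from PPTOD weights; plain T5 is used here).
backbone = T5ForConditionalGeneration.from_pretrained("t5-base")
for p in backbone.parameters():          # freeze all shared parameters
    p.requires_grad = False

# One adapter per transformer block, with bottleneck h = d/2 as in the paper.
d = backbone.config.d_model
n_blocks = backbone.config.num_layers + backbone.config.num_decoder_layers
adapters = nn.ModuleList(TaskAdapter(d, d // 2) for _ in range(n_blocks))

trainable = sum(p.numel() for p in adapters.parameters())
frozen = sum(p.numel() for p in backbone.parameters())
print(f"frozen: {frozen/1e6:.0f}M  trainable adapters: {trainable/1e6:.1f}M")
```

Note that the exact trainable-parameter counts in Table 1 depend on where and how many adapters are inserted, which this sketch does not reproduce exactly.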
As described in the Appendix A, the bottleneck dimension h of the adapter is 1/2 of | Model | Backbone Model (Trainable Params) | MultiWOZ 2.1 | MultiWOZ 2.2 | | | | | | | |-------------|-------------------------------------|----------------|-----------------|---------|---------|----------|-------|-------|--------| | Inform | Success | BLEU | Combined Inform | Success | BLEU | Combined | | | | | DoTS | BERTbase (110M) | 86.65 | 74.18 | 15.90 | 96.32 | - | - | - | - | | PPTOD | T5base (220M) | 87.09 | 79.08 | 19.17 | 102.26 | - | - | - | - | | UBAR | GPT2 (1.5B) | 95.70 | 81.80 | 16.50 | 104.94 | 83.4 | 70.3 | 17.6 | 94.4 | | MTTOD | T5base (360.9M)* | 90.99 | 82.08 | 19.68 | 106.22 | 85.9 | 76.5 | 19.0 | 100.2 | | RSTOD | T5small (105.5M)* | 93.50* | 84.70* | 19.24* | 108.34* | 83.5 | 75.0 | 18.0 | 97.3 | | GALAXY | UniLM (340M) | 95.30 | 86.20 | 20.01 | 110.76 | 85.4 | 75.7 | 19.64 | 100.2 | | MinTL | BARTlarge (440M) | - | - | - | - | 73.7 | 65.4 | 19.4 | 89.0 | | SOLOIST | GPT2 (1.5B) | - | - | - | - | 82.3 | 72.4 | 13.6 | 90.9 | | BORT | T5small (60M) | - | - | - | - | 85.5 | 77.4 | 17.9 | 99.4 | | Mars | T5small (60M) | - | - | - | - | 88.9 | 78.0 | 19.9 | 103.4 | | TOATODsmall | T5small (7.9M) | 92.10 | 80.40 | 18.29 | 104.54 | 85.80 | 74.00 | 18.00 | 97.90 | | TOATODbase | T5base (36M) | 97.00 | 87.40 | 17.12 | 109.32 | 90.00 | 79.80 | 17.04 | 101.94 | the hidden dimension of the T5 model. We use the Adafactor (Shazeer and Stern, 2018) optimizer with 15 epochs and set batch size as 16, learning rate of 1e-4 during supervised learning of DST and NLG tasks. We sweep a wide range of learning rates: {1e-5, 1e-6, 1e-7}. For reinforcement learning, we train models 10 epochs for DST and 3 epochs for NLG. We do not train NLG-optimized adapters for the MultiWOZ 2.2, because there are not significant changes of response annotations from MultiWOZ 2.1 and intend to train robust models with noised dataset. We follow the preprocessing method from UBAR to delexicalize slot values for each system responses. We evaluate our models using the older version of the standardized evaluation script for MultiWOZ 2.1, and the newly opened version for MultiWOZ 2.2, released by Nekvinda and Dusek, 2021. It has been adopted by the official MultiWoZ dataset github 2. Other implementation details are described in Appendix C. ## 5 Experimental Results 5.1 Dialogue State Tracking We evaluate our models on the DST task with the MultiWOZ 2.1 & 2.2 datasets. We compute Joint Goal Accuracy on the test set, which measures how many values are filled accurately compared to the ground truth states for all slots. Joint Goal Accuracy is considered as more difficult and important metric in most research (Zhou and Small, 2019; Dey et al., 2022), because once wrong prediction has been made, it cannot get the score at that turn. ## 5.1.1 Evaluation Result We compare our best models, TOATOD*small* and TOATOD*base* to the models trained with a pretrained network. Table 2 shows that our models are competitive to the end-to-end models on the current benchmark. In the 2.1 dataset, our models show a relatively good performance, despite the small number of trainable parameters. As shown in the Table 5, the Joint Goal Accuracy of TOATOD*base* only with task-optimized adapter (SL) is 53.33, which is slightly lower than other models using T5*base* as backbone. So, we reduce the performance degradation applying metric-aware REINFORCE and report the final results. 
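As a brief aside, the Joint Goal Accuracy used throughout this section can be sketched as follows. This is a simplified exact-match version for illustration only, not the official MultiWOZ evaluation script, and it performs no value normalization.

```python
def joint_goal_accuracy(pred_states, gold_states):
    """Fraction of turns whose predicted belief state matches the gold state
    for *all* slots; a single wrong slot makes the whole turn incorrect."""
    correct = 0
    for pred, gold in zip(pred_states, gold_states):
        if pred == gold:  # dicts of {domain-slot: value}
            correct += 1
    return correct / len(gold_states)

# Example: the second turn misses the hotel-area slot, so JGA = 1/2.
gold = [{"hotel-area": "north", "hotel-stars": "4"},
        {"hotel-area": "north", "hotel-stars": "4", "hotel-parking": "yes"}]
pred = [{"hotel-area": "north", "hotel-stars": "4"},
        {"hotel-area": "centre", "hotel-stars": "4", "hotel-parking": "yes"}]
print(joint_goal_accuracy(pred, gold))  # 0.5
```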
Trainable parameters of our models are less than 1/2 of the models with the smallest parameters. The result implies that the adapter module helps the network more adaptable to the DST task by only activating a few parts of the model. Because of relatively small parameters, our model is more robust to overfitting problem with confused labels. As mentioned in Section 4.1, MultiWOZ 2.2 is the cleaned version of 2.1 dataset, so the performance is better in the 2.2. Among the top results on the 2.2 dataset, our models obtain state-of-the-art performance. It demonstrates that TOATOD optimizes well on the given task remaining the prior knowledge learned from the pre-trained network. 2https://github.com/budzianowski/multiwoz ## 5.2 End-To-End Response Generation | Model | Banking77 CLINC150 HWU64 | |---------|----------------------------| We test our methods with end-to-end response generation (NLG) task on the MultiWOZ 2.1 & 2.2 as in DST evaluation. Four metrics are used to measure the quality of generated responses. We measure if the system provides the appropriate entity (Inform rate), answers all the requested information (Success rate), and responds fluently (BLEU score). And the Combined Score for end-to-end response generation is computed as 'BLEU+0.5 × (Inform+Success)'. Under the end-to-end settings, the models have to predict proper dialogue states and then generate responses based on the states. ## 5.2.1 Evaluation Result From Table 3, our models achieve comparable results (1.x point different from SOTA model) in all datasets. The adapter module helps each part of the model be fine-tuned independently, therefore we can optimize the DST and NLG task respectively. Our model performs well on NLG task based on the belief states from DST modules and base knowledge gained from the large-scale dialog dataset during pre-training. TOATOD*base* attains the best score on the Inform and Success rate. It shows that the reinforcement learning of our approach is effective for adjusting trade-off problem between BLEU score and others. ## 5.3 Intent Classification BERT-FIXED 87.19' 91.79' 85.77' BERT-DG 91.75* 95.98* 90.89* cist-dial (mslm) 91.17* 95.80* 91.36* USE 92.81' 95.06' 91.25' USE+CONVERT 93.36' 97.16' 92.62' ConvBERT+Pre+Multi 93.44* 92.38* **97.11*** SPACE 2.0 **94.77*** 97.80* 94.33* TOATOD*small* 92.40 **98.45** 90.42 TOATOD*base* 92.17 98.01 90.79 Table 4: Accuracy score (%) on all three NLU task dataset with full training. The values with ' are from banking77 paper (Casanueva et al., 2020), and * are from the leaderboard for DialoGLUE paper (Mehri et al., 2020b) and benchmark 3. We test our models on the NLU task with Banking77, CLINC150, and HWU64. Intent prediction is the task to identify the intent behind a given in-3https://eval.ai/web/challenges/ challenge-page/708/leaderboard/1943 put. The task is normally framed as a classification problem, so we set the metric as turn accuracy. ## 5.3.1 Evaluation Result From Table 4, while our models do not achieve the highest score on NLU task, it is important to note that they perform well in relation to the size when compared to other models. This highlights the effectiveness of our task-optimized adapter approach in achieving a balance between model performance and efficiency. ## 6 Further Analysis And Discussion 6.1 Reinforcement Learning 6.1.1 W/O Reinforcement Learning Table 5: Task performance of TOATOD*base* before and after applying REINFORCE. SL means supervised learning and RL means reinforcement learning. 
The test results of the TOATOD*small* is attached to Appendix B. | Task | DST | NLG | | |----------------------|-------|--------------------------------------|--------------| | Metrics | JGA | Slot F1 Inform Success BLEU Combined | | | SL (2.2) 62.92 93.72 | 85.30 | 77.00 | 18.44 99.59 | | RL (2.2) 63.79 93.96 | 90.00 | 79.80 | 17.04 101.94 | | SL (2.1) 53.33 91.68 | 88.90 | 81.40 | 18.73 103.88 | | RL (2.1) 54.97 92.01 | 97.00 | 87.40 | 17.12 109.32 | Described on the Table 5, after applying reinforcement learning, the performance of DST and NLG modules are enhanced. Models with reinforcement learning obtain best JGA on the DST task and get the highest Combined Score on the NLG task. The BLEU scores fall slightly but the Combined Score rise with Inform and Success, which means that incorporating reinforcement learning into the training process leads our model to complete tasks more efficiently. We conduct hyperparameter tuning only for the TOATOD*base*, so there is some gap in the degree of performance improvement between TOATOD*small* and TOATOD*base*. ## 6.1.2 Hyperparameters Of Reinforce The hyperparameter α is used for balancing the importance between cross-entropy loss and policy loss. We re-train DST and NLG modules by optimizing the mixed objective function to get higher rewards. The second hyperparameter β is a scaling factor to control the trade-off between BLEU score $${\mathrm{board}}/1941$$ and Success rate for NLG module. The BLEU score is calculated based on the number of matching n-grams between the generated text and the reference text. A higher score indicates that the generated text is more similar to the label text. Success rate measures the proportion of dialogues, where the model successfully completes the task. A higher success rate means the model is better at achieving the user's goal of the dialogue. In some cases, however, the model may generate text that is close to the reference text (high BLEU score) but not relevant to the current dialogue (low Success rate). So, we aim to reduce the gap between two metrics via hyperparameter tuning. ## 6.1.3 Α **Of Dst-Optimized Adapter** In the Table 7, the result of α experiment on DST task indicates that 1.0 yielded the best performance, suggesting the significance of the policy loss in the overall loss function. By maximizing policy loss, the model is encouraged to make decisions that result in high rewards, which improves the performance of the DST task. ## 6.1.4 Α And Β **Of Nlg-Optimized Adapter** We experiment with several combinations of α and β within {α: 0.3, 0.5, 0.7, 0.9, 1.0 / β: 0.4, 0.5, 0.6, 0.7}. Regardless of hyperparameters, performances improve after applying the reinforcement learning. The hyperparameters of α=0.5 and β=0.7, result the best Combined Score, which is 5.44 point higher than the performance of the supervised learning. 
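To summarize how α and β enter the training objective discussed above, the snippet below restates equations (2)-(5) as plain Python. The reward inputs (JGA, BLEU, Success) are assumed to be computed elsewhere, and the function names are ours rather than the authors'.

```python
import torch

def policy_loss(seq_log_prob: torch.Tensor, reward: float) -> torch.Tensor:
    """Eq. (3): J_policy = -log P(y_hat) * Reward(y, y_hat)."""
    return -seq_log_prob * reward

def reward_dst(jga: float) -> float:
    """Eq. (4): Reward_DST = JGA + 1."""
    return jga + 1.0

def reward_nlg(bleu: float, success: float, beta: float) -> float:
    """Eq. (5): Reward_NLG = (1 - beta) * BLEU + beta * Success + 1."""
    return (1.0 - beta) * bleu + beta * success + 1.0

def overall_loss(seq_log_prob, token_ce, reward, alpha: float):
    """Eq. (2): J = alpha * J_policy + (1 - alpha) * CE."""
    return alpha * policy_loss(seq_log_prob, reward) + (1.0 - alpha) * token_ce

# DST setting (best alpha = 1.0): only the policy term contributes.
log_p = torch.tensor(-12.3, requires_grad=True)   # sequence log-probability
loss = overall_loss(log_p, token_ce=torch.tensor(2.1),
                    reward=reward_dst(jga=0.55), alpha=1.0)
loss.backward()  # gradient flows through the differentiable log-probability

# NLG setting with the best-found alpha = 0.5, beta = 0.7.
nlg_reward = reward_nlg(bleu=0.17, success=0.87, beta=0.7)
```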
As shown in the Figure 3, on the experiment α 1.0 0.7 0.9 1.0 1.0 0.3 0.5 0.7 1.0 β 0.4 0.5 0.5 0.5 0.6 0.7 0.7 0.7 0.7 Inform **97.50** 97.40 **97.50 97.50 97.50** 93.40 97.00 97.30 **97.50** Success 85.70 86.40 85.90 85.70 85.80 85.40 **87.40** 87.10 86.10 BLEU 16.01 16.35 16.07 16.03 16.04 **18.36** 17.12 16.52 16.10 Combined 107.67 108.25 107.77 107.63 107.69 107.76 **109.32** 108.72 107.90 | α | 1.0 | 0.9 | 0.7 | |---------|-------|-------|-------| | JGA | 54.97 | 54.96 | 54.76 | | Slot F1 | 92.01 | 92.02 | 91.95 | ![7_image_0.png](7_image_0.png) after removing impact of CE loss (α=1), we found that the higher the β, success rate increases compared to BLEU score as we expected. As shown in Table 6, when the β is fixed, the bigger weight (smaller α) on the CE loss ensures the higher BLEU score. On the other hand, the success rate started to decline from the certain point. Trade-off issue appears at this point. Therefore, α and β need to be properly tuned in NLG task for good performance. ## 7 Conclusion We propose TOATOD, task-optimized adapters for an end-to-end task dialogue system. By adapting task-optimized adapters, we utilize the end-to-end models without updating the pre-trained parameters and enabling debugging per task, which is different from previous research. In addition, we apply REINFORCE algorithm with metric-aware reward function directly not only on the NLG task but also DST task to prevent score degradation. As a result, we attain comparable performance to the previous SOTA models on every benchmark with very small number of trainable parameters. For the DST task of MultiWOZ2.2, our TOATOD model outperforms the current SOTA systems. ## Limitations We train the task-optimized adapters based on the pre-trained weights of dialogue LM. Therefore, if applied to other dialogue tasks such as chit-chat and conversational QA system, the performance could be lower than that shown in our research. And we need future works to clarify the reason why the performance was better on the MultiWOZ 2.2 dataset, which is expected that our model does not overfit to the confused labels. Our model inferences in on the end-to-end manner, but trains like modular system for each task. End-to-end learning is currently under study. We could adapt the multitask end-to-end learning to our method, which may lead to the better performance. Also, we could analyze the inner working of task-optimized adapters applying XAI technologies. ## Ethics Statement We honor the ethical codes set out in the ACL code of Ethics. All of the datasets used in our study are from previous studies and do not have privacy issues. ## Acknowledgements This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.2022-0-00621,Development of artificial intelli- gence technology that provides dialog-based multi-modal explainability) ## References Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538– 1548, Hong Kong, China. Association for Computational Linguistics. Paweł Budzianowski and Ivan Vulic. 2019. ´ Hello, it's GPT-2 - how can I help you? towards the use of pretrained language models for task-oriented dialogue systems. 
In *Proceedings of the 3rd Workshop on Neural Generation and Translation*, pages 15–22, Hong Kong. Association for Computational Linguistics. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašic. 2018. ´ MultiWOZ - a largescale multi-domain Wizard-of-Oz dataset for task- oriented dialogue modelling. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. Iñigo Casanueva, Tadas Temcinas, Daniela Gerz, Matthew Henderson, and Ivan Vulic. 2020. Efficient intent detection with dual sentence encoders. *CoRR*, abs/2003.04807. Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems. ACM SIGKDD Explorations Newsletter, 19. Yu Chen, Lingfei Wu, and Mohammed J. Zaki. 2020. Reinforcement learning based graph-to-sequence model for natural question generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. Radostin Cholakov and Todor Kolev. 2022. Efficient task-oriented dialogue systems with response selection as an auxiliary task. *CoRR*, abs/2208.07097. Yinpei Dai, Hangyu Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, and Xiaodan Zhu. 2021. Preview, attend and review: Schema-aware curriculum learning for multi-domain dialogue state tracking. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 879–885. Association for Computational Linguistics. Suvodip Dey, Ramamohan Kummara, and Maunendra Desarkar. 2022. Towards fair evaluation of dialogue state tracking by flexible incorporation of turn-level performances. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 318–324, Dublin, Ireland. Association for Computational Linguistics. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural* Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 13042–13054. Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj Goyal, Peter Ku, and Dilek Hakkani-Tur. 2020. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 422–428, Marseille, France. European Language Resources Association. Shaoxiong Feng, Xuancheng Ren, Kan Li, and Xu Sun. 2022. Hierarchical inductive transfer for continual dialogue learning. In Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 693–699. Association for Computational Linguistics. Robert French. 1999. Catastrophic forgetting in connectionist networks. *Trends in cognitive sciences*, 3:128–135. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision* and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. 
IEEE Computer Society. Wanwei He, Yinpei Dai, Min Yang, Jian Sun, Fei Huang, Luo Si, and Yongbin Li. 2022a. Unified dialog model pre-training for task-oriented dialog understanding and generation. In *Proceedings of the 45th International ACM SIGIR Conference on Research and* Development in Information Retrieval, SIGIR '22, page 187–200, New York, NY, USA. Association for Computing Machinery. Wanwei He, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, Min Yang, Fei Huang, Luo Si, Jian Sun, and Yongbin Li. 2022b. GALAXY: A generative pre-trained model for taskoriented dialog with semi-supervised learning and explicit policy injection. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, ThirtyFourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 10749–10757. AAAI Press. Michael Heck, Carel van Niekerk, Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Marco Moresi, and Milica Gasic. 2020. TripPy: A triple copy strategy for value independent neural dialog state tracking. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 35–44, 1st virtual meeting. Association for Computational Linguistics. Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings* of Machine Learning Research, pages 2790–2799. PMLR. Hyunmin Jeon and Gary Geunbae Lee. 2021. Domain state tracking for a simplified dialogue system. CoRR, abs/2103.06648. Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-ofscope prediction. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 1311–1316, Hong Kong, China. Association for Computational Linguistics. Chia-Hsuan Lee, Hao Cheng, and Mari Ostendorf. 2021. Dialogue state tracking with a language model using schema-driven prompting. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 4937–4949. Association for Computational Linguistics. Yohan Lee. 2021. Improving end-to-end task-oriented dialog system with a simple auxiliary task. In *Conference on Empirical Methods in Natural Language* Processing. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1192– 1202, Austin, Texas. 
Association for Computational Linguistics. Junyi Li, Tianyi Tang, Wayne Xin Zhao, and Ji-Rong Wen. 2021. Pretrained language models for text generation: A survey. *CoRR*, abs/2105.10311. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Zhaojiang Lin, Andrea Madotto, Yejin Bang, and Pascale Fung. 2021. The adapter-bot: All-in-one controllable conversational model. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 16081–16083. AAAI Press. Zhaojiang Lin, Andrea Madotto, Genta Indra Winata, and Pascale Fung. 2020. Mintl: Minimalist transfer learning for task-oriented dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 3391– 3405. Association for Computational Linguistics. Bing Liu and Ian Lane. 2018. End-to-end learning of task-oriented dialogs. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 67–73, New Orleans, Louisiana, USA. Association for Computational Linguistics. Xingkun Liu, Arash Eshghi, Pawel Swietojanski, and Verena Rieser. 2019. Benchmarking natural language understanding services for building conversational agents. In Increasing Naturalness and Flexibility in Spoken Dialogue Interaction - 10th International Workshop on Spoken Dialogue Systems, IWSDS 2019, Syracuse, Sicily, Italy, 24-26 April 2019, volume 714 of *Lecture Notes in Electrical Engineering*, pages 165–183. Springer. Shikib Mehri, Mihail Eric, and Dilek Hakkani-Tür. 2020a. Dialoglue: A natural language understanding benchmark for task-oriented dialogue. *CoRR*, abs/2009.13570. Shikib Mehri, Mihail Eric, and Dilek Hakkani-Tür. 2020b. Dialoglue: A natural language understanding benchmark for task-oriented dialogue. *CoRR*, abs/2009.13570. Tomás Nekvinda and Ondrej Dusek. 2021. Shades of bleu, flavours of success: The case of multiwoz. CoRR, abs/2106.05555. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In *6th International Conference on* Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2021. SOLOIST: building task bots at scale with transfer learning and machine teaching. *Trans. Assoc. Comput. Linguistics*, 9:907–824. Alec Radford and Karthik Narasimhan. 2018. Improving language understanding by generative pretraining. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. 
Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In *Advances in Neural Information Processing Systems 30: Annual Conference* on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 506– 516. Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In *Proceedings of the 35th International Conference* on Machine Learning, volume 80 of *Proceedings* of Machine Learning Research, pages 4596–4604. PMLR. Asa Cooper Stickland and Iain Murray. 2019. BERT and pals: Projected attention layers for efficient adaptation in multi-task learning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 5986–5995. PMLR. Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2022. Multi-task pre-training for plug-and-play task-oriented dialogue system. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 4661–4676, Dublin, Ireland. Association for Computational Linguistics. Haipeng Sun, Junwei Bao, Youzheng Wu, and Xiaodong He. 2022a. BORT: back and denoising reconstruction for end-to-end task-oriented dialog. In Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 2156–2170. Association for Computational Linguistics. Haipeng Sun, Junwei Bao, Youzheng Wu, and Xiaodong He. 2022b. Mars: Semantic-aware contrastive learning for end-to-end task-oriented dialog. CoRR, abs/2210.08917. Richard S. Sutton, David A. McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In *Advances in Neural Information Processing* Systems 12, [NIPS Conference, Denver, Colorado, USA, November 29 - December 4, 1999], pages 1057– 1063. The MIT Press. Xin Tian, Liankai Huang, Yingzhan Lin, Siqi Bao, Huang He, Yunyi Yang, Hua Wu, Fan Wang, and Shuqi Sun. 2021. Amendable generation for dialogue state tracking. In *Proceedings of the 3rd Workshop on* Natural Language Processing for Conversational AI, pages 80–92, Online. Association for Computational Linguistics. Jianhong Wang, Yuan Zhang, Tae-Kyun Kim, and Yunjie Gu. 2021. Modelling hierarchical structure between dialogue policy and natural language generator with option framework for task-oriented dialogue system. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. Chien-Sheng Wu, Steven C.H. Hoi, Richard Socher, and Caiming Xiong. 2020. TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 917–929, Online. Association for Computational Linguistics. Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. 
Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 808–819, Florence, Italy. Association for Computational Linguistics. Qingyang Wu, Yichi Zhang, Yu Li, and Zhou Yu. 2021. Alternating recurrent dialog model with large-scale pre-trained language models. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 1292– 1301. Association for Computational Linguistics. Yunyi Yang, Yunhao Li, and Xiaojun Quan. 2021. UBAR: towards fully end-to-end task-oriented dialog system with GPT-2. In *Thirty-Fifth AAAI Conference* on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14230–14238. AAAI Press. Chenchen Ye, Lizi Liao, Fuli Feng, Wei Ji, and TatSeng Chua. 2022. Structured and natural responses co-generation for conversational search. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, page 155–164, New York, NY, USA. Association for Computing Machinery. Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara, Raghav Gupta, Jianguo Zhang, and Jindong Chen. 2020. MultiWOZ 2.2 : A dialogue dataset with additional annotation corrections and state tracking baselines. In *Proceedings of the 2nd Workshop on* Natural Language Processing for Conversational AI, pages 109–117, Online. Association for Computational Linguistics. Jian-Guo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Yao Wang, Philip S. Yu, Richard Socher, and Caiming Xiong. 2020a. Find or classify? dual strategy for slot-value predictions on multi-domain dialog state tracking. In Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics, *SEM@COLING 2020, Barcelona, Spain (Online), December 12-13, 2020, pages 154–167. Association for Computational Linguistics. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020b. DIALOGPT : Largescale generative pre-training for conversational response generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 270–278. Association for Computational Linguistics. Jeffrey Zhao, Raghav Gupta, Yuan Cao, Dian Yu, Mingqiu Wang, Harrison Lee, Abhinav Rastogi, Izhak Shafran, and Yonghui Wu. 2022. Descriptiondriven task-oriented dialog modeling. *CoRR*, abs/2201.08904. Li Zhou and Kevin Small. 2019. Multi-domain dialogue state tracking as dynamic knowledge graph enhanced question answering. *CoRR*, abs/1911.06192. ## Appendices A Units Of Adapters | Dim | 1/2 | 1/4 | 1/8 | |------------------|-------|-------|-------| | JGA | 51.59 | 51.11 | 50.43 | | Slot f1 | 91.02 | 91.07 | 90.40 | | Trainable Params | 7.9M | 7.2M | 6.6M | Table 8: Adapter units experiment results. We test different bottleneck dimension of adapter module on the DST task. In this experiment, we evaluate the performance with several bottleneck dimensions, h = 256, 128, 64 with TOATOD*small*, which is 1/2, 1/4, 1/8 size of T5*small*'s embedding dimension of 512. 
The main focus of the experiment is to investigate the effect of the bottleneck dimension h of the adapter module on the performance of the model. To evaluate the performance of the model, we use the Joint Goal Accuracy for DST task. We keep the other hyperparameters constant across all the experiments, including learning rate of 1e-4 and evaluate the performance on the test set of MultiWOZ 2.1. We report the result on the Table 8. The result implies that there is a trade-off between the bottleneck dimension and the performance of the model. As the bottleneck dimension increases, the performance of the model also improves. The best performance is achieved with a bottleneck dimension of 256, where the JGA is 51.59. It is important to carefully choose the bottleneck dimension when using the adapter module in the task-oriented dialogue system. As described in the Table 1, the trainable parameters of our model with bottleneck dimension of 256 is still significantly smaller than the PLM's parameters, so we choose the size of 1/2. ## B W/O Reinforcement Learning Of Toatodsmall | Task | DST | NLG | | |----------------------|-------|--------------------------------------|--------------| | Metrics | JGA | Slot F1 Inform Success BLEU Combined | | | SL (2.2) 61.29 93.46 | 78.80 | 69.50 | 18.46 92.61 | | RL (2.2) 61.92 93.65 | 85.80 | 74.00 | 18.00 97.90 | | SL (2.1) 52.58 91.31 | 84.30 | 74.40 | 18.82 98.17 | | RL (2.1) 53.01 91.61 | 92.10 | 80.50 | 18.28 104.58 | ## C Implementation Details We used 8 A100 (80G) GPUs, but they were fully used during only reinforcement learning. During supervised learning, we used 4 GPUs. While reinforcement learning of the DST task-optimized adapters, we set learning rate as 1e-5, and batch size as 32 (utterance-level). On the contrary, while reinforcement training of the NLG task-optimized adapters, we set learning rate as 1e-6 and batch size as 4 (session-level). During the entire training process, we set random seed as 42. And for the NLU task, we used 1 RTX A5000 (24GB) GPU and trained models without reinforcement learning. We set batch size as 64 and single run with random seeds. When training T5*small* and T5*base* for the Banking 77 and CLINC150, we used learning rates of 0.001 and 0.15. And we used learning rates of 0.01 and 0.1 for the T5*small* and T5*base* with the HWU64. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. Did you discuss any potential risks of your work? Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 ✓ B1. Did you cite the creators of artifacts you used? 2, 3 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 4 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 2 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. 
All of the datasets used in our study are from previous studies and do not have privacy issues. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4 ## C ✓ **Did You Run Computational Experiments?** 4.2 & Appendix C ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Table 1 & Appendix C The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4.2 & 6 & Appendix C ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix C ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
chakrabarty-etal-2023-spy
I Spy a Metaphor: Large Language Models and Diffusion Models Co-Create Visual Metaphors
https://aclanthology.org/2023.findings-acl.465
Visual metaphors are powerful rhetorical devices used to persuade or communicate creative ideas through images. Similar to linguistic metaphors, they convey meaning implicitly through symbolism and juxtaposition of the symbols. We propose a new task of generating visual metaphors from linguistic metaphors. This is a challenging task for diffusion-based text-to-image models, such as DALL·E 2, since it requires the ability to model implicit meaning and compositionality. We propose to solve the task through the collaboration between Large Language Models (LLMs) and Diffusion Models: Instruct GPT-3 (davinci-002) with Chain-of-Thought prompting generates text that represents a visual elaboration of the linguistic metaphor containing the implicit meaning and relevant objects, which is then used as input to the diffusion-based text-to-image models. Using a human-AI collaboration framework, where humans interact both with the LLM and the top-performing diffusion model, we create a high-quality dataset containing 6,476 visual metaphors for 1,540 linguistic metaphors and their associated visual elaborations. Evaluation by professional illustrators shows the promise of LLM-Diffusion Model collaboration for this task. To evaluate the utility of our Human-AI collaboration framework and the quality of our dataset, we perform both an intrinsic human-based evaluation and an extrinsic evaluation using visual entailment as a downstream task.
# I Spy A Metaphor: Large Language Models And Diffusion Models Co-Create Visual Metaphors Tuhin Chakrabarty1∗ , Arkadiy Saakyan1∗ , Olivia Winn1∗ , Artemis Panagopoulou2 Yue Yang2, Marianna Apidianaki2**, Smaranda Muresan**1 1Columbia University 2University of Pennsylvania {tuhin.chakr,a.saakyan,olivia}@cs.columbia.edu ## Abstract Visual metaphors are powerful rhetorical devices used to persuade or communicate creative ideas through images.Similar to linguistic metaphors, they convey meaning implicitly through symbolism and juxtaposition of the symbols. We propose a new task of generating visual metaphors from linguistic metaphors. This is a challenging task for diffusion-based text-to-image models, such as DALL·E 2, since it requires the ability to model implicit meaning and compositionality. We propose to solve the task through the collaboration between Large Language Models (LLMs) and Diffusion Models: Instruct GPT-3 (davinci-002) with Chainof-Thought prompting generates text that represents a visual elaboration of the linguistic metaphor containing the implicit meaning and relevant objects, which is then used as input to the diffusion-based text-to-image models.Using a human-AI collaboration framework, where humans interact both with the LLM and the top-performing diffusion model, we create a high-quality dataset containing 6,476 visual metaphors for 1,540 linguistic metaphors and their associated visual elaborations. Evaluation by professional illustrators shows the promise of LLM-Diffusion Model collaboration for this task.To evaluate the utility of our Human-AI collaboration framework and the quality of our dataset, we perform both an intrinsic humanbased evaluation and an extrinsic evaluation using visual entailment as a downstream task. ## 1 Introduction Visual metaphors are rhetorical devices that serve to communicate a message through an image. They are often used as a means of persuasion in advertising (Phillips, 2003; Phillips and McQuarrie, 2004), as their use leads to more favorable attitude toward the ad (Mcquarrie and Mick, 1999). Similarly to linguistic metaphors (Lakoff, 1993), a visual metaphor takes a concept from a source domain ∗*Equal contribution. ![0_image_0.png](0_image_0.png) Figure 1: Visual metaphors generated by DALL·E 2 for the linguistic metaphor "My bedroom is a pig sty".We can take the original verbal metaphor as the input (left) or use GPT-3 with Chain of Thought prompting (right). and applies it to a target domain. In the case of visual metaphors, these domains need to be in some way visually grounded. Large diffusion-based text-to-image models, such as DALL·E 2 (Ramesh et al., 2022a), PARTI (Yu et al., 2022), Stable Diffusion (Rombach et al., 2022), or IMAGEN (Saharia et al., 2022), can generate visually compelling images conditioned on input texts. However, in order to generate visual metaphors from linguistic metaphors, models are required to first identify the implicit meaning as well as the objects, properties, and relations involved, and then find a way to combine them in the generated image. For instance, given the linguistic metaphor "*My bedroom is a pig sty*", as shown in Figure 1, a model would ideally need to extract the implicit meaning of the bedroom being "*messy*", and then compose the concepts "*Bedroom*", "*Messy*" & "Pig". 
However, as shown in the left two images, when presented just with the linguistic metaphor, DALL·E 2 generates images of a bedroom where pink is the prevalent color (perhaps due to pig's skin color), sometimes with the presence of a pig as a toy in a corner, and with little indication of a mess in the room. The visual metaphor generation task is greatly impacted by two common challenges in text-toimage models, namely *under-specification* and attribute-object binding (Hutchinson et al., 2022; Ramesh et al., 2022a; Saharia et al., 2022). Underspecification refers to the fact that finite and reasonable-length linguistic descriptions of realworld scenes by necessity omit a great deal of visual information (Hutchinson et al., 2022). Attribute Binding is the task of binding the attributes to the correct objects, and is a fundamental problem for a more complex and reliable compositional generalization. Our proposed contributions address these challenges: - **A novel approach for generating visual** metaphors through the collaboration of large language models (LLMs) and diffusion-based text-to-image models. Our LLM - Instruct GPT-3 (davinci-002) (Ouyang et al., 2022) with Chain-of-Thought (CoT) prompting (Wei et al., 2022) - generates a **visual elaboration** of the linguistic metaphors. To design our CoT prompting elements, we take inspiration from prior work on VisualBlends (Chilton et al., 2019) that put an emphasis on the objects to be represented in the visual metaphor. In addition, we also consider the implicit meaning to finally generate a visual elaboration that contains the essential objects and the implicit meaning of the linguistic metaphor. For our linguistic metaphor "My bedroom is a pig sty", the visual elaboration generated by Instruct GPT-3 (davinci-002) with CoT prompting is "A bedroom with clothes and garbage everywhere with a pig in the center rooting around." (See Table 1). The generated visual elaboration becomes the input to diffusion-based text-to-image models such as DALL·E 2 or Stable Diffusion to generate visual metaphors (see Figure 1 right). - **A high-quality visual metaphor dataset built** through Human-AI collaboration. We propose a collaboration between humans, LLM, and the top-performing diffusion-based model (DALL·E 2) to create a high-quality dataset of 6,476 visually metaphoric images. These represent **1,540** distinct linguistic metaphors and their associated visual elaborations generated through CoT prompting. We call our dataset **HAIVMet** (Human-**AI V**isual Metaphor) (Section 3). ## - **A Thorough Evaluation Of Llm-Diffusion** Model collaboration and Human-AI collaboration. In order to evaluate the power of LLMDiffusion Model collaboration, we recruit professional illustrators and designers and ask them to compare the output of DALL·E 2 and Stable Diffusion v2.1 when the input corresponds to the linguistic metaphor alone, or to the LLM-produced visual elaboration. Our evaluation shows the power of the LLM-Diffusion Model collaboration and the superiority of DALL·E 2 compared to Stable Diffusion v2.1 (Section 4.1). To evaluate the utility of Human-AI collaboration and the quality of our dataset, we perform an intrinsic evaluation using the same expert evaluators and an extrinsic evaluation using a downstream task (Section 4.2). For the latter, we choose the Visual Entailment task: given an image and a hypothesis sentence, the model is asked to predict whether the sentence is implied by the image. 
We show that fine-tuning a state-of-the-art vision-language model on our dataset leads to ∼23-points improvement in accuracy compared to when it is only finetuned on SNLI-VE (Xie et al., 2019), a large-scale visual entailment dataset. We release our dataset, code, prompts, and illustrator annotations at https://github.com/ tuhinjubcse/VisualMetaphors. ## 2 Related Work Generative Art. There has recently been a huge surge of AI-generated artwork and imagery with the new diffusion-based models being substantially better than previous Variational Autoencoders (VAE) and Generative Adversarial Networks (GANs). Some of the most popular current models are DALL·E 2 (Ramesh et al., 2022b), MidJourney,1 Craiyon,2and Stable Diffusion (Rombach et al., 2021). These image generation models are able to handle a wide variety of prompts, though recent work has shown that there are still 1https://www.midjourney.com/ 2https://www.craiyon.com/ aspects of accurate depiction that these models fail to capture (Leivada et al., 2022). Recently, Kleinlein et al. (2022) showed that diffusion models can handle language that is content-based and aimed at a neutral description of the scene, and fail to capture the underlying abstraction of figurative language. Recent work has also explored cutting-edge systems showcasing the power of large language models and text-to-image models in aiding creative processes across various applications. Wang et al. (2023) present PopBlends, a system that leverages traditional knowledge extraction methods and large language models to automatically generate conceptual blends for pop culture references, significantly increasing the number of blend suggestions while reducing mental demand for users. Similarly, Liu et al. (2023) introduce Generative Disco, an AI system that generates music visualizations using large language models and text-to-image models, offering an enjoyable, expressive, and easy-to-use tool for professionals in the creative field. Wang et al. (2023) present ReelFramer, a system where GPT4 and DALLE2 collaborate in order to assist journalists in transforming written news stories into engaging short video narratives, by generating scripts, character boards, and storyboards. The proposed user study shows ReelFramer's effectiveness in easing the process and making framing exploration rewarding for journalism students. Visual Metaphor. Visual metaphors are often abstract and can be challenging to interpret. Petridis and Chilton (2019) test several theories about how people interpret visual metaphors. They find that visual metaphors are interpreted correctly, without explanatory text, with 41.3% accuracy. Indurkhya and Ojha (2013) highlight the important role of perceptual similarity between the source and the target image (in terms of color, shape, etc) in metaphor comprehension and creative interpretation. Achlioptas et al. (2021) propose the ArtEmis dataset which contains emotion attribution and explanation annotations for 80K artworks from WikiArt, including several visual metaphors and similes. Their dataset serves to train captioning systems to express emotions and associated explanations derived from visual stimuli, instead of generating images conditioned on text. Zhang et al. (2021) collect a multimodal metaphor dataset from Twitter posts and advertisement posters that contain a metaphor in the caption, in the image, or both. However, they do not generate any new data and, as of yet, the data has not been publicly released. Liu et al. 
(2022b) release Opal, a system that guides users in generating diverse and relevant text-to-image illustrations for news articles by utilizing structured exploration. Unlike research on generating textual metaphors (Yu and Wan, 2019; Chakrabarty et al., 2020, 2021; Veale, 2016; Abe et al., 2006; Terai and Nakagawa, 2010), visual metaphor generation has received less attention. Akula et al. (2023) proposed MetaCLUE, a set of vision tasks that serve to evaluate the metaphor understanding and generation capabilities of stateof-the-art vision and language models. Their results show that most tested state-of-the-art models struggle to produce satisfactory results, in both a zero-shot and a finetuning setting. Hwang and Shwartz (2023) focus on building a dataset for captioning and interpreting memes that are a widely popular tool for web users to express their thoughts using visual metaphors. More recently, Yosef et al. (2023) present the Image Recognition of Figurative Language dataset, designed to evaluate vision and language models' understanding of figurative language, including metaphors, similes, and idioms. The dataset features multimodal examples and introduces two novel benchmark tasks, aimed at promoting the development of models that can effectively comprehend figurative language.Current baseline models have shown significantly poorer performance compared to human understanding, highlighting the challenges this domain poses for machine learning. ## 3 Human-Ai Collaboration For Visual Metaphor Dataset Creation We propose a three-step Human-AI collaboration approach for generating visual metaphors from linguistic metaphors. This process involves 1) selecting linguistic metaphors that are visually grounded; 2) using large language models to generate visual elaborations of linguistic metaphors that capture relevant objects and implicit meaning, with expert edits when required; 3) using diffusion-based models to generate visual metaphors from visual elaborations, with filtering of low quality samples by experts. A detailed pipeline diagram for our dataset creation is shown in Figure 2. We source our linguistic metaphors from six resources, removing any duplicates: **FLUTE** (Chakrabarty et al., 2022b), **Advertisements** (Hussain et al., 2017), **CoPoet** (Chakrabarty et al., 2022a), **FigQA** (Liu et al., 2022a), Figure-of-Speech, 3 **CrossLing Metaphors** (Tsvetkov et al., 2014) and **Metaphor Paraphrase** (Bizzoni and Lappin, 2018). Your task will be to elaborate a metaphor with rich visual details along with the provided objects to be included and implicit meaning. Make sure to include the implicit meaning and the objects to be *included in the explanation* 1. **Metaphor**: My lawyer is a shark. Objects to be included: Lawyer, Shark Implicit Meaning: fierce Visual elaboration: A shark in a suit with fierce eyes & a suitcase & a mouth open with pointy teeth. 2. **Metaphor**: I've reached my boiling point. Objects to be included: Person, Boiling Pot Implicit Meaning: anger Visual elaboration: A boiling pot of water with a person's head popping out of the top, steam coming out of their ears, and an angry expression on their face. 3. **Metaphor**: Joe: that's because you're like a snail surfing on molasses. Objects to be included: Person like a snail, Snail on molasses Implicit Meaning: slow Visual elaboration: A person with a snail shell on their back slowly sliding down a hill of molasses. 4. 
**Metaphor**: Absence is the dark room in which lovers develop negatives Objects to be included: Darkroom, Negative Film Strip with a red heart, Person Implicit Meaning: ominous and lonely Visual elaboration: An ominous dark room with a film strip negatives hanging and a red heart in the center with a person in the corner looking sad and lonely 5. **Metaphor**: My heart is a rose thorn Objects to be included: Heart, Thorn Implicit Meaning: prickly Visual elaboration: A heart with a prickly thorn coming out of the center and barbs going outwards. 6. **Metaphor**: My bedroom is a pig sty Objects to be included: Messy bedroom, Pig Implicit Meaning: dirty Visual elaboration: A bedroom with clothes & garbage everywhere with a pig in the center rooting around. Table 1: Chain-of-Thought (CoT) prompt to elicit a visual elaboration for a given metaphor. We provide the first five examples in a few-shot learning setting and the model jointly generates Objects to be Included, Implicit Meaning, and Visual elaboration (highlighted in brown) step-by-step. 1) Visually Grounded Linguistic Metaphors: Given that not all linguistic metaphors can be rendered as visual metaphors, we manually select those that are visually grounded. Concrete subjects can clearly be visually grounded, but some abstract subjects can be visually grounded as well through their usual representations in media. For ![3_image_0.png](3_image_0.png) example, "*love*" can be represented as two people holding hands with hearts above them, "*confusion*" as question marks, or "*idea*" as a lightbulb over someone's head. Linguistic metaphors that describe non-visual phenomena (e.g., a smell, a sound) are removed unless the act of experiencing the sense is the subject of the sentence, which can be visualized with, e.g., a facial expression. We consider emotional phenomena as visual since often emotions and feelings are expressed through facial expression and/or body posture which can be visualized. 2) Visual Elaboration Generation with Chainof-Thought Prompting: Existing text-to-image generation models do not perform well when their input contains linguistic metaphors, since they lack the ability to model implicit meaning and compositionality. Recently, Wei et al. (2022) proposed a prompting method for improving the reasoning abilities of language models. This method, called Chain-of-Thought (CoT) prompting, enables models to decompose multi-step problems into intermediate steps. We take advantage of CoT prompting by using the relevant objects and implicit meaning of the metaphors as our intermediate steps, to then elicit detailed textual visualizations of linguistic metaphors using Instruct GPT-3 (davinci-002). We refer to this detailed textual visualization as a **visual elaboration**. We hypothesize that these visual elaborations obtained from CoT prompting will help text-to-image models create better visual metaphors, as the objects and implicit meaning will be explicitly contained in the input. ![4_image_0.png](4_image_0.png) Table 1 shows the instruction and CoT prompt used to elicit a visual elaboration for a given linguistic metaphor. The first five examples are given as few-shot examples and the model (Instruct GPT3 (davinci-002)) then jointly generates the objects to be Included, implicit meaning, and visual elaboration (highlighted in brown) step-by-step. As our prompts follow a certain structure for step-bystep reasoning, a zero-shot approach would not work well. 
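For concreteness, the elaboration step can be scripted along the following lines. This is a minimal sketch, assuming the pre-1.0 `openai` Python client and the `text-davinci-002` model; the few-shot block is abridged from Table 1, the decoding settings are the ones reported in Appendix A.1, and the function name is ours rather than the authors' released code.

```python
# Minimal sketch of the CoT elaboration call (illustrative, not the authors' code).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

FEW_SHOT = """Your task will be to elaborate a metaphor with rich visual details along with the
provided objects to be included and implicit meaning. Make sure to include the implicit
meaning and the objects to be included in the explanation.

1. Metaphor: My lawyer is a shark.
Objects to be included: Lawyer, Shark
Implicit Meaning: fierce
Visual elaboration: A shark in a suit with fierce eyes & a suitcase & a mouth open with pointy teeth.
"""  # ... the remaining four Table 1 examples are appended here in the same format

def elaborate(metaphor: str, example_id: int = 6) -> str:
    """Ask the model to produce objects, implicit meaning, and a visual
    elaboration (the chain of thought) for one linguistic metaphor."""
    prompt = f"{FEW_SHOT}\n{example_id}. Metaphor: {metaphor}\nObjects to be included:"
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        temperature=0.7,        # decoding hyperparameters as listed in Appendix A.1
        max_tokens=256,
        top_p=1.0,
        best_of=1,
        frequency_penalty=0.5,
        presence_penalty=0.5,
    )
    return response["choices"][0]["text"]

# The completion contains the three CoT fields; the final "Visual elaboration: ..."
# line is what is passed on to the diffusion model (after any expert edits).
print(elaborate("The news of the accident was a dagger in her heart."))
```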
We found that using five few-shot examples was sufficient to generate elaborations of good quality. We selected five representative examples of visualizable metaphors for the prompt. We used the same examples for generation of every visual elaboration. While this approach leads to good-quality outputs, not all generated visual elaborations are perfect. We recruit three expert annotators with multiple years of experience in figurative language research and ask them to validate the generated visual elaborations and to slightly edit them if needed, in order to make sure they accurately represent the implicit meaning and the objects involved. Our pipeline is illustrated in Figure 2. As can be seen in Figure 3, for the given linguistic metaphor "*The news of the accident was a dagger in her* heart", the first visual elaboration is almost correct but it misses the crucial information about the metaphoric source, i.e "*the news of the accident*". An expert performs a minor edit by adding the phrase "*woman receiving a phone call*" in order to convey the metaphoric source which leads to a perfect visual metaphor. Experts performed minor edits on 29% of the generated visual elaborations. ## 3) Visual Metaphor Generation Using Diffusionbased Models And Human Quality Check: For this part of the data curation process, we first prompt DALL·E 2 to generate multiple images4 for a single visual elaboration (cf. Figure 2). Postgeneration, each set of generated images is examined jointly by three experts to determine whether they accurately and fully represent the meaning of the original linguistic metaphor. The experts need to validate whether the image contains the relevant objects and whether the objects are positioned correctly or have the appropriate indicators of movement or action, also referred to as **Attribute Binding** (Ramesh et al., 2022a; Saharia et al., 2022). For example, for the phrase "*Her eyes were like* peonies", the image would need to depict both a face and peonies and the peonies would need to be in the place of the eyes rather than around the head (which was the case in some images). Images that do not meet the above criterion were discarded. The dataset curated in this way contains **1,540** unique linguistic metaphors (and their associated visual elaborations) and **6,476** unique images. Each linguistic metaphor has **four** associated images, on average. We call our data **HAIVMet** (Human-AI Visual Metaphor). ## 4 Evaluation Our goal is to assess the impact of the LLMDiffusion Model collaboration (Section 4.1), and of the Human-AI collaboration on building a highquality dataset. ## 4.1 Llm-Diffusion Model Collaboration Models. Diffusion models are trained to recover the original version of an image after random noise has been applied to it (Ramesh et al., 2022a). Both DALL·E 2 and Stable Diffusion are diffusion-based text-to-image models. Stable Diffusion is open source; DALL·E 2 is not. Note that in this evaluation, there is no human intervention (no editing of the output of Instruct GPT-3 with CoT prompting, nor filtering of images produced by diffusion-based 4DALL·E 2 automatically generates four images per prompt. models). We use the following LLM-Diffusion Model collaboration setups, where the input to the diffusion models is the visual elaboration of the linguistic metaphor generated using Instruct GPT-3 (davinci-002) with CoT prompting: - LLM-DALL·E 2: DALL·E 2 (Ramesh et al., 2022a) with the LLM-generated visual elaboration as input. 
- **LLM-SD**: The Stable Diffusion (Rombach et al., 2022) v2.1 model, with the same input as LLMDALL·E 2. - **LLM-SD***Structured* We use the diffusion method of Feng et al. (2022) which combines the structured representations of prompts (for example, their constituency tree) with the diffusion guidance process, using the same input as LLMDALL·E 2. We also use DALL·E 2 and Stable Diffusion (SD) with the linguistic metaphor given directly as input (no collaboration with the LLM). This comparison allows us to assess the benefit that can be drawn from LLM-Diffusion Model collaboration. Human Evaluation Setup. Among the popular automatic evaluation metrics, both Fréchet Inception Distance (FID) (Heusel et al., 2017) and CLIP (Radford et al., 2021) scores are not tailored towards metaphorical images, and are not reliable in assessing whether the generated images capture the essence of visual metaphors (Akula et al., 2023). We also chose not to rely on non-expert crowdworkers as even with training they have been found to be unreliable for open-ended tasks (Karpinska et al., 2021). Following the recommendation from Karpinska et al. (2021), we recruit three professional artists with experience in concept illustration and visual arts through the Upwork5 platform. We ask them to evaluate the visual metaphors that are generated by the five approaches described above for a subset of 100 randomly selected linguistic metaphors from our dataset. For each metaphor, we ask to rank the five generated images on the basis of how well they represent the metaphor. Additionally, we collect targeted feedback by asking the raters to provide natural language instructions for improving the images. Five text fields are shown under each image, and the annotators are invited to make up to five recommendations. In the occasional case where the image is "Perfect" 5https://www.upwork.com or absolutely not worthy of transformation ("Lost Cause"), the annotators do not need to provide any feedback for improvement. The suggested types of instructions are the following: 1) Add an object; 2) Remove an object; 3) Move an object; 4) Replace an object with another object; 5) Change an object's property (e.g., color, size). The annotators are encouraged to supply whatever type of change they believe is required to improve the visual metaphor; the only stipulation to the instructions is that each one must denote a single action/change. We identify the average rank assigned to a model across metaphors and annotators. We also report the percentage of "Lost Cause" cases in order to identify systems that generate the least amount of bad images. Additionally, we compare the models on the basis of the average number of instructions that have been proposed for improving their produced images. The number of suggested changes acts as a proxy for how close the image is to the perfect representation of the metaphor. "Perfect" images are considered to have 0 edits, and images that are a "Lost Cause" are considered to have 5 edits to ensure fairness in this computation. | Model | Avg | % Lost | Avg # of | |------------------|-------|--------------|------------| | Rank | Cause | Instructions | | | SD | 3.82 | 31.6 | 2.25 | | LLM-SD | 3.40 | 23.3 | 1.83 | | LMM-SDStructured | 3.05 | 18.3 | 1.57 | | DALL·E 2 | 2.76 | 16.6 | 1.44 | | LMM-DALL·E 2 | 1.96 | 6.0 | 0.76 | ## 4.1.1 Results And Analysis Table 2 shows that without collaboration with a LLM (i.e., just with the linguistic metaphor as input), DALL·E 2 performs better than SD (line 4 vs. 
line 1). The main take away is that LLMDiffusion Model collaboration outperforms simple Diffusion Models (LLM-DALL·E 2 vs. DALL·E 2, LLM-SD and LLM-SD*Structured* vs. SD). That is, using Instruct-GPT3 with CoT prompting to produce visual elaborations as input to diffusion models consistently improves the performance over providing the diffusion models directly with linguistic metaphors. Overall, LLM-DALL·E 2 emerges as the best system. Only 6% are "Lost Cause" images, affirming our choice for using ![6_image_0.png](6_image_0.png) ![6_image_2.png](6_image_2.png) ![6_image_1.png](6_image_1.png) LLM-DALL-E 2 to create HAIVMet . Rank 1 (best) was assigned to LLM-DALL-E 2 in 44.6% of cases, followed by 24.0% for DALL·E 2, 14.0% for LLM- SD Structured , 10.0% for LMM-SD, and 7.3% for SD. Using the same prompts as for LMM-DALL-E 2, we still observe an improvement in LLM-SD over the original SD output. Finally, as expected, LLM-SD Structured improves over LMM-SD. In Figure 4, we show examples of visual metaphors generated using the linguistic metaphors or their visual elaborations as CoT prompts. We observe that the latter, where CoT prompting is involved, are of higher quality. For instance, a good visual metaphor for the metaphorical expression "Books are the mirror to the soul" would require books, a mirror, and superimposing the mirror with some approximate depiction of a soul (usually illustrated as a person). However, the images that DALL·E 2 and Stable Diffusion generate (columns 3 and 5, respectively), just contain books. This problem is fixed with CoT prompting, as seen in columns 2, 4, and 6. The observations are similar for the metaphor " I feel like a lily in February ", where the implicit meaning of being out of place is depicted by lilies blooming in February over a snowy (instead of sunny) landscape. How do expert illustrators perceive modelgenerated visual metaphors? One of the goals of our evaluation, besides obtaining a subjective ranking of the tested models, was to analyze some of the flaws in the output. As stated above, for every image that was not considered "Perfect" or "Lost Cause", we collected suggestions from experts about changes that would improve the image as a visual metaphor. Examples are given in Figure 5. This helps us understand where models might still be lacking, and the extent to which future interaction with illustrators might shape modelgenerated outputs to be acceptable. We find that issues in the output may be due to a model not being able to accurately depict a prompt, due to under-specification in terms of the objects to be represented or to the implicit property not being properly depicted. For instance, the CoT prompt for the metaphor "*It was a moonless night, the* air was still and the crickets were like living shadows" accurately describes it as "An illustration of a moonless night sky with still air and crickets crawling around as living shadows.". However, the model fails to understand the word moonless and adds a moon to the picture. Additionally, while it adds the crawling crickets to the picture, there are no shadows. This affects the way we perceive the metaphor since its implicit meaning is "dark and creepy". However, the rest of the image is high quality in terms of depiction. 
On the contrary, for the metaphor "He was like a butterfly in autumn, waiting to be destroyed by the first frost", the CoT prompt "*An illustration of a butterfly perched on* an autumn leaf with the first frost starting to form around it" misses out on the source 'He' (ideally a fragile man) but the model depicts it perfectly. Table 2 shows that nearly all models have room for improvement. Future work can use these suggestions in the form of natural language instructions to edit model-generated images, as demonstrated in recent work by Brooks et al. (2022). ## 4.2 Human-Ai Collaboration Evaluation Intrinsic Evaluation. To better understand if Human-AI collaboration leads to better quality visual metaphors, we conduct another round of evaluation with the same group of professional artists. Our experimental setup is the same as in our previous evaluation, except that instead of five images, we provide them with two visual metaphors for the same input: one from the **HAIVMet** corpus and the other from LLM-DALL·E 2 used in the previous round of evaluation (with their order shuffled). We then ask them to objectively provide a ranking between the two systems or tie them if they are both ![7_image_0.png](7_image_0.png) of the same quality. They are also asked to provide instructions for improving them (unless they are Perfect or Lost Cause). We get the final verdict using majority voting. We obtain an inter-annotator agreement of 0.57 based on Fleiss's kappa (Fleiss, 1971) ("moderate agreement"). Our results in Table 3 show that while 37% of the images are of similar quality, from the remaining images professionals preferred instances from **HAIVMet** 45% of the time compared to LLM-DALL·E 2 18% of time. Finally, the **HAIVMet** data has an almost negligible number of Lost Causes, providing further evidence of its high quality. ![7_image_1.png](7_image_1.png) Extrinsic Evaluation: Visual Entailment Task. Apart from being a rich source of visual metaphors, our dataset can also be useful in downstream applications. We showcase this by using it in a Visual Entailment (VE) task, where a vision-language model needs to predict whether a hypothesis is entailed by an image (cf. Figure 6). We use OFA (Wang et al., 2022), a state-of-the-art VE model finetuned on SNLI-VE (Xie et al., 2019). SNLI-VE only contains real-world images, but OFA is pretrained on ∼20M image-text pairs some of which are synthetic. We extract 958 metaphors from our dataset that are associated with **literal** natural language entailment pairs from FLUTE (Chakrabarty et al., 2022b), CrossLing Metaphors (Tsvetkov ![8_image_1.png](8_image_1.png) Table 4: Visual Entailment Results. OFA (Wang et al., 2022) fined-tuned on SNLI-VE (Xie et al., 2019) vs. SNLI-VE+**HAIVMet**. Bold indicates best performance. et al., 2014) and Metaphor Paraphrase (Bizzoni and Lappin, 2018) (see Appendix C for details on the data construction procedure). We split the data into train, validation and test sets, which contain 708, 100 and 150 metaphors (3686/506/831 imagetext pairs), respectively. We fine-tune OFA-base (182M parameters) for 10 epochs with learning rate 6e-5 and polynomial decay (weight=0.01), and batch size 8 on an NVIDIA RTX A6000 GPU for 8 hours. We select the model that has best performance on the development set. We show that accuracy on the test set improves by ∼23 points compared to OFA's performance when it is only finetuned on SNLI-VE. 
This result is indicative of the quality and usefulness of our dataset which can help vision-language models capture metaphoric meaning ## 5 Compositionality In Visual Metaphors In prior work, Gutiérrez et al. (2016) showed that metaphorical meaning is not only a property of individual words but arises through cross-domain composition. Gal (2019) further argues that a metaphor is a visual material rather than conceptual. It is a mechanism of syntactic structure, forms, and material composition, which goes along with the perception of structures and compositions. Many images from our HAIVMet data showcase the compositional nature of visual metaphors, as can be seen in Table 5. For example, to visualize the metaphor "*Love is a crocodile in the river of desire*" the model needs to show both a human and a crocodile while depicting a sense of desire by embodying love as a concept. Similarly, for "He froze with fear when he saw it", the metaphor needs to not only depict fear but also combine it with the state of being frozen. We can successfully achieve these difficult compositional visualizations through efficient human-AI collaboration. ## 6 Conclusion We show that using Chain-of-Thought prompting for generating visual elaborations of linguistic metaphors leads to significant improvements ![8_image_0.png](8_image_0.png) in the quality of visual metaphors generated by diffusion-based text-to-image models. These models excel at depicting literal objects and actions, but cannot make the leap from figurative phrases to visual depiction without a detailed explanation of the implicit meaning. Though there are still particular aspects of visual composition and figurative imagery that current models fail to capture, the breadth of information collected in this dataset not only allows us to understand the current limitations of image generation but also provides the data necessary to improve visual metaphor generation in the future. We plan to further examine the effect of prompt phrasing on the quality of the generated visual metaphors,and how that effect differs across different models. ## Limitations While the results of Human-AI collaboration for visual metaphor generation are very promising, such a procedure might be time-consuming but at the same time necessary for maintaining quality. We want to acknowledge that both our LLM and bestforming Diffusion models are released through a paid API and are not open-sourced. While our best-performing system uses Chain Of Thought Prompting, there are several other prompting or task decomposition techniques that we did not perform an extensive comparison with.Last but not least, there is still enough room for potential improvement in generating visual metaphors which can be achieved by designing better prompts or by improving the compositional generalization of diffusion models. We also recognize the inherent limitation of an English-only basis for our visual metaphors and hope in the future to expand to other languages for source material. ## Ethics Statement The use of text-to-image generation models is subject to concerns about intellectual property and copyrights of the images generated since the models are trained on web-crawled images. Our task is restricted to generating visual metaphors from linguistic metaphors, and the human-AI collaboration setup should be considered as a creative aid tool. All data collected by human respondents were anonymized and only pertained to the data they were being shown. 
We do not report demographic or geographic information, given the limited number of respondents, so as to maintain full anonymity. Workers on UpWork were informed that that the work they were doing was going to be used for research purposes. They were paid a wage of 20$ per hour as decided by the workers themselves. Workers were paid their wages in full immediately upon the completion of their work. ## References Keiga Abe, Sakamoto Kayo, and Masanori Nakagawa. 2006. A computational model of the metaphor generation process. In Proceedings of the 28th Annual Meeting of the Cognitive Science Society, pages 937– 942, Vancouver, Canada. Psychology Press. Panos Achlioptas, Maks Ovsjanikov, Kilichbek Haydarov, Mohamed Elhoseiny, and Leonidas Guibas. 2021. ArtEmis: Affective Language for Visual Art. In *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 11564– 11574. Arjun R. Akula, Brendan Driscoll, Pradyumna Narayana, Soravit Changpinyo, Zhiwei Jia, Suyash Damle, Garima Pruthi, Sugato Basu, Leonidas Guibas, William T. Freeman, Yuanzhen Li, and Varun Jampani. 2023. Metaclue: Towards comprehensive visual metaphors research. In *CVPR 2023*. Yuri Bizzoni and Shalom Lappin. 2018. Predicting human metaphor paraphrase judgments with deep neural networks. In *Proceedings of the Workshop on* Figurative Language Processing, pages 45–55, New Orleans, Louisiana. Association for Computational Linguistics. Tim Brooks, Aleksander Holynski, and Alexei A Efros. 2022. Instructpix2pix: Learning to follow image editing instructions. *arXiv preprint arXiv:2211.09800*. Tuhin Chakrabarty, Smaranda Muresan, and Nanyun Peng. 2020. Generating similes effortlessly like a pro: A style transfer approach for simile generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6455–6469, Online. Association for Computational Linguistics. Tuhin Chakrabarty, Vishakh Padmakumar, and He He. 2022a. Help me write a poem: Instruction tuning as a vehicle for collaborative poetry writing. *arXiv* preprint arXiv:2210.13669. Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, and Smaranda Muresan. 2022b. FLUTE: Figurative language understanding through textual explanations. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 7139–7159, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Tuhin Chakrabarty, Xurui Zhang, Smaranda Muresan, and Nanyun Peng. 2021. MERMAID: Metaphor generation with symbolism and discriminative decoding. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4250–4261, Online. Association for Computational Linguistics. Lydia B. Chilton, Savvas Petridis, and Maneesh Agrawala. 2019. Visiblends: A flexible workflow for visual blends. In *Proceedings of the 2019 CHI* Conference on Human Factors in Computing Systems, CHI '19, page 1–14, New York, NY, USA. Association for Computing Machinery. Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, and William Yang Wang. 2022. Training-free structured diffusion guidance for compositional text-to-image synthesis. *arXiv preprint* arXiv:2212.05032. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychological bulletin*, 76(5):378. Michalle Gal. 2019. Visual metaphors and cognition: Revisiting the non-conceptual. 
In Kristof Nyiri and Andras Benedek, editors, Perspective on Visual Learning, Vol. 1. The Victory of the Pictorial Aga, pages 79–90. PhilPapers. E. Dario Gutiérrez, Ekaterina Shutova, Tyler Marghetis, and Benjamin Bergen. 2016. Literal and metaphorical senses in compositional distributional semantic models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 183–193, Berlin, Germany. Association for Computational Linguistics. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. *Advances in neural information processing systems*, 30. Zaeem Hussain, Mingda Zhang, Xiaozhong Zhang, Keren Ye, Christopher Thomas, Zuha Agha, Nathan Ong, and Adriana Kovashka. 2017. Automatic understanding of image and video advertisements. *2017* IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1100–1110. Ben Hutchinson, Jason Baldridge, and Vinodkumar Prabhakaran. 2022. Underspecification in scene description-to-depiction tasks. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1172– 1184, Online only. Association for Computational Linguistics. EunJeong Hwang and Vered Shwartz. 2023. Memecap: A dataset for captioning and interpreting memes. Bipin Indurkhya and Amitash Ojha. 2013. An Empirical Study on the Role of Perceptual Similarity in Visual Metaphors and Creativity. *Metaphor and Symbol*, 28(4):233–253. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423–438. Marzena Karpinska, Nader Akoury, and Mohit Iyyer. 2021. The perils of using Mechanical Turk to evaluate open-ended text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1265–1285, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ricardo Kleinlein, Cristina Luna-Jiménez, and Fernando Fernández-Martínez. 2022. Language does more than describe: On the lack of figurative speech in text-to-image models. arXiv preprint arXiv:2210.10578. George Lakoff. 1993. The Contemporary Theory of Metaphor. In Andrew Ortony, editor, *Metaphor* and Thought, pages 202–251. Cambridge University Press. Evelina Leivada, Elliot Murphy, and Gary Marcus. 2022. Dall-e 2 fails to reliably capture common syntactic processes. Emmy Liu, Chen Cui, Kenneth Zheng, and Graham Neubig. 2022a. Testing the ability of language models to interpret figurative language. Vivian Liu and Lydia B Chilton. 2022. Design guidelines for prompt engineering text-to-image generative models. In *CHI Conference on Human Factors in* Computing Systems, pages 1–23. Vivian Liu, Tao Long, Nathan Raw, and Lydia Chilton. 2023. Generative disco: Text-to-video generation for music visualization. arXiv preprint arXiv:2304.08551. Vivian Liu, Han Qiao, and Lydia Chilton. 2022b. Opal: Multimodal image generation for news illustration. In *Proceedings of the 35th Annual ACM Symposium* on User Interface Software and Technology, pages 1–17. Edward F. Mcquarrie and David Glen Mick. 1999. Visual Rhetoric in Advertising: Text-Interpretive, Experimental, and Reader-Response Analyses. Journal of Consumer Research, 26(1):37–54. 
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc. Savvas Petridis and Lydia B. Chilton. 2019. Human errors in interpreting visual metaphor. In Proceedings of the 2019 on Creativity and Cognition, C&C '19, page 187–197, New York, NY, USA. Association for Computing Machinery. Barbara J Phillips. 2003. Understanding visual metaphor in advertising. *Persuasive imagery*, pages 304–317. Barbara J. Phillips and Edward F. McQuarrie. 2004. Beyond Visual Metaphor: A New Typology of Visual Rhetoric in Advertising. *Marketing Theory*, 4(12):113–136. Adam Poliak, Aparajita Haldar, Rachel Rudinger, J Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Collecting diverse natural language inference problems for sentence representation evaluation. In *BlackboxNLP@ EMNLP*. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022a. Hierarchical textconditional image generation with clip latents. *arXiv* preprint arXiv:2204.06125. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022b. Hierarchical textconditional image generation with clip latents. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684–10695. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2021. Highresolution image synthesis with latent diffusion models. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. *arXiv preprint arXiv:2205.11487*. Asuka Terai and Masanori Nakagawa. 2010. A computational system of metaphor generation with evaluation mechanism. In International Conference on Artificial Neural Networks, pages 142–147, Thessaloniki, Greece. Springer. Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In *Proceedings of the 52nd Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 248–258, Baltimore, Maryland. Association for Computational Linguistics. Tony Veale. 2016. Round up the usual suspects: Knowledge-based metaphor generation. In *Proceedings of the Fourth Workshop on Metaphor in NLP*, pages 34–41, San Diego, California. Association for Computational Linguistics. 
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In *International Conference on Machine Learning*, pages 23318–23340. PMLR. Sitong Wang, Samia Menon, Tao Long, Keren Henderson, Dingzeyu Li, Kevin Crowston, Mark Hansen, Jeffrey V Nickerson, and Lydia B Chilton. 2023. Reelframer: Co-creating news reels on social media with generative ai. *arXiv preprint arXiv:2304.09653*. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc. Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2019. Visual entailment: A novel task for fine-grained image understanding. arXiv preprint arXiv:1901.06706. Ron Yosef, Yonatan Bitton, and Dafna Shahaf. 2023. Irfl: Image recognition of figurative language. *arXiv* preprint arXiv:2303.15445. Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. 2022. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789. Zhiwei Yu and Xiaojun Wan. 2019. How to avoid sentences spelling boring? Towards a neural approach to unsupervised metaphor generation. In *Proceedings* of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 861–871, Minneapolis, Minnesota. Association for Computational Linguistics. Dongyu Zhang, Minghao Zhang, Heting Zhang, Liang Yang, and Hongfei Lin. 2021. MultiMET: A multimodal dataset for metaphor understanding. *ACLIJCNLP 2021 - 59th Annu. Meet. Assoc. Comput.* Linguist. 11th Int. Jt. Conf. Nat. Lang. Process. Proc. Conf., pages 3214–3225. ## A Appendix A.1 Hyperparameters For Chain Of Thought ![12_Image_0.Png](12_Image_0.Png) Prompting We use the Instruct GPT-3 (davinci-002) model for Chain-of-Thought (CoT) prompting. To generate Objects to be Included, **Implicit Meaning** and **Visual Elaboration** we use the following hyperparameters: temperature=0.7,max tokens=256,top p=1.0,best of=1, frequency penalty=0.5,presence penalty=0.5. ## B Does Better Prompting Lead To Better Images? Language models are sensitive to prompting (Jiang et al., 2020), as are text-to-image diffusion-based models (Liu and Chilton, 2022). We employ CoT prompting to generate visual elaborations of linguistic metaphors using Instruct GPT-3 (davinci002). The alternative to CoT would be classic Completion prompting, which would require Instruct GPT-3 (davinci-002) to provide visual elaborations for the metaphors without first reasoning about objects and implicit meaning. We evaluate whether or not requiring Instruct GPT-3 (davinci-002) to reason about both the included objects and the implicit meaning **before** providing a visual elaboration improves the quality of the generated visual metaphor, by comparing to Completion prompting where the visual elaboration is directly predicted without the intermediate reasoning steps. 
For a fair comparison, we require the prompts to be as similar in content as possible, and use the same 5 few-shot examples as for CoT, only removing the intermediate information (objects to be included, implicit meaning) for the Completion prompt. We verify the hypothesis that CoT improves image quality through a small-scale human evaluation. We consider 50 metaphors for this experiment and generate visual descriptions using the prompt template shown in Table 6, which replicates the metaphors and visual elaborations in Table 1 but without the instruction section or the step by step reasoning used in CoT. The resulting prompts are passed to DALL·E 2 to generate images. We provide 3 annotators with the list of 50 metaphors, as well as the two images that are generated by Instruct GPT-3 (davinci-002) using CoT and Completion prompting without any further post-processing. Figure 7 shows the instructions provided to the annotators and an annotation example. To mitigate the subjectivity of the task, which is confirmed by a fair average pairwise Cohen's Kappa score (κ=0.26), we consider the majority vote selection for each example. Our results show that annotators select 27/50 images that are generated using CoT prompts, 11/50 using Completion prompts, and 12/50 images are judged to be of equal quality regardless of the prompting strategy used. Our results indicate that prompting can significantly improve the quality of the generated images suggesting that future work should investigate ways to further improve the quality of the generated visual metaphors by extracting more detailed specifications from LLMs. ## C Visual Entailment Data In order to perform the visual entailment task, we require metaphors that are associated with literal hypotheses and their corresponding labels (entailment, contradiction, neutral). **FLUTE** (Chakrabarty et al., 2022b) offers such data without any further processing. For the metaphors in CrossLing Metaphors (Tsvetkov et al., 2014) and Metaphor Paraphrases (Bizzoni and Lappin, 2018) we employ **recasting**, namely *"leveraging existing* datasets to create NLI examples", (Poliak et al., 2018) to convert them into textual entailment data. The metaphors in **Metaphor Paraphrases** (Bizzoni and Lappin, 2018) are each associated with four ranked candidate literal sentence. Each sentence is annotated with a value from 1 to 4, indicating the degree to which the sentence is a paraphrase of the original metaphoric sentence, where 4 stands for exact paraphrase. We consider each sentence and each of the candidate paraphrases as a sentence pair for a textual entailment classification problem 1. **Metaphor**: My lawyer is a shark. An illustration of a shark in a suit with fierce eyes and a suitcase and a mouth open with pointy teeth 2. **Metaphor**: I've reached my boiling point. An illustration of a boiling pot of water with a person's head popping out of the top, steam coming out of their ears, and an angry expression on their face. 3. **Metaphor**: Joe: that's because you're like a snail surfing on molasses. An illustration of a person with a snail shell on their back slowly sliding down a hill of molasses. 4. **Metaphor**: Absence is the dark room in which lovers develop negatives. An illustration of an ominous dark room with a film strip negatives hanging and a red heart in the center with a person in the corner looking sad and lonely. 5. **Metaphor**: My heart is a rose thorn. 
An illustration of a heart with a prickly thorn coming out of the center and barbs going outward. 6. **Metaphor**: My bedroom is a pig sty An illustration of a messy bedroom with clothes and garbage strewn about and a pig in the center rooting through the mess. Table 6: Simple Completion prompt to elicit visual elaboration for a given metaphor, using the same 5 few shot examples as in the CoT prompting strategy, but without the objects to be included and the implicit meaning. ![13_image_0.png](13_image_0.png) ## And Manually Annotate Them. The **CrossLing Metaphors** (Tsvetkov et al., 2014) dataset consists of 200 metaphoric English sentences, 200 literal English sentences, and their Russian translations. For the purposes of this study we were only concerned with using the 200 English metaphoric sentences to construct entailment pairs. We manually created three literal hypotheses with corresponding labels (entailment, contradiction, and neutral). The data was presented to 3 annotators to verify the quality of the labels. The annotators were presented with both the metaphoric premise and the literal hypothesis, and had to decide whether the hypothesis was entailed, contradicted, or neutral to the statement. The mean pairwise annotator agreement for the labels was .79. The gold label for the data was assigned by majority vote. ## C.1 Evaluation Interface Figure 9 and 10 show the evaluation interface for LLM-Diffusion Model collaboration and Human AI collaboration respectively. For the LLMDiffusion Model 5 images are presented in randomly shuffled order while for Human AI collaboration 2 images are presented one from LLMDALLE and the other from **HAIVMet**. ![14_image_0.png](14_image_0.png) # Visual Metaphor: All Was Like A Winter Morning After It Had Snowed All Night Enter your ranking among the images, separated by a comma. Example: 5,4,2,3,1 ![15_image_0.png](15_image_0.png) # Visual Metaphor: Love Is A Warrior'S Yearning Enter your ranking among the images between 2 images separated by a comma. Example: 1,2. If they are both are same just type "Tie" ![16_image_0.png](16_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Un-numbered section after Conclusion ✓ A2. Did you discuss any potential risks of your work? Un-numbered section after Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 1, 3, And 4 ✓ B1. Did you cite the creators of artifacts you used? Sections 1, 3, and 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 4.1 ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Our dataset is free to use without restriction, and as such does not require specification for the intended use. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 
Un-numbered Code of Ethics section after the Conclusion and Limitations ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Sections 1 and Limitations ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Sections 1, 3, and 4 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.2 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Sections 3 and 4 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Sections 3, 4; Appendix A and B ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Sections 3, 4; Appendix A and B ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 4 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Code of Ethics section ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? There was no IRB, and as such no protocol needed to be approved. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? We did not collect such information from our annotators as they were not the subjects of our research.
rahamim-etal-2023-text
Text Augmentation Using Dataset Reconstruction for Low-Resource Classification
https://aclanthology.org/2023.findings-acl.466
In the deployment of real-world text classification models, label scarcity is a common problem and as the number of classes increases, this problem becomes even more complex. An approach to addressing this problem is by applying text augmentation methods. One of the more prominent methods involves using the text-generation capabilities of language models. In this paper, we propose Text AUgmentation by Dataset Reconstruction (TAU-DR), a novel method of data augmentation for text classification. We conduct experiments on several multi-class datasets, showing that our approach improves the current state-of-the-art techniques for data augmentation.
# Text Augmentation Using Dataset Reconstruction For Low-Resource Classification

Adir Rahamim∗, Technion - Israel Institute of Technology, adir.rahamim@campus.technion.ac.il
Esther Goldbraich, IBM Research, esthergold@il.ibm.com
Guy Uziel, IBM Research, guy.uziel1@ibm.com
Ateret Anaby-Tavor, IBM Research, atereta@il.ibm.com

∗The work was completed during an internship at IBM Research.

## Abstract

In the deployment of real-world text classification models, label scarcity is a common problem. As the number of classes increases, this problem becomes even more complex. One way to address this problem is by applying text augmentation methods. One of the more prominent methods involves using the text-generation capabilities of language models. We propose Text AUgmentation by Dataset Reconstruction (TAU-DR), a novel method of data augmentation for text classification. We conduct experiments on several multi-class datasets, showing that our approach improves the current state-of-the-art techniques for data augmentation.

## 1 Introduction

The deployment of deep learning models in the real world requires an abundance of labels. However, labeled data is often difficult and expensive to obtain, especially when the models are deployed in highly specialized domains. Therefore, in this paper, we focus on data augmentation for text classification in low-resource environments. Text classification (Sebastiani, 2002) is fundamental to machine learning and natural language processing. It includes various tasks, such as intent classification (Kumar et al., 2019; Rabinovich et al., 2022), which is a vital component of many automated chatbot platforms (Collinaszy et al., 2017); sentiment analysis (Tang et al., 2015); topic classification (Tong and Koller, 2001; Shnarch et al., 2022); and relation classification (Giridhara et al., 2019). The design and development of such AI applications may begin with a dataset containing only a limited amount of data. To improve the performance of downstream models in such low-resource settings, a data augmentation mechanism is often implemented (Wong et al., 2016). To achieve this, new data are synthesized from existing training data. It has been demonstrated that the use of such mechanisms can significantly improve the performance of various neural network models. For computer vision and speech recognition, a number of well-established methods are available for synthesizing labeled data and enhancing classification accuracy. Some of the basic methods, which are also class preserving, include transformations such as cropping, padding, flipping, and shifting along time and space dimensions (Cui et al., 2015; Krizhevsky et al., 2017). However, the application of simple transformations for textual data augmentation is more challenging, since simple transformations often invalidate and distort the text, thereby producing grammatically and semantically incorrect texts that are different from the actual text distribution. Consequently, rule-based data augmentation methods for texts typically involve replacing one word with a synonym, deleting a word, or changing a word (Wei and Zou, 2019; Dai and Adel, 2020). Recent advances in text generation models (Radford et al., 2018) facilitate an innovative approach for handling scarce data situations. In an effort to reduce the cost of obtaining labeled in-domain data, Wang et al. (2021) use the self-training framework to generate pseudo-labeled training data from unlabeled in-domain data. Xu et al.
(2021) have recently demonstrated the difficulty in extracting such domain-specific unlabeled data from general corpora. A number of existing works (Ding et al., 2020; Anaby-Tavor et al., 2020; Yang et al., 2020) have overcome this difficulty by using the generation capabilities of pre-trained language models. In this paper, we follow the latter paradigm and propose Text Augmentation by Dataset Reconstruction (TAU-DR), a novel text augmentation algorithm that generates new sentences based on the reconstruction of the original sentences from the hidden representations of a pre-trained classifier. TAU-DR utilizes frozen auto-regressive language models by soft-prompt tuning, using a relatively small number of trainable parameters compared to the language model, and unlike most existing methods that rely on language models, it does not require an additional pretraining phase. During training, we extract the hidden representation from the pre-trained classifier and use a Multi-Layer Perceptron (MLP) to turn the hidden representation into a soft-prompt. The soft-prompt is then fed into the frozen language model. Our approach is motivated by the observation that if the pre-trained classifier is trained from a language model (i.e., BERT), then the hidden representation is a contextual embedding of the original sentence. Thus, the soft-prompt will also summarize contextual information from a small neighborhood of the hidden representation, giving the frozen language model additional information for enriching the original dataset. By using this training approach and manipulating the trained prompts, we are able to generate novel sentences with their corresponding pseudo-labels. Then, as in previous works (Anaby-Tavor et al., 2020; Wang et al., 2022), we apply a filtering mechanism and filter out low-quality sentences. We conduct experiments on four multi-class datasets: TREC, ATIS, Banking77, and T-Bot (in various low-resource settings) and show that our approach consistently outperforms the current state-of-the-art approaches. We also conduct several experiments measuring the quality of the generated sentences.1

1Our implementation will be released after the anonymity period.

Our contributions are two-fold, and can be summarized below:

- We propose a novel approach for data augmentation using dataset reconstruction. We demonstrate that our method achieves state-of-the-art performance on several text classification datasets.
- We suggest two novel filtering approaches for better exploitation of the generated sentences - one approach for cases where the evaluation set is available, and another approach for cases where such datasets are absent.

The remainder of the paper is organized as follows: Section 2 introduces the problem framework and relevant studies. In Section 3, we present TAU-DR and our approach. In Section 4, we conduct the experiments. Section 5 concludes the paper and includes a discussion of future work.

## 2 Problem Setup And Related Work

In this section, we introduce the data augmentation setting in low-resource text classification. Let Xtrain = {(xi, yi)}, i = 1, . . . , N, be a text classification dataset with L classes, where we denote xi to be the example and yi to be its corresponding label. We assume that for each class we have m examples, where m is a relatively low number. As in previous works (e.g., Anaby-Tavor et al. 2020; Wang et al. 2022), we assume the existence of a validation set Xval and a test set Xtest.2
2Because this assumption does not hold in some real-world scenarios, in Section 4.5 we abandon that assumption and discuss the no-validation case.

Our goal is to create an augmented dataset Xgen by using Xtrain so that, by training a classifier on the union of the generated and the original dataset Xtrain ∪ Xgen, we improve the performance of the same classifier trained on Xtrain. The performance of each classifier is measured on Xtest. The task of text augmentation is relatively challenging, since even small modifications can change the meaning and label of the text. By carefully setting up a rule-based approach, one can deal with this challenge. This was tried by Wei and Zou (2019), who proposed Easy Data Augmentation (EDA), which utilizes simple predefined rules to edit, remove, and substitute portions of the text while maintaining its meaning. Dai and Adel (2020) suggested a rule-based augmentation method named SDANER, tailored for named entity recognition. A different line of research, which is the prominent approach, uses pre-trained language models. Wu et al. (2019) proposed Conditional BERT (CBERT) for contextual data augmentation. Given a sentence and its label, words in the sentence are masked randomly. The label is then used as a context to predict substitute words while keeping the original sentence in the same class. Anaby-Tavor et al. (2020) introduced Language Model Based Data Augmentation (LAMBADA), which is also a conditional generation-based data augmentation approach. LAMBADA fine-tunes an entire language model, GPT-2, by concatenating all of the sentences together with their corresponding labels, thereby creating additional textual data on which the language model can be fine-tuned. Due to the noisiness of the generation process, a filtering process is used to ensure that only high-quality sentences remain. The filtering process consists of a classifier that was trained on the original dataset by taking those sentences with the top-K softmax scores. Wang et al. (2022) recently suggested PromDA. This approach first trains an entire pre-trained language model on the task of converting keywords to sentences from a general corpus. Then, using RAKE (Rose et al., 2010), keywords are extracted from the original dataset. By concatenating these keywords to a learned prefix, the language model from the previous step is used to reconstruct the original sentence. Then, the same filtering process as in LAMBADA is used, with the exception that all sentences for which the original classifier agrees with the pseudo-label are taken.

## 2.1 Soft-Prompts

TAU-DR, as will be discussed in the next section, exploits the language-generation capabilities of language models by using soft-prompts, one of the dominant approaches for parameter-efficient tuning. Prompt-based learning was introduced by Brown et al. (2020). Their study demonstrated that a large language model can be adapted for downstream tasks by carefully constructing prompts (i.e., textual instructions). A method proposed by Gao et al. (2020) for simplifying the construction process involves expanding prompts by using pre-trained language models. Each downstream task requires manual construction of discrete prompts. The construction of discrete prompts is still an independent process that is difficult to optimize together with downstream tasks. A study by Lester et al. (2021); Li and Liang (2021) suggests using soft-prompts.
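To make the idea concrete, the following is a minimal, generic sketch of soft-prompt tuning: a small matrix of continuous prompt vectors is prepended to the input embeddings of a frozen language model and is the only set of parameters that is trained. The model size, prompt length, and learning rate here are illustrative choices of ours, not settings taken from the paper.

```python
# Generic soft-prompt tuning sketch: only `prompt` is trainable; the LM is frozen.
import torch
import torch.nn as nn
from transformers import T5Tokenizer, T5ForConditionalGeneration

tok = T5Tokenizer.from_pretrained("t5-small")
lm = T5ForConditionalGeneration.from_pretrained("t5-small")
for p in lm.parameters():
    p.requires_grad_(False)                                  # frozen backbone

n_prompt = 10
prompt = nn.Parameter(torch.randn(n_prompt, lm.config.d_model) * 0.02)
opt = torch.optim.AdamW([prompt], lr=0.3)                    # only the prompt is updated

def soft_prompt_loss(sources, targets):
    enc = tok(sources, return_tensors="pt", padding=True)
    labels = tok(targets, return_tensors="pt", padding=True).input_ids
    labels[labels == tok.pad_token_id] = -100                # ignore padding in the loss
    tok_emb = lm.get_input_embeddings()(enc.input_ids)       # (B, T, d)
    soft = prompt.unsqueeze(0).expand(tok_emb.size(0), -1, -1)
    inputs_embeds = torch.cat([soft, tok_emb], dim=1)        # prepend the soft prompt
    mask = torch.cat(
        [torch.ones(tok_emb.size(0), n_prompt, dtype=enc.attention_mask.dtype),
         enc.attention_mask], dim=1)
    return lm(inputs_embeds=inputs_embeds, attention_mask=mask, labels=labels).loss

loss = soft_prompt_loss(["classify: book a flight to boston"], ["flight booking"])
loss.backward()
opt.step()
```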
Soft-prompts do not represent actual words, as opposed to hard prompts, and can be incorporated into frozen pretrained language models. As demonstrated by Li and Liang (2021), pre-trained language models (PLMs) with soft-prompts provide better performance in low-resource settings, and enable end-to-end optimization of downstream tasks.

## 3 Text Augmentation By Dataset Reconstruction (TAU-DR)

In this section, we introduce Text AUgmentation by Dataset Reconstruction (TAU-DR), our novel text augmentation algorithm.

Algorithm 1: Text Augmentation by Dataset Reconstruction (TAU-DR)
Require: Training dataset Xtrain, pre-trained classifier C*base*, pre-trained language model LM
%% **training phase**
1: **while** training steps not done **do**
2: **for** (x, y) in Xtrain **do**
3: Extract h from C*base*
4: P ← MLP(h) % transform the hidden representation into a soft-prompt
5: xˆ ← LM(P) % predict a sentence using the soft-prompt
6: θMLP ← θMLP − ∇θMLP Llm(x, xˆ)
7: **end for**
8: **end while**
%% **generation phase**
9: Xintra ← GENintra(LM, MLP, Xtrain)
10: Xinter ← GENinter(LM, MLP, Xtrain)
11: Xgen ← Xintra ∪ Xinter
%% **filtration phase**
12: Xgen ← Filtration(Xtrain, Xval, Xgen)

TAU-DR consists of three stages: training, generation, and filtration, as described below.

## 3.1 Training

We now describe the training phase in TAU-DR as shown in Algorithm 1. Given an example x from the original dataset, we extract its hidden representation, h, from the pre-trained classifier, which we denote by C*base* (line 3). For instance, if C*base* is a BERT classifier, it can be the [CLS] token representation in the last layer. The next step in line 4 is to apply a multi-layer perceptron (MLP) with parameters θMLP, and turn the hidden representation, h, into a prompt of length n denoted as P. P is then fed into the frozen language model LM (line 5). The training objective of the language model is to reconstruct the original sentence using only the hidden representation. The training step is illustrated in Figure 1.

## 3.2 Generation

To generate new sentences that will challenge the classifier and ultimately improve its accuracy, we
This is done by generating sentences using soft-prompts, which are created by combining soft-prompts from two different classes. First, we randomly sample two sentences from two different classes, x1, x2, and then, as detailed above, extract their soft-prompts denoted as P1 and P2, respectively. We then aggregate the two prompts by taking their weighted mean, Pagg = wP1 + (1 − w)P2, where 0 *< w <* 1 is sampled uniformly. In this case, we set the pseudolabel as the label of the closest prompt as illustrated in Figure 3. ## 3.3 Dynamic Consistency Filtering By generating new sentences for our classifier, we risk the creation of low-quality data. This can hap-3The closest work to this approach is the work of Asai et al. (2022), suggesting the aggregation of prompts for multitask generalization. pen if we set an incorrect pseudo-label or if the language model generates out-of-domain examples. Therefore, it is common to apply a consistency filtering mechanism (Anaby-Tavor et al., 2020; Wang et al., 2022). The consistency filtering suggested by AnabyTavor et al. (2020) used the pre-trained classifier and considered the top-K sentences (ordered by their softmax scores). Wang et al. (2022) also used the trained classifier. However, instead of using the top-K approach, they kept all the generated sentences for which the classifier agrees with the pseudo-label. Clearly, the chosen filtration method has a large effect on the final classifier, as it controls the data quality of the final trained classifier. The top-K approach might be too conservative, keeping a large safety margin, which results in filtering out most of the generated instances. On the other hand, keeping all the instances on which the classifier agrees with the pseudo-label might include many noisy-label sentences, resulting in a degraded classifier. We now present *Dynamic Consistency Filtering* - our filtering approach for a case where an evaluation set exists. In Section 4.5, we discuss the no- evaluation case. Our method relies upon the evaluation dataset to approximate the optimal portion of the generated instances to include in the augmented dataset. We do so by training k classifiers, one of which trained on a different quantile of the generated instances, ordered by their softmax scores (received from the pre-trained classifier C*base*). After training the k instances, we choose the best preforming classifier using the evaluation dataset. It is important to note that there is a possibility of applying the filtering mechanism in a recursive manner, for example, training a classifier on the filtered data and running that classifier on the original ![4_image_0.png](4_image_0.png) generated dataset, with the hope of improving the filtering of the instances. This way, one can further improve the performance of the final classifier, as discussed by Anaby-Tavor et al. (2020). ## 3.4 Training And Generating In ![4_Image_2.Png](4_Image_2.Png) Low-Resource Setting In a preliminary study we conducted, we investigated how over-training affects the quality of the generated text in terms of diversity and distance from the original train distribution. To show the effect of over-training, we apply LAMBADA (Anaby-Tavor et al., 2020) on several internal lowresource multi-class datasets. We trained LAMBADA for 2500 steps and generated augmented sentences every 100 steps. 
We then evaluated the quality of the generated sentences as a function of the trained steps by using distributional measures: Precision and Recall (Sajjadi et al., 2018) summarized as F1, DC (Naeem et al., 2020) and ![4_image_1.png](4_image_1.png) MAUVE (Pillutla et al., 2021) 4. We can observe, on Figure 4, that DC, Precision and Recall and MAUVE converges to 1. This suggest that without any control measures in place, the distribution of the generated text quickly converges into the training distribution. This is not a desired property since our goal is to generate texts which will expand the support of the training distribution. It is interesting to note that the nature of the results remains the same, even when soft-prompt tuning is applied. Therefore, to address the above, we deploy two heuristics. The first heuristic is to increase the number of training samples. We do so by using the EDA rule-based simple augmentation method discussed earlier (Wei and Zou, 2019). Please note that in this enrichment we do not consider the pseudo-labels, since our goal is to provide more reference points for the MLP training. Moreover, we checkpoint the MLP several times during training, and generate sentences from the different checkpoints. ## 4 Experiments 4.1 Setup We conduct experiments on four multi-class classification datasets (described in the next subsection). Each benchmark dataset is split into 80% train ,10% evaluation and 10% test. We then take the train dataset and sample K examples for each class where classes without K examples are removed, resulting in a shot-K dataset. In our experiments, we choose K ∈ (5, 10). As a base classifier, we choose the BERT-base model5, as in the study of Anaby-Tavor et al. (2020); Wang et al. (2021). 4The measures are introduced on Section 4.4. 5https://huggingface.co/bert-base-uncased Shot-5 Shot-10 Method ATIS TREC Banking77 T-Bot ATIS TREC Banking77 T-Bot C*base* 0.739 0.495 0.689 0.681 0.772 0.713 0.798 0.741 EDA 0.735 0.524 0.7 0.684 0.806 0.72 0.793 0.749 C-BERT 0.75 0.517 0.682 671 0.877 0.727 0.805 0.747 LAMBADA 0.88 0.566 0.709 0.703 0.871 0.745 0.787 0.74 PromDA 0.867 0.583 0.739 0.692 0.897 0.742 0.791 0.752 TAU-DR 0.906 0.641 0.733 0.71 0.933 0.773 0.839 0.788 The same set of hyperparameters is used for the training of C*base*, for training without the original data, and for training with the generated data. The performance of C*base* is evaluated during training using Xval. We compare TAU-DR to the methods discussed in Section 4.1: The rule-based data augmentation methods EDA (Wei and Zou, 2019); CBERT (Wu et al., 2019), LAMBADA (Anaby-Tavor et al., 2020), and PromDA (Wang et al., 2022) which is implemented with a T5-large model (700M parameters). All hyperparameters used for these methods are those recommended by the authors. We repeat the experiments five times and report the averaged accuracy for each shot-k dataset. For TAU-DR we used the T5-large model for all of our experiments. This model was fine tuned an additional 100k steps on the C4 dataset using the regular LM loss, to achieve better adaptivity to soft prompt tuning (Lester et al., 2021) 6. We choose MLP with 2 hidden layers and a ReLU activation. The prompt-length is set as 10 in all of our experiments. TAU-DR was trained for 100 epochs. We checkpointed the model every 20 epochs, resulting in 5 checkpoints. The pre-trained classifier C*base* used in our method is the same classifier discussed above. 
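For concreteness, the sketch below instantiates the training and generation phases of Algorithm 1 with roughly this configuration: a BERT-base C*base*, a frozen LM-adapted T5 backbone, and a two-hidden-layer MLP that maps the [CLS] vector to a soft prompt of length 10. It is a minimal sketch, not the exact training code: module and helper names (SoftPromptMapper, soft_prompt, train_step, generate) are ours, C*base* is assumed to already be fine-tuned on Xtrain, batching, checkpointing, EDA enrichment and filtering are omitted, and decoding from inputs_embeds assumes a recent transformers version that supports it.

```python
import torch
import torch.nn as nn
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          T5ForConditionalGeneration)


class SoftPromptMapper(nn.Module):
    """Two-hidden-layer MLP that maps the classifier's [CLS] vector to a soft prompt."""

    def __init__(self, cls_dim=768, prompt_len=10, lm_dim=1024):
        super().__init__()
        hidden = cls_dim * prompt_len // 2        # dim(h) * n / 2, as in Appendix B
        self.prompt_len, self.lm_dim = prompt_len, lm_dim
        self.mlp = nn.Sequential(
            nn.Linear(cls_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, prompt_len * lm_dim),
        )

    def forward(self, h):                          # h: (batch, cls_dim)
        return self.mlp(h).view(-1, self.prompt_len, self.lm_dim)


cls_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
c_base = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
lm_tok = AutoTokenizer.from_pretrained("google/t5-large-lm-adapt")
lm = T5ForConditionalGeneration.from_pretrained("google/t5-large-lm-adapt")
for p in list(c_base.parameters()) + list(lm.parameters()):
    p.requires_grad = False                        # classifier and LM stay frozen

mapper = SoftPromptMapper(lm_dim=lm.config.d_model)
opt = torch.optim.AdamW(mapper.parameters(), lr=1e-3, weight_decay=1e-2)


def soft_prompt(sentences):
    """Hidden representation h -> soft prompt P (Algorithm 1, lines 3-4)."""
    enc = cls_tok(sentences, return_tensors="pt", padding=True, truncation=True)
    h = c_base.bert(**enc).last_hidden_state[:, 0]  # [CLS] of the last layer
    return mapper(h)


def train_step(sentences):
    """Reconstruct the original sentence from its soft prompt (lines 5-6)."""
    prompts = soft_prompt(sentences)
    labels = lm_tok(sentences, return_tensors="pt", padding=True).input_ids
    labels[labels == lm_tok.pad_token_id] = -100   # ignore padding in the LM loss
    loss = lm(inputs_embeds=prompts, labels=labels).loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


@torch.no_grad()
def generate(prompt, max_new_tokens=40):
    """Decode a new sentence from an (aggregated) soft prompt with nucleus sampling."""
    ids = lm.generate(inputs_embeds=prompt, do_sample=True,
                      top_k=100, top_p=0.95, max_new_tokens=max_new_tokens)
    return lm_tok.batch_decode(ids, skip_special_tokens=True)


# Generation phase (Section 3.2): intra-class averages two same-class prompts
# (pseudo-label = that class); inter-class mixes prompts from two classes with a
# uniformly sampled weight (pseudo-label = the class of the closer prompt).
p1 = soft_prompt(["i want to book a flight to boston"])
p2 = soft_prompt(["show me flights from denver to boston"])
intra_sentence = generate((p1 + p2) / 2)
w = torch.rand(1).item()
inter_sentence = generate(w * p1 + (1 - w) * p2)
```

The last few lines show the two generation strategies of Section 3.2; the sampling parameters (top_k = 100, top_p = 0.95) match those reported in Appendix B.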
For the dynamic filtering, we use 10 classifiers with the same configuration as the pre-trained classifier, where each classifier is trained on a different portion of the generated dataset ordered by the softmax score of C*base*. The experimental results are shown in Table 1 for shot-5 and shot-10 for the different multi-class benchmarks. ## 4.2 Datasets All datasets used are classification datasets, with different numbers of classes and across several do-6https://huggingface.co/google/t5-large-lm-adapt mains, three of which are available in the public domain. Table 2: Properties of the used multi-class datasets Airline Travel Information Systems (ATIS, Hemphill et al. **1990):** The ATIS dataset provides a large set of queries about flight information along with the intent, the subject of the various questions. Text Retrieval Conference (TREC, **Hovy et al.** 2001): TREC is a question classification dataset that consists of a variety of questions from different areas and their intent. | Name | # Classes | Domain | |-----------|-------------|------------------------| | ATIS | 17 | Flight reservation | | TREC | 50 | Open-domain questions | | Banking77 | 77 | Banking | | T-Bot | 87 | Telco customer support | Banking77 (Casanueva et al., **2020):** The Banking77 dataset offers questions from singledomain banking, annotated with their labels. Teleco-Bot (T-Bot): An internal intent classification dataset, includes data used for the training of chatbots used by telco companies for customer support. The datasets used are summarized in Table 2 ## 4.3 Main Results First, we can observe that the addition of the generated data from TAU-DR to the classification models significantly improves the performance of C*base* and outperforms the existing method. Overall, the EDA rule-based approach does not lead to a significant improvement over C*base* on the more challenging datasets Banking77 and T-Bot on both shot-5 and shot-10. On the other hand, the language-model-based approaches, i.e., C-BERT, LAMBADA, PromDA and TAU-DR outperform the rule-based approach. PromDA can provide better results than LAMBADA on the ATIS and TREC datasets. However, with the exception of Banking77 (shot-5) it fails when considering domainspecific datasets with a larger number of classes, such as Banking77 and T-Bot. On T-Bot and Banking77 in the shot-10 setting none of the methods expect TAU-DR where able to give a statistically significant improvement over C*base*. The accuracy improvements of TAU-DR over C*base* on the ATIS dataset are approximately 20% for both shot-5 and shot-10. For TREC the improvement rate is 29% for the shot-5 setting and 9% for the shot-10 setting. For Banking77, the average improvement rate is 4.5% and for the challenging T-Bot dataset the average improvement rate is 5%. ## 4.4 Estimating The Generation Quality We now turn to estimate the quality of text generated by the different methods. We use the following measures: - Recall and Precision (Sajjadi et al., 2018): Given two distributions *P, Q*, this measure compares their "precision", or how much of Q can be generated by a "part" of P, while "recall" measures how much of P can be generated by a "part" of Q. Recall and Precision are summarized as F1. - Complexity (Kour et al., 2021): Quantifies how difficult observations are, given their true class label and how they will challenge the classifier. The measure can be used to automatically determine a baseline performance threshold.. 
- MAUVE (Pillutla et al., 2021): This metric measures the gap between two text distributions by calculating the area under the information divergence curve. A recent study (Kour et al., 2022) compared several statistical and distributional measures. The different measures were compared over several desired criteria. In their experiments, MAUVE turned out to be the most robust performance measure for text generation quality. In this set of experiments, we took the generated text and compared it to *the test set*, which represents the actual text distribution. A desired property of the augmented texts is that their distribution will expand the intersection between the support of the train distribution with the test distribution. Thus, we can compare the generation quality of the different methods by looking on how close they are to the test distribution. We report the average results on Table 3. The implementation details for this experiment are detailed on Appendix B. The measures of the text generated by TAU-DR are superior to 2 out of 3 in all configurations. Showing that we can generate text that is close to the actual distribution of the data. In addition, by looking at the Atis dataset, we observe that we were able to produce more challenging and complex sentences for the classifier. It has not been explored if or how these measures relate to the classifier's performance. Nevertheless, these measures can provide some insight into how well a model can reproduce the test distribution. ## 4.5 Dynamic Consistency Filtering With No Evaluation Our suggested dynamic filtering method is shown to be effective in filtering out low-quality generated data. However, the existence of such datasets is not obvious in real-world scenarios. In this subsection, we suggest an approach for filtering the generated data without relying on the existence of an evaluation dataset. The method can be described as follows: As in the Dynamic Consistency Filtering approach, for each class we order the generated examples according to their softmax scores obtained from the pre-trained classifier C*base*. We then filter out all instances on which the classifier disagrees with the pseudo-label. Then we train k classifiers on a different quantile of the ordered data (i.e., for k = 5, we train the i-th classifier i = 1*, ...,* 5, on the i/5 quantile). We then use the obtained classifiers to filter out the generated instances based on the majority vote of the classifiers, we denote this approach as TAU-DRmaj . As shown in Table 4, with the exception ATIS (shot-5) and Banking77 (shot-5), TAU-DRmaj also outperforms the benchmark methods and on average only slightly degrades the performance of TAU-DR. 
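To make the two filtering variants concrete, the sketch below implements one reading of them: order generations by the C*base* softmax score of their pseudo-label, train k classifiers on nested top-quantiles of that ordering, then either keep the quantile whose classifier scores best on the evaluation set (Section 3.3) or keep the generations that a majority of the k classifiers label consistently with their pseudo-label (Section 4.5). The callables train_fn, predict_fn, and eval_fn, and all other names, are ours; in the paper each of these classifiers uses the same configuration as C*base*.

```python
import numpy as np


def consistency_filter(gen_texts, pseudo_labels, base_scores,
                       train_fn, predict_fn, k=10, eval_fn=None, val_set=None):
    """Filter generated data using k classifiers trained on nested top-quantiles.

    base_scores[i] is C_base's softmax score for pseudo_labels[i] on gen_texts[i];
    generations whose pseudo-label already disagrees with C_base are assumed to
    have been dropped beforehand (as in Section 4.5).
    """
    order = np.argsort(base_scores)[::-1]                 # most confident first
    texts = [gen_texts[i] for i in order]
    labels = [pseudo_labels[i] for i in order]

    quantile_clfs, cutoffs = [], []
    for j in range(1, k + 1):                             # top j/k of the ordered data
        cut = max(1, int(len(texts) * j / k))
        cutoffs.append(cut)
        quantile_clfs.append(train_fn(texts[:cut], labels[:cut]))

    if val_set is not None:
        # Dynamic Consistency Filtering (Section 3.3): keep the quantile whose
        # classifier performs best on the evaluation set.
        best = max(range(k), key=lambda j: eval_fn(quantile_clfs[j], val_set))
        return texts[:cutoffs[best]], labels[:cutoffs[best]]

    # TAU-DR_maj (Section 4.5): keep a generation if a majority of the k
    # quantile classifiers agrees with its pseudo-label.
    kept_texts, kept_labels = [], []
    for text, label in zip(texts, labels):
        votes = sum(predict_fn(clf, text) == label for clf in quantile_clfs)
        if votes > k / 2:
            kept_texts.append(text)
            kept_labels.append(label)
    return kept_texts, kept_labels
```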
| Shot-5 | Shot-10 | | | | | | |-----------|-----------|--------|-------------|------|--------|-------------| | Method | F1↑ | MAUVE↑ | Complexity↑ | F1↑ | MAUVE↑ | Complexity↑ | | ATIS | ATIS | | | | | | | LAMBADA | 0.66 | 0.78 | 10.01 | 0.76 | 0.8 | 5.43 | | PromDA | 0.7 | 0.71 | 8.44 | 0.74 | 0.76 | 4.67 | | TAU-DR | 0.75 | 0.7 | 13.67 | 0.73 | 0.82 | 5.91 | | Banking77 | Banking77 | | | | | | | LAMBADA | 0.79 | 0.72 | 2.67 | 0.86 | 0.8 | 2.56 | | PromDA | 0.83 | 0.75 | 3.66 | 0.85 | 0.75 | 3.65 | | TAU-DR | 0.86 | 0.78 | 2.61 | 0.88 | 0.87 | 2.31 | Shot-5 Shot-10 Method ATIS TREC Banking77 T-Bot ATIS TREC Banking77 T-Bot BASE 0.739 0.495 0.689 0.681 0.792 0.713 0.798 0.741 TAU-DR 0.906 0.641 0.733 0.71 0.933 0.773 0.839 0.788 TAU-DRmaj 0.847 0.615 0.738 0.726 0.911 0.761 0.833 0.767 ## 5 Conclusion And Future Work In this paper, we present TAU-DR, a novel textaugmentation method for low-resource classification using dataset reconstruction. We test our method on four multi-class classification datasets in various few-shot scenarios and show that our approach outperforms the state-of-the-art approaches. In the future, we plan to explore the learned prompt space and check how it can be used for generating helpful sentences. In our preliminary experiment, we found that the averages of the prompts were concentrated in a narrow cone. This concentration hinders the exploitation of the geometry in the learned prompt space. The above observation is aligned with other findings regarding the anisotropy of the word embedding space in pre-trained language models (Li et al., 2020; Ethayarajh, 2019). Finally, we wish to explore if and how additional information (e.g, in-domain textual-data) might improve the performance of text augmentation methods on highly specialized domains. ## Limitations To address the low-resource data in the training of TAU-DR, we apply two heuristics, dataset enrichment and generation from different checkpoints. Despite being effective, they require additional computational time that might be challenging in applications with low-computational resources. A possible approach to reduce the computational time might be to average the checkpoints. We believe that this might lead to competitive results, with a significant reduction in computational time, since checkpoint averaging proved to be an effective approach in low-resource settings. Another limitation is when the original dataset is in a highlyspecialized domain that might contain domainspecific phrases that were most likely not included in the pre-training data of the language model. The results obtained by existing data augmentation approaches will most likely exhibit only marginal improvement. ## Ethics Statement Text generation by nature entails a number of ethical considerations when considering possible applications. The main failure is when the model generates text with undesirable properties (bias etc.) for training the classifier but these properties are not present in the original training data. Because our model converges and learns to generate data close to the underlying source material, the above considerations, in our approach, are negligible. As a result, the generated text may be harmful if users of such models are unaware that such issues appear on their training data or if they fail to consider them, e.g., by selecting and evaluating data more carefully. ## References Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. 2020. 
Do not have enough data? deerom learning to the rescue! In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7383–7390. Akari Asai, Mohammadreza Salehi, Matthew E Peters, and Hannaneh Hajishirzi. 2022. Attentional mixtures of soft prompt tuning for parameter-efficient multi-task knowledge sharing. *arXiv preprint* arXiv:2205.11961. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Iñigo Casanueva, Tadas Temcinas, Daniela Gerz, ˇ Matthew Henderson, and Ivan Vulic. 2020. ´ Efficient intent detection with dual sentence encoders. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 38–45, Online. Association for Computational Linguistics. Juraj Collinaszy, Marel Bundzel, and Iveta Zolotova. 2017. Implementation of intelligent software using ibm watson and bluemix. Acta Electrotechnica et Informatica, 17(1):58–63. Xiaodong Cui, Vaibhava Goel, and Brian Kingsbury. 2015. Data augmentation for deep neural network acoustic modeling. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 23(9):1469– 1477. Xiang Dai and Heike Adel. 2020. An analysis of simple data augmentation for named entity recognition. arXiv preprint arXiv:2010.11683. Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, and Chunyan Miao. 2020. Daga: Data augmentation with a generation approach for low-resource tagging tasks. *arXiv preprint arXiv:2011.01549*. Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics. Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. Making pre-trained language models better few-shot learners. *arXiv preprint arXiv:2012.15723*. Praveen Kumar Badimala Giridhara, Chinmaya Mishra, Reddy Kumar Modam Venkataramana, Syed Saqib Bukhari, and Andreas Dengel. 2019. A study of various text augmentation techniques for relation classification in free text. *ICPRAM*, 3:5. Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27,1990. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. *arXiv preprint arXiv:1904.09751*. Eduard Hovy, Laurie Gerber, Ulf Hermjakob, ChinYew Lin, and Deepak Ravichandran. 2001. Toward semantics-based answer pinpointing. In *Proceedings of the First International Conference on Human* Language Technology Research. George Kour, Samuel Ackerman, Orna Raz, Eitan Farchi, Boaz Carmeli, and Ateret Anaby-Tavor. 2022. Measuring the measuring tools: An automatic evaluation of semantic metrics for text corpora. arXiv preprint arXiv:2211.16259. George Kour, Marcel Zalmanovici, Orna Raz, Samuel Ackerman, and Ateret Anaby-Tavor. 2021. Classifier data quality: A geometric complexity based method for automated baseline and insights generation. *arXiv preprint arXiv:2112.11832*. 
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2017. Imagenet classification with deep convolutional neural networks. *Communications of the* ACM, 60(6):84–90. Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha Talukdar. 2019. Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In *Proceedings of* the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3609–3619. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. *arXiv preprint arXiv:2104.08691*. Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119–9130, Online. Association for Computational Linguistics. Xiang Lisa Li and Percy Liang. 2021. Prefixtuning: Optimizing continuous prompts for generation. *arXiv preprint arXiv:2101.00190*. Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, and Jaejun Yoo. 2020. Reliable fidelity and diversity metrics for generative models. In *International Conference on Machine Learning*, pages 7176–7185. PMLR. Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. Mauve: Measuring the gap between neural text and human text using divergence frontiers. *Advances in Neural Information Processing Systems*, 34:4816–4828. Ella Rabinovich, Matan Vetzler, David Boaz, Vineet Kumar, Gaurav Pandey, and Ateret Anaby-Tavor. 2022. Gaining insights into unrecognized user utterances in task-oriented dialog systems. arXiv preprint arXiv:2204.05158. Xinnuo Xu, Guoyin Wang, Young-Bum Kim, and Sungjin Lee. 2021. Augnlg: Few-shot natural language generation using self-trained data augmentation. *arXiv preprint arXiv:2106.05589*. Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text mining: applications and theory, pages 1–20. Eyal Shnarch, Alon Halfon, Ariel Gera, Marina Danilevsky, Yannis Katsis, Leshem Choshen, Martin Santillan Cooper, Dina Epelboim, Zheng Zhang, Dakuo Wang, et al. 2022. Label sleuth: From unlabeled text to a classifier in a few hours. arXiv preprint arXiv:2208.01483. Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In *Proceedings of the* 2015 conference on empirical methods in natural language processing, pages 1422–1432. Yufei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, and Daxin Jiang. 2022. Promda: Prompt-based data augmentation for low-resource nlu tasks. *arXiv preprint* arXiv:2202.12499. Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. 2021. Towards zero-label language learning. *arXiv* preprint arXiv:2109.09193. ## A Ablation Jason Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196. Sebastien C Wong, Adam Gatt, Victor Stamatescu, and Mark D McDonnell. 2016. Understanding data augmentation for classification: when to warp? In 2016 international conference on digital image computing: techniques and applications (DICTA), pages 1– 6. IEEE. Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. 
Conditional bert contextual augmentation. In *International conference on computational science*, pages 84–95. Springer. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. Yiben Yang, Chaitanya Malaviya, Jared Fernandez, Swabha Swayamdipta, Ronan Le Bras, Ji-Ping Wang, Chandra Bhagavatula, Yejin Choi, and Doug Downey. 2020. Generative data augmentation for commonsense reasoning. *arXiv preprint* arXiv:2004.11546. Mehdi SM Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. 2018. Assessing generative models via precision and recall. *Advances in neural information processing systems*, 31. In our ablation studies, we evaluated the independent effect of 5 different components on our method: enrichment, intra-class, inter-class, checkpointing and dynamic filtering. In this ablation study we want to emphasize the contribution of each module for the success of our method. Results are summarized in Table 5. During the MLP training we used dataset enrichment in order to add more reference points. As we can observe from the results this enrichment is an important aspect of our method as our method without dataset enrichment results in an average degradation of 4.5 accuracy points. In addition, we evaluated the effect of each generation method we proposed - intra-class and inter-class. The intra-class generation is meant to enrich the number of examples in a given class, whereas inter-class is meant to highlight the difference between different classes. We can see that both generation methods are vital components of our method, with degradation of 2 and 3.25 accuracy points when not using intra-class or inter-class, respectively. Moreover, we determined the efficacy of the checkpointing paradigm. We utilized checkpointing to overcome the over training affects as discussed on Section 3. Based on the results, we can see that the checkpointing paradigm plays an important role in the method's success. Generating Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. *ACM computing surveys* (CSUR), 34(1):1–47. Simon Tong and Daphne Koller. 2001. Support vector machine active learning with applications to text classification. *Journal of machine learning research*, 2(Nov):45–66. sentences using only the last checkpoint results in degradation of 3.75 accuracy points. The last ablation conducted is to evaluate the performance of the dynamic filtering method. As discussed earlier in all the generation methods this plays a vital component in keeping high-quality instances. On the ablation experiment we kept all the sentences on which C*base* agrees with the pseudo-label. Not surprisingly this approach caused a major decrease in the accuracy with an average of 6.5 accuracy points. ## B Additional Implementation Details The optimizer used for training the MLP is AdamW, we tested the following learning rate {1e − 3, 1e − 2, 1e − 4} and a 1e − 2 weight decay. We experimented with the following batch sizes {16, 32, 64}. he size of the hidden layer is set as dim(h) ∗ n/2, where n is the prefix-length. The MLP architecture was not optimized during our experiments. We experiment also with prefix-lengths of 5, 10, 15, 20. These different prefix lengths have a negligible impact, since we used a medium-sized model. This aligns with the findings of Lester et al. (2021) . 
We used an internal multi-class dataset which was not reported in the main paper to search for the best training configuration. The classifier was trained for 5000 steps with 8 batch size with AdamW optimizer and 1e − 5 learning rate We run all experiments on a single NVIDIA A100 GPU. For the generation phase, we used the nucleus sampling (Holtzman et al., 2019) with k = 100, p = 0.95 both for the intra- and inter-generation approaches. To calculate Precision and Recall, MAUVE and Complexity we sampled 1000 instances and compared against 1000 sentences in the generated set. We repeated this process 10 times for every one of the 5 splits for each dataset. | Shot-5 | | | | | |------------------------------|-------|-------|-----------|-------| | Method | ATIS | TREC | BANKING77 | WVA | | TAU-DR | 0.906 | 0.641 | 0.733 | 0.71 | | TAU-DR w/o enrichment | 0.858 | 0.596 | 0.713 | 0.679 | | TAU-DR w/o intra-gen. | 0.894 | 0.617 | 0.728 | 0.672 | | TAU-DR w/o inter-gen. | 0.837 | 0.631 | 0.702 | 0.695 | | TAU-DR w/o checkpointing | 0.875 | 0.602 | 0.717 | 0.694 | | TAU-DR w/o dynamic filtering | 0.761 | 0.57 | 0.697 | 0.708 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section number 5 ✓ A2. Did you discuss any potential risks of your work? Section number 5 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 4 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix B and Section 4 ✓ C3. 
Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 and Appendix B ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
ghosh-etal-2023-lasque
LaSQuE: Improved Zero-Shot Classification from Explanations Through Quantifier Modeling and Curriculum Learning
https://aclanthology.org/2023.findings-acl.467
A hallmark of human intelligence is the ability to learn new concepts purely from language. Several recent approaches have explored training machine learning models via natural language supervision. However, these approaches fall short in leveraging linguistic quantifiers (such as 'always' or 'rarely') and mimicking humans in compositionally learning complex tasks. Here, we present LaSQuE, a method that can learn zero-shot classifiers from language explanations by using three new strategies - (1) modeling the semantics of linguistic quantifiers in explanations (including exploiting ordinal strength relationships, such as 'always' > 'likely'), (2) aggregating information from multiple explanations using an attention-based mechanism, and (3) model training via curriculum learning. With these strategies, LaSQuE outperforms prior work, showing an absolute gain of up to 7% in generalizing to unseen real-world classification tasks.
# Lasque**: Improved Zero-Shot Classification From Explanations Through** Quantifier Modeling And Curriculum Learning Sayan Ghosh∗ Rakesh R Menon∗**Shashank Srivastava** UNC Chapel Hill {sayghosh, rrmenon, ssrivastava}@cs.unc.edu ## Abstract A hallmark of human intelligence is the ability to learn new concepts purely from language. Several recent approaches have explored training machine learning models via natural language supervision. However, these approaches fall short in leveraging linguistic quantifiers (such as 'always' or 'rarely') and mimicking humans in compositionally learning complex tasks. Here, we present LaSQuE, a method that can learn zero-shot classifiers from language explanations by using three new strategies - (1) modeling the semantics of linguistic quantifiers in explanations (including exploiting ordinal strength relationships, such as 'always' > 'likely'), (2) aggregating information from multiple explanations using an attention-based mechanism, and (3) model training via curriculum learning. With these strategies, LaSQuE outperforms prior work, showing an absolute gain of up to 7% in generalizing to unseen realworld classification tasks.1 ## 1 Introduction Learning from language (also 'conversational machine learning') is a new paradigm of machine learning where machines are taught tasks through natural language supervision in the form of explanations and instructions (Andreas et al., 2018; Arabshahi et al., 2020; Weller et al., 2020; Efrat and Levy, 2020). Language explanations of concepts have been explored for training classification models in few-shot and zero-shot settings (Mei et al., 2022; Srivastava et al., 2017, 2018; Hancock et al., 2018; Chai et al., 2020; Obeidat et al., 2019; Hanjie et al., 2022). However, current approaches fall short in fully leveraging supervision available in language explanations and using learning strategies that humans routinely employ in learning new tasks. First, most ∗Equal contribution 1Our code can be found at: https://github.com/ sgdgp/LaSQuE ![0_image_0.png](0_image_0.png) approaches, such as LNL (Srivastava et al., 2017), and BabbleLabble (Hancock et al., 2018), do not model supervision within free-form language explanations in the form of quantifiers. Quantifiers are linguistic elements that can dictate the vagueness and perceived confidence of relations expressed in a statement (Solt, 2009; Moxey and Sanford, 1986). For example, with statements such as *'some poisonous mushrooms are red in color'*, we can infer that a red mushroom is not always poisonous because of the quantifier *'some'*. Moreover, quantifiers are a ubiquitous part of natural language and universal across languages. Second, prior approaches do not reason about differences in salience and utility of multiple explanations in learning a new task, weighing them equally in the absence of labeled data. This is sub-optimal since certain explanations can be naturally harder to incorporate or have inherently less value in learning a concept2. Thirdly, when learning a set of tasks, humans often learn 'simpler' concepts first and gradually build towards 'harder' concepts (Newport, 1990). Curriculum learning (Bengio et al., 2009), a method where tasks are introduced in an incremental and adaptive manner, has been shown to be effective in a wide range of complex machine learning tasks (Platanios et al., 2019; Tay et al., 2019; Narvekar et al., 2017). However, its application in the context of learning from explanations has yet to be explored. 
The deteriorating generalization of classifiers with the increasing complexity of explanations in prior work (Menon et al., 2022) further motivates the need for curriculum learning for learning from explanations. To address the first shortcoming, our approach LaSQuE (Learning Strategies For Quantified Explanations) explicitly models quantifier semantics and learns them directly from labeled classification data. However, directly learning from labeled data can lead to quantifier semantics that is inconsistent with human perceptions of their numerical estimates. Hence, we provide weak supervision in the form of ordinal relations describing the relative strengths of quantifiers (e.g., 'always' > 'likely') to supplement the learning of quantifier semantics that comply with human judgments. Second, we design an attention-based mechanism to model the relative importance of multiple explanations in classifying an example. We also qualitatively analyze the attention weights to identify characteristics of explanations found most helpful. Finally, we consider different axes of explanation complexity and empirically evaluate the utility of curriculum learning on three different curricula. As our test bed, we use the recently proposed CLUES benchmark (Menon et al., 2022) for learning classification tasks from language explanations. Our work focuses on learning classifiers from language explanations where the explanations provide the logic to perform the classification. (e.g., the explanation 'pungent mushrooms are toxic', provides the logic that mushrooms with a pungent odor should be classified as toxic). CLUES is the largest available benchmark that contains explanations conformant with this perspective. It differs from some other benchmarks (Mishra et al., 2022; Sanh et al., 2022), where the language component provides the *description of the task* instead (such as, 'classify the mushrooms as toxic or poisonous'), which can be used to train/prompt a model. On CLUES, LaSQuE achieves an improvement of 17% and 7%, respectively, on the synthetic and realworld benchmarks over baselines. The rest of this paper is structured as follows: we provide a description of the preliminaries in §3. In §4 we describe LaSQuE and our learning strategies along with supporting empirical performance. §5 discusses performance of LaSQuE on real world classification tasks. Our contributions are: - We introduce LaSQuE, which models semantics of linguistic quantifiers, and uses an attentionbased mechanism to identify salient explanations for learning classifiers from language. LaSQuE significantly outperforms previous methods in generalizing to unseen classification tasks. - We empirically demonstrate the utility of curriculum learning in training classifiers from language by experimenting with three curricula. ## 2 Related Work Natural Language Quantification. Previous work has studied the role of quantifiers in natural language from multiple perspectives, such as formal logic (Barwise and Cooper, 1981), linguistics (Lobner, 1986; Bach et al., 2013), cognitive psychology (Kurtzman and MacDonald, 1993), and natural language processing to guide statistical models (Srivastava et al., 2018). In the above mentioned works, quantifiers have been typically modelled in either set-theoretic terms (Barwise and Cooper, 1981) or by representing them probabilistically (Moxey and Sanford, 1993; Yildirim et al., 2013; Srivastava et al., 2018). Our work is closely related to Srivastava et al. 
(2018), who also model the effects of quantifiers in modifying the belief of a classifier. However, we differ from Srivastava et al. (2018) as we learn the beliefs associated with quantifiers during training as opposed to defining them apriori with fixed values. More recently, Cui et al. (2022) discusses the challenges in understanding quantifiers, specifically in the context of NLI, and contributes a focused test dataset to benchmark NLI models on their ability to understand quantifiers. While both Cui et al. (2022) and our work broadly highlight the need to model quantifiers to advance language understanding, we differ in the nature of downstream tasks studied (diverse classification tasks in our work vs NLI in Cui et al. (2022)). Our approach (LaSQuE) contains a dedicated module that enables us to *learn* quantifier semantics, which apply to a wide range of tasks spanning multiple domains. Curriculum Learning. Curriculum learning (Bengio et al., 2009) is a technique to learn complex tasks through a graded exposure of examples ranging from easy-to-hard difficulty. Recent works in machine learning (Jiang et al., 2018; Guo et al., 2018; Hacohen and Weinshall, 2019) have successfully demonstrated the utility of curriculum learning in learning image classification tasks. More recently, Xu et al. (2020) also demonstrated the effectiveness of curriculum learning for a set of natural language understanding tasks drawn from the GLUE benchmark (Wang et al., 2018). However, prior works build a curriculum of easy-to-hard examples to improve model performance on individual tasks. Rather than examples, we build a curriculum of easy-to-hard tasks in our work, similar to Mao et al. (2019). In contrast to Mao et al. (2019) though, we focus on learning structured data classification tasks from language explanations as opposed to visual question answering tasks. ## 3 Preliminaries 3.1 Setup We employ a cross-task generalization setup (Mishra et al., 2022), and train classifiers using multi-task training over a set of tasks T*seen* and evaluate for zero-shot generalization on a set of tasks Tnovel (Tnovel ∩ T*seen* = ϕ). The evaluation metric is the zero-shot classification accuracy on novel classification tasks. Datasets. For experiments, we use the recently proposed CLUES benchmark (Menon et al., 2022). The benchmark is composed of synthetic and realworld classification datasets. In CLUES, inputs are structured, consisting of attribute name-attribute value pairs (see Figure 1 for example). We use the 'Features-as-Text' or 'FaT' representation to encode the examples following Menon et al. (2022), i.e., given the input as in Figure 1, we encode the input as text tokens in the form odor | pungent [SEP] ...gill-color | white [SEP]. Additional details and statistics about CLUES can be found in Appendix A. Baselines. To compare the efficacy of our proposed strategies on CLUES, we use the following two baselines in our experiments: (1) RoBERTa w/o Exp (does not use explanations) (Liu et al., 2019) and (2) ExEnt (Menon et al., 2022). ExEnt uses Natural Language Inference (NLI) as an intermediate step to perform classification. 
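As a small illustration of the 'Features-as-Text' encoding described above, the helper below (a hypothetical name, not taken from the benchmark code) linearizes an attribute-value example into the `attribute | value [SEP]` form used as input by the baselines and by LaSQuE.

```python
def features_as_text(example: dict) -> str:
    """Linearize a structured example into the FaT form 'attr | value [SEP] ...'."""
    return " ".join(f"{attr} | {value} [SEP]" for attr, value in example.items())


# e.g., for the mushroom example of Figure 1:
fat = features_as_text({"odor": "pungent", "gill-color": "white"})
# -> "odor | pungent [SEP] gill-color | white [SEP]"
```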
The operations in ExEnt can be broadly grouped into three steps: (1) *NLI step*: obtain scores from an entailment prediction model (RoBERTa+MNLIfinetuned) for the alignment between the input and each explanation available for a task; (2) Entailment → *Classification scores conversion*: convert the entailment scores for each input-explanation pair into classification scores based on the nature of the explanation; and (3) *Aggregation*: average the classification scores from each input-explanation pair to obtain an aggregate score for classification. Convert aggregate scores to probabilities using softmax and train the model end-to-end using the crossentropy loss. For more details on ExEnt, we refer the reader to Menon et al. (2022). ## 4 Lasque In this section, we present our method, LaSQuE, and provide detailed descriptions and empirical support for the different learning strategies that are part of LaSQuE– (1) modeling quantifier semantics, (2) using attention for aggregation across explanations, and (3) curriculum learning. ## 4.1 Modeling Quantifier Semantics Quantifiers are a ubiquitous part of natural language and can help express varying strengths of relations in a statement. Prior work in cognitive science (Chopra et al., 2019; Steinert-Threlkeld, 2021) and machine learning (Srivastava et al., 2018; Menon et al., 2022) shows that people tend to use quantifiers often in learning or teaching tasks. Hence, modeling quantifiers is important for building systems that can mimic humans in efficiently learning from natural language. However, past work on computational modeling of quantifiers is sparse. To the best of our knowledge, no prior work has explored learning quantifier semantics in a data-driven way. In this work, we devise methods to explicitly model the differential semantics of quantifiers present in explanations to guide classifier training. Figure 2 shows architecture of our model, LaSQuE. To formalize our approach to modeling quantifier semantics, consider a task t with the set of class labels L and set of explanations E. Given the Feature-as-Text (FaT) representation of a structured data example x ∈ t and an explanation ![3_image_0.png](3_image_0.png) ej ∈ E, our model takes FaT(x) and ej as input and passes it through a pretrained RoBERTa+MNLI model, following previous work (Menon et al., 2022). For each example-explanation pair, the NLI model outputs entailment, neutral, and contradiction scores (denoted as s j e, s j n, and s j crespectively). In the next step, we incorporate quantifier semantics to assign logits to the set of class labels, L, using the outputs of the NLI model. In this work, we model the semantics of a quantifier by a probability value signifying the strength of the quantifier, i.e., the confidence of the quantifier in conveying the beliefs expressed in the explanation. Then the class logit assignment is done as follows. If: - Explanation ej **mentions a label** lexp: An illustrative example is 'If head equal to 1, then it is usually dax'. In this example the label mentioned, lexp is 'dax'. Let p*quant* denote the strength (as a probability) of the quantifier mentioned in the explanation3. In the aforementioned example, p*quant* will be the probability associated with the quantifier 'usually'. Let P(l) denote the probability of any label l ∈ L. 
Then, $$\begin{array}{c}{{\log(\mathbb{P}(l_{e x p}))\propto p_{q u a n t}\times s_{e}^{j}}}\\ {{\qquad\qquad+(1-p_{q u a n t})\times s_{c}^{j}+s_{n}^{j}/|L|}}\end{array}\tag{1}$$ $$\begin{array}{c}{{\forall\;l\in L\setminus\{l_{e x p}\},}}\\ {{\qquad\qquad\log(\mathbb{P}(l))\propto p_{q u a n t}\times s_{c}^{j}}}\\ {{\qquad\qquad+(1-p_{q u a n t})\times s_{e}^{j}+s_{n}^{j}/|L|}}\end{array}\tag{2}$$ Equations 1 and 2 define the likelihood of each label in terms of the NLI model outputs. The entailment score (se) denotes how strongly the explanation influences the classifier to label the input as lexp. On the other hand, the contradiction score, sc denotes how strongly the explanation influences the classifier to not label the input as lexp. These 'influences' are additionally modified based on the quantifier strength as shown in equations 1 and 2. Note: If quantifiers are absent in the explanations, we assume p*quant* is 1. - Explanation ej **mentions negation of a label** 'lexp**' (NOT** lexp): An illustrative example is 'If head equal to 1, then it is usually not dax', where 'dax' is the label mentioned (lexp). The roles of s j c and s j e as described in the previous equations are reversed. Following this step, we average the class logits from each example-explanation pair to aggregate the decisions. Finally, we apply a softmax over the resulting class scores to obtain a distribution over class labels and train the model to minimize the cross-entropy loss, LCE. Approaches to learn quantifier semantics. We experiment with the following approaches to learn the probability values of quantifiers: ![4_image_0.png](4_image_0.png) - Finetuning pre-defined probability values: We initialize the quantifier probability values (p*quant*) with pre-defined values and fine-tune them while training LaSQuE. These initial estimates can be specified from domain knowledge or by an expert. In this work, we adopt the quantifier values from Srivastava et al. (2018).4 We refer to the model learned using this approach as LaSQuE (predefined init). - Learning probability values for the quantifiers from scratch: We start from random initialization and then learn the probability values of each quantifier while training LaSQuE. We refer to the model learned using this approach as LaSQuE (random init). - Ordinal ranking as weak supervision: We explore another form of supervision by specifying ordinal relationships between pairs of quantifiers based on their relative strengths. To define ordinal relationships, we re-purpose the quantifier probability values in Srivastava et al. (2018). For example, quantifiers such as 'likely' and 'often' associated with the values 0.7 and 0.5 respectively are defined by the relationship, 'likely' > 'often'. We leverage the ordinal relations to guide the learning of quantifier semantics through a ranking loss, following Pavlakos et al. (2018). Given a pair of quantifiers qi and qj (i ≠ j), the ranking loss is defined as: $${\mathcal{L}}_{i,j}={\begin{cases}\log(1+\exp(p_{q_{i}}-p_{q_{j}})),&{\mathbf{p}}_{\mathbf{q_{i}}}^{*}>\mathbf{p}_{\mathbf{q_{j}}}^{*}\\ {(p_{q_{i}}-p_{q_{j}})}^{2},&{\mathbf{p}}_{\mathbf{q_{i}}}^{*}=\mathbf{p}_{\mathbf{q_{i}}}^{*}\end{cases}}$$ yes used can be found in the $\Delta t$. 4Full list of quantifiers used can be found in the Appendix. where, p ∗ q refers to the subjective probability value of a quantifier, q, in Srivastava et al. (2018). 
Further, we define $${\mathcal{L}}_{r a n k}=\sum_{(q_{i},q_{j})\in Q}{\mathcal{L}}_{i,j}\qquad\qquad(3)$$ where, Q denotes the full set of quantifiers present in the explanations of CLUES (§A.1). The final loss is a weighted sum of classification loss (LCE) and ranking loss (L*rank*). $${\mathcal{L}}_{t o t a l}={\mathcal{L}}_{C E}+\lambda{\mathcal{L}}_{r a n k}$$ $$(4)$$ where, λ denotes the weight of ranking loss. We use λ = 10 in this work, chosen using validation performance. We refer to the model learned using this approach as LaSQuE (ordinal). Performance on **CLUES-Synthetic**: To evaluate the effectiveness of natural language quantification in learning classifiers from language explanations, we experiment on a collection of 100 tasks for each of the 48 different complexities from CLUES-Synthetic. The complexities vary based on the presence of conjunctions, negations, and quantifiers in the task explanations. For each complexity, we train a classifier and evaluate its generalization to novel tasks of the same complexity. Figure 3 shows the results of different variants of LaSQuE and ExEnt across the different task complexities as the relative performance gain over the RoBERTa w/o Exp. baseline for zero-shot classification of examples from unseen tasks. For ease of visualization, we have averaged the results across binary and multiclass classification tasks in the ![5_image_0.png](5_image_0.png) figure. Post-averaging, we plot sets of four bars corresponding to the evaluations of the four models (three LaSQuE variations + ExEnt) on each of the 24 task complexities resulting from negations, conjunctions, and quantifiers. Overall, we find that explicit modeling of quantifier semantics helps to learn better zero-shot classifiers. In particular, LaSQuE expectedly performs much better than previous approaches on tasks with quantified explanations. Further, while ExEnt is weaker than RoBERTa w/o exp. baseline on certain task complexities, LaSQuE outperforms or match the baselines on almost all task complexities. Expectedly, the generalization ability of models decrease with the increasing complexity of explanations due to changes in the structure of explanations or the presence of negations. | METHOD | ACCURACY (↑) | |--------------------------|----------------| | ExEnt | 54.7 | | LaSQuE (random init) | 56.9♦ | | LaSQuE (predefined init) | 59.7♦♣ | | LaSQuE (ordinal) | 59.9♦♣ | Table 1 shows the average accuracy of different LaSQuE variants and ExEnt over tasks in CLUES-Synthetic. LaSQuE (ordinal) performs the best across majority of the synthetic task complexities in CLUES with a significant 5.2% absolute improvement across all tasks complexities over ExEnt. Further, LaSQuE (predefined init) performs comparably with LaSQuE (ordinal) in many cases (5.0% vs 5.2% absolute improvement over ExEnt) but struggles in tasks where explanations have negations in both clauses and labels. The poor performance of LaSQuE (random init) compared to LaSQuE (predefined init) and LaSQuE (ordinal) demonstrates the challenge of jointly learning quantifier semantics and a classifier only from labels. Nevertheless, LaSQuE (random init) outperforms ExEnt significantly by 2.2% points (absolute) on average across all synthetic task complexities. Analyzing the learned quantifier estimates for the LaSQuE variant whose quantifier values are finetuned from predefined values (LaSQuE (predefined init) in Figure 4), we observe the final learned probability values are close to the initialization values. 
On the other hand, we note that LaSQuE (ordinal) learns three clusters of quantifier probabilities that match with our intuition of high-strength (probability above 0.95), intermediate strength (probability around 0.7), and low-strength quantifiers (probability close to 0). Even though LaSQuE (ordinal) makes little difference between quantifiers within a cluster, we observe that weak supervision in the form of ordinal ranks is sufficient to develop models competent with, even surpassing, LaSQuE (predefined) that uses predefined initialization. Finally, we observe LaSQuE (random init) struggles to learn any interpretable ranking for quantifiers. On further analysis, we identify that LaSQuE (random init) can learn the quantifier semantics reasonably well for simple binary tasks. However, it struggles to learn reasonable quantifier semantics in the presence of negations, conjunctions, and disjunctions. 7408 ## 4.2 Aggregating Explanations With Attention To mimic human learning, models need to identify salient explanations that can be potentially useful in classifying an input. As previously mentioned, in the absence of labeled data for a task, previous work on learning from explanations does not differentiate between multiple explanations in terms of their salience and utility for classifying an example. For example, ExEnt averages the class logits from multiple explanations for making predictions, implicitly considering all explanations equally salient for classifying an example. To model the varying importance of each explanation towards deciding the class label, we use attention for the aggregation step. We obtain the attention weights by using a feed-forward network over the [CLS] representations obtained from the intermediate NLI model. The attention weights are then normalized using softmax. The final aggregated class logits for the label l is ∑ m j=1 ajz l j, where ajis the attention weight for each explanation ej, and z l j denotes the logit for label l using ej. The aggregated class logits are converted to probabilities using softmax, and the model is trained using cross-entropy loss. ![6_image_0.png](6_image_0.png) Performance on **CLUES-Synthetic**: To evaluate the role of attention, we experiment with two models, one using mean and the other using attention for aggregation. Each model is fine-tuned from the RoBERTA+MNLI backbone on the training tasks of CLUES-Synthetic. Figure 5(a) shows the generalization performance for two variants of LaSQuE. Using attention for aggregation across explanations results in significantly better generalization accuracy (50.68% vs 46.04% ; p < 0.1, paired t-test). While technically simple, we see that this modification allows the model to behave in conceptually sophisticated ways. Attention weight analysis: Figure 5(b) shows a histogram of average attention weights from LaSQuE for different explanation lengths. We find that longer explanations (typically explanations with nested conjunctions and disjunctions) get lower attention weights on average. This seems reasonable and intuitive since complex explanations are likely harder for the model to interpret correctly, so relying on them may be riskier. Further, we find that explanations containing quantifiers receive higher attention on average than explanations without quantifiers (0.44 vs 0.35), further highlighting the value of modeling quantifiers in explanations. Explanations containing 'definitely' and 'frequently' received higher attention than explanations containing other quantifiers. 
Surprisingly, the average attention weights were comparable for explanations with and without negation. ## 4.3 Curriculum Learning From Figure 3, it is clear that the generalization abilities of models diminish dramatically with the increasing complexity of tasks and explanations. Thus, we next investigate using curriculum learning (Bengio et al., 2009), which has shown significant successes in learning complex tasks, for learning classifiers from explanations. We define the 'complexity' of an explanation under three axes here - (1) the type of classification task (binary vs multiclass) served by the explanation, (2) presence of negations in the explanation, and (3) structure of the explanation (whether the explanation contains conjunction/disjunctions or nested clauses). Using curriculum learning we empirically evaluate if training on a classification task with 'easier' (less complex) explanations first gives any advantage when learning a task with 'harder' (more complex) explanations. In this work, we explore the following curricula: - Binary → multiclass: We first train classifiers on binary classification tasks and then on multiclass classification tasks. - No negations → having negations in labels and clauses: We train on tasks with explanations that contain no negation followed by training on tasks with explanations that have negations in them. Note that negation can appear in the clauses or before a class label in the explanation. - No conjunctions/disjunctions → tasks with ![7_image_0.png](7_image_0.png) nested conjunctions and disjunctions: We first train on tasks with simple explanations without any conjunctions or disjunctions. Following this, we train on tasks having explanations that contain one conjunction/disjunction and then, on tasks with explanations that contain nested clauses. Figure 6 shows the results of curriculum learning on the synthetic tasks of CLUES. We find that LaSQuE trained through curriculum learning (denoted by 'curriculum' bars) outperforms LaSQuE trained only on the most challenging task set in the corresponding curriculum (denoted by 'standard' bars) on the generalization accuracy of novel hardest tasks in the corresponding curriculum. However, we notice that LaSQuE has minimal benefits from training in a curriculum learning fashion on the conjunctions curriculum (shown in green). We hypothesize that jointly learning quantifiers and classifiers might be challenging, so we experiment with another setup where we reduce the learning problem to only modeling task complexities by freezing the quantifier semantics with the semantics learned by LaSQuE on simple synthetic binary tasks. With this modification, we find that curriculum learning is much more effective in all three curricula as seen from the improved average generalization performance (denoted by 'pretrained curricula' bars in Figure 6). Notably, we find curriculum learning to be most effective in handling negations obtaining an absolute improvement of 17.11% on the generalization accuracy. The low gains achieved through curriculum learning for handling structural complexity indicates a need to model the role of conjunctions and disjunctions explicitly. We leave this for future work to explore. We further analyze the progression of the zeroshot generalization accuracies as we increase the complexity of tasks as we move forward in the curriculum. We defer this result and discussion to the Appendix §C. 
Briefly, our results suggest that models tend to perform better on more complex tasks at the expense of slight performance drops on simpler tasks as the curriculum progresses. ## 5 Performance On Real-World Tasks ![7_image_1.png](7_image_1.png) Comparison with **ExEnt**. In the previous sections, we established the effectiveness of our proposed strategies on a large number of synthetic tasks from CLUES. Here, we empirically evaluate LaSQuE on the 36 real-world classification tasks from CLUES using the aforementioned strategies. In Figure 7, we find that directly trying to train LaSQuE fails to surpass the baselines (even when using attention to aggregate over explanations) as the comparatively low number of explanations in CLUES-Real hinders the model from learning quantifier semantics and classification jointly. To alleviate this issue, we pre-train on the synthetic tasks and then fine-tune the learned model on the real tasks, which we also see as a natural type of curriculum learning. We find that pre-training on synthetic tasks (LaSQuE (syn2real)) gives a relative gain of 6.7% in generalization accuracy over ExEnt. On the contrary, if we pre-train ExEnt using the same set of synthetic tasks, we find that the resultant model, ExEnt (syn2real), is inferior to ExEnt in terms of generalization accuracy (as shown in figure 7). LaSQuE (syn2real) outperforms ExEnt (syn2real) (58.68% vs 52.94%), showing that LaSQuE is better in transferring the skills learned over synthetic tasks to real-world tasks. Next, we evaluate the utility of curriculum learning on real tasks. We start with a pre-trained LaSQuE on synthetic tasks and then fine-tune it first on binary tasks of CLUES-Real followed by training on multiclass tasks of CLUES-Real. We find that curriculum learning results in the best generalization (LaSQuE (syn2real + curriculum)) performing significantly better than ExEnt (relative gain of 12.7%; p < 0.005, paired t-test) on CLUES-Real. Comparison with Large Instruction-tuned models. Recent works show that large language models (LLMs) fine-tuned on multiple classification tasks have an ability for zero-shot classification on new tasks (Ouyang et al., 2022; Sanh et al., 2022; Chung et al., 2022). These models have been primarily trained on unstructured text classification tasks and instructions that define the task rather than providing logic for classification. Given the emergent ability of large language models (Wei et al., 2022), we test the performance of such models on CLUES-Real and compare them with our best LaSQuE model. Specifically, we compare against the publicly available T0-3B (Sanh et al., 2022) and FLAN-T5-XXL (Chung et al., 2022) models. We report the generalization accuracy over 16 real-world tasks with and without using explanations in the prompt for T0-3B and FLAN-T5-XXL in Table 2. We find that both T0-3B and FLAN-T5- XXL with explanations in the prompt (47.90% and 42.30% respectively) perform worse than our best LaSQuE (61.90%) on the same set of tasks. This shows that our strategies for LaSQuE instill stronger inductive biases into a much smaller model (125M for LaSQuE vs 3B for T0/ 11B for FLAN-T5-XXL). Further, adding explanations in the prompt lowers performance of both T0-3B and FLAN-T5-XXL showing that these models struggle in understanding the classification logic described in form of natural language explanations, for structured classification tasks. 
Future work should explore improved techniques for using large instruction-tuned models under zero-shot settings for structured classification tasks guided by natural language explanations. ## 6 Conclusion We have presented effective and generalizable strategies to learn classifiers from language explanations. While our results are promising, our analysis also highlights several open challenges in learn- | METHOD | ACCURACY (↑) | |------------------------|----------------| | LaSQuE (best) | 61.90% | | T0-3B (w/o exp.) | 49.47% | | T0-3B (w/ exp.) | 47.90% | | FLAN-T5-XXL (w/o exp.) | 44.47% | | FLAN-T5-XXL (w/ exp.) | 42.30% | ing from language. In particular, LaSQuE struggles to learn quantifier semantics without quantifierspecific supervision (in the form of pre-defined initialization or ordinal relations), especially when tasks have complex explanations (due to the presence of negations/conjunctions/disjunctions). Further, our modeling of quantifiers as fixed probability values is restrictive. Future work can also explore explicit modeling of negations, conjunctions and disjunctions for learning from explanations. ## 7 Limitations In this work, we introduce LaSQuE, which models and learns the differential semantics of linguistic quantifiers present in natural language explanation to train a classifier guided by these explanations. We evaluate the efficacy of LaSQuE over baselines on the CLUES benchmark. This work assumes that only a single quantifier is present in the explanations. However, in real-world settings, explanations may contain multiple quantifiers. Modeling the composition of quantifiers can be an interesting direction for future work to make the paradigm of learning from explanations more robust toward fuzzy concepts expressed in real-world explanations. For our experiments, we assume perfect extraction of quantifiers and limit our analysis to a limited set of quantifiers in this work. Furthermore, we assume that the effect of quantifiers in a sentence is the same irrespective of the domain of the sentence. For example, consider two sentences 'pungent mushrooms are usually toxic' and 'people who smoke regularly usually suffer from cancer'. Here the effect of *'usually'* is not exactly the same for two sentences that are from different domains. However, LaSQuE is not sensitive to the task domain while modeling the semantics of the quantifier. Future work can investigate variations in the semantics of the same quantifier across different domains and also how to incorporate/learn such domain-specific differences (for example, by modeling the semantics of a quantifier as a probability distribution rather than a point value). ## Ethics And Broader Impact All our experiments are performed over publicly available datasets, specifically datasets (including language explanations) from CLUES benchmark (Menon et al., 2022). The datasets do not contain any information that uniquely identifies the crowdworkers involved in data collection. We do not perform any additional annotation or human evaluation in this work. Our method, LaSQuE can learn classifiers over structured data using language explanations provided as part of input to the classifier. LaSQuE is built over existing pre-trained language model, RoBERTa (Liu et al., 2019). We do not foresee any risks with our method if the inputs to our model are appropriate for the task. 
Any measures to counteract erroneous inputs (that may be provided deliberately, potentially exploiting unwanted biases) or curb the biases of pre-trained language models are beyond the scope of this work. The broader impact of this research in the longer term could increase the accessibility of predictive technologies for ordinary users (non-experts), enabling them to customize AI technologies through natural language interactions. ## Acknowledgments This work was supported in part by NSF grant DRL2112635. The views contained in this article are those of the authors and do not necessarily reflect the views or opinions of the funding agency. ## References Jacob Andreas, Dan Klein, and Sergey Levine. 2018. Learning with latent language. In *Proceedings of* the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2166–2179, New Orleans, Louisiana. Association for Computational Linguistics. Forough Arabshahi, Kathryn Mazaitis, Toby Jia-Jun Li, Brad A Myers, and Tom Mitchell. 2020. Conversational learning. Elke Bach, Eloise Jelinek, Angelika Kratzer, and Barbara BH Partee. 2013. Quantification in natural lan- guages, volume 54. Springer Science & Business Media. Jon Barwise and Robin Cooper. 1981. Generalized quantifiers and natural language. In *Philosophy, language, and artificial intelligence*, pages 241–301. Springer. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48. Duo Chai, Wei Wu, Qinghong Han, Wu Fei, and Jiwei Li. 2020. Description based text classification with reinforcement learning. In *Proceedings of the* 37th International Conference on Machine Learning, ICML'20. JMLR.org. Sahil Chopra, Michael Henry Tessler, and Noah D. Goodman. 2019. The first crank of the cultural ratchet: Learning and transmitting concepts through language. In *CogSci*. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Ruixiang Cui, Daniel Hershcovich, and Anders Søgaard. 2022. Generalized quantifiers as a source of error in multilingual NLU benchmarks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4875–4893, Seattle, United States. Association for Computational Linguistics. Avia Efrat and Omer Levy. 2020. The turking test: Can language models understand instructions? arXiv preprint arXiv:2010.11982. Sheng Guo, Weilin Huang, Haozhi Zhang, Chenfan Zhuang, Dengke Dong, Matthew R Scott, and Dinglong Huang. 2018. Curriculumnet: Weakly supervised learning from large-scale web images. In *Proceedings of the European Conference on Computer* Vision (ECCV), pages 135–150. Guy Hacohen and Daphna Weinshall. 2019. On the power of curriculum learning in training deep networks. In *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pages 2535– 2544. PMLR. Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher Ré. 2018. Training classifiers with natural language explanations. 
In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 1884–1895, Melbourne, Australia. Association for Computational Linguistics. Austin W Hanjie, Ameet Deshpande, and Karthik Narasimhan. 2022. Semantic supervision: Enabling generalization over output spaces. arXiv preprint arXiv:2202.13100. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generalization via natural language crowdsourcing instructions. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (to appear)*. Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. 2020. Array programming with NumPy. Nature, 585(7825):357–362. Linda M. Moxey and Anthony J. Sanford. 1986. Quantifiers and Focus. *Journal of Semantics*, 5(3):189–206. Sanmit Narvekar, Jivko Sinapov, and Peter Stone. 2017. Autonomous task sequencing for customized curriculum design in reinforcement learning. In *IJCAI*, pages 2536–2542. Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2018. Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels. In International Conference on Machine Learning, pages 2304–2313. PMLR. Rasha Obeidat, Xiaoli Fern, Hamed Shahbazi, and Prasad Tadepalli. 2019. Description-based zero-shot fine-grained entity typing. In *Proceedings of the* 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 807–814, Minneapolis, Minnesota. Association for Computational Linguistics. Howard S Kurtzman and Maryellen C MacDonald. 1993. Resolution of quantifier scope ambiguities. *Cognition*, 48(3):243–279. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B. Tenenbaum, and Jiajun Wu. 2019. 
The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In *International Conference on Learning Representations*. Georgios Pavlakos, Xiaowei Zhou, and Kostas Daniilidis. 2018. Ordinal depth supervision for 3d human pose estimation. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 7307–7316. Lingjie Mei, Jiayuan Mao, Ziqi Wang, Chuang Gan, and Joshua B. Tenenbaum. 2022. FALCON: Fast visual concept learning by integrating images, linguistic descriptions, and conceptual relations. In *International* Conference on Learning Representations. Rakesh R Menon, Sayan Ghosh, and Shashank Srivastava. 2022. CLUES: A benchmark for learning classifiers using natural language explanations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6523–6546, Dublin, Ireland. Association for Computational Linguistics. Linda M Moxey and Anthony J Sanford. 1993. Prior expectation and the interpretation of natural language quantifiers. *European Journal of Cognitive Psychology*, 5(1):73–91. Elissa L Newport. 1990. Maturational constraints on language learning. *Cognitive science*, 14(1):11–28. Eric Jones, Travis Oliphant, Pearu Peterson, et al. 2001–. SciPy: Open source scientific tools for Python. Sebastian Lobner. 1986. Quantification as a major module of natural language semantics. In *Studies in discourse representation theory and the theory of generalized quantifiers*, pages 53–86. De Gruyter. Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, and Tom Mitchell. 2019. Competence-based curriculum learning for neural machine translation. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1162–1172, Minneapolis, Minnesota. Association for Computational Linguistics. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations. Stephanie Solt. 2009. The semantics of adjectives of quantity. Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1527–1536, Copenhagen, Denmark. Association for Computational Linguistics. Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2018. Zero-shot learning of classifiers from natural language quantification. In *Proceedings of the 56th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 306–316, Melbourne, Australia. Association for Computational Linguistics. Shane Steinert-Threlkeld. 2021. 
Quantifiers in natural language: Efficient communication and degrees of semantic universals. *Entropy*, 23(10):1335. Yi Tay, Shuohang Wang, Anh Tuan Luu, Jie Fu, Minh C. Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, and Aston Zhang. 2019. Simple and effective curriculum pointer-generator networks for reading comprehension over long narratives. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 4922–4931, Florence, Italy. Association for Computational Linguistics. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of the* 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. Transactions on Machine Learning Research. Survey Certification. Orion Weller, Nicholas Lourie, Matt Gardner, and Matthew E. Peters. 2020. Learning from task descriptions. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 1361–1375, Online. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. 2020. Curriculum learning for natural language understanding. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 6095–6104, Online. Association for Computational Linguistics. Ilker Yildirim, Judith Degen, Michael Tanenhaus, and Florian Jaeger. 2013. Linguistic variability and adaptation in quantifier meanings. In *Proceedings of the* Annual Meeting of the Cognitive Science Society, volume 35. ## A Details On Clues CLUES (Menon et al., 2022) is a recently proposed benchmark of classification tasks paired with natural language explanations. The benchmark consists of 36 real-world classification tasks (CLUES-Real) as well 144 synthetic classification tasks (CLUES-Synthetic). The tasks and explanations of the benchmark are in English language. The real-world classification tasks were created using resources from UCI Machine Learning repository, Kaggle, and Wikipedia tables. The explanations for real-world tasks were crowdsourced. The synthetic tasks were created programmatically to study the performance of models under different levels of task complexities. 
The 48 different complexities in CLUES-Synthetic arise from the: (a) presence of negations in clauses and/or labels, (b) structure of explanations (conjunctions/disjunctions/nested), (c) presence of quantifiers in explanations, and (d) binary vs multiclass classification task. The explanations for CLUES-Synthetic are generated programmatically using templates. In this work, we follow the train and test splits for CLUES-Real from Menon et al. (2022). Additionally, we train on 70% of the labeled examples of the seen tasks and perform zero-shot generalization test over the 20% examples of each task in CLUES-Real. For the extremely small tasks, we use the entire set of examples for zero-shot testing. The seen-unseen task splits for CLUES-Real and CLUES-Synthetic that we use for experiments in this paper is the same as that in Menon et al. (2022). ## A.1 List Of Quantifiers | QUANTIFIERS | PROBABILITY | |-----------------------------------------------------------|---------------| | "always", "certainly", "definitely" | 0.95 | | "usually", "normally", "generally", "likely", "typically" | 0.70 | | "often" | 0.50 | | "sometimes", "frequently", | 0.30 | | "occasionally" | 0.20 | | "rarely", "seldom" | 0.10 | | "never" | 0.05 | The full list of quantifiers along with their associated probability values are shown in Table 3. Table 3: Probability values used for quantifiers in CLUES. These values are based on Srivastava et al. (2018). ## B Training Details In this section we provide details about implementation such as hyperparameter details, and details about hardware and software used along with an estimate of time taken to train the models. ## B.1 Hyper-Parameter Settings For all the transformer-based models we use the implementation of HuggingFace library (Wolf et al., 2020). All the model based hyper-parameters are thus kept default to the settings in the HuggingFace library. We use the publicly available checkpoints to initialize the pre-trained models. For RoBERTa based baselines we use 'roberta-base' checkpoint available on HuggingFace. For our intermediate entailment model in ExEnt, we fine-tune a pretrained checkpoint of RoBERTa trained on MNLI corpus ('textattack/roberta-base-MNLI' from HuggingFace). When training on CLUES-Synthetic, we use a maximum of 64 tokens for our baseline RoBERTa w/o Exp. and ExEnt. We used the AdamW (Loshchilov and Hutter, 2019) optimizer commonly used to fine-tune pretrained Masked Language Models (MLM) models. For fine-tuning the pre-trained models on our benchmark tasks, we experimented with a learning rate of 1e − 5. In order to learn the quantifier probabilities, we search for the correct learning rate to use in {1e − 3, 2e − 3, 5e − 3, 9e − 3, 1e − 2, 2e − 2, 3e − 2} and use 1e − 2 for our reported experiments based on the best validation accuracy obtained while training and testing on the binary classification datasets with no negation and conjunction complexities in explanations/concepts. Batch sizes was kept as 2 with gradient accumulation factor of 8. The random seed for all experiments was 42. We train all the models for 20 epochs. Each epoch comprises of 100 batches, and in each batch the models look at one of the tasks (in a sequential order) in the seen split. In the curriculum learning experiments, we run the model on each task type for 20 epochs and select the best model during a particular step of the curriculum based on the validation scores of the seen tasks. 
Finally, the chosen best checkpoint is used to initialize the model for the next step of the curriculum. ![13_image_0.png](13_image_0.png) ## B.2 Hardware And Software Specifications All the models are coded using Pytorch 1.4.05 (Paszke et al., 2019) and related libraries like numpy (Harris et al., 2020), scipy (Jones et al., 2001–) etc. We run all experiments on a Tesla V100-SXM2 GPU of size 16GB, 250 GB RAM and 40 CPU cores. ## B.3 Training Times - Training on CLUES-Real : The baseline RoBERTa w/o Exp model typically takes 3 seconds on average for training on 1 batch of examples. ExEnt and LaSQuE (all variants) also take comparable amount of time to train on 1 batch. In 1 batch, the models go through 16 examples from the tasks in seen split. - Training on CLUES-Synthetic : All the models take comparatively much lesser time for training on our synthetic tasks owing to lesser number of explanations on average for a task. For training on 1 batch, all models took 1 seconds or less to train on 1 batch of examples from CLUES-Synthetic. - Training for curriculum learning: The run time of a curriculum learning episode depends on the number of tasks in an episode. In Figure 6, the binary-multiclass curriculum takes 2 hours to train, while negations take 4 hours, and conjunctions take 3 hours. The same time frame applies for the results in Figure 8. ## C Extended Analysis Of Curriculum Learning In Figure 8, we show the trajectories of generalization performance as we increase the complexity along three independent axes in the three curricula. Briefly, our results indicate that in learning tasks with more classes, generalization increases on multiclass classification tasks at the expense of 5https://pytorch.org/ a slight performance decrease on the more straightforward binary tasks. In the curriculum focused on negations, LaSQuE underperforms on tasks with explanations that have 'label negations' after training on the relevant training datasets for that complexity. However, on further analysis, we observe that this trend is more pronounced when 'label negations' are paired with multiclass classification tasks. By contrast, LaSQuE improves through training on the relevant training datasets of binary classification tasks with 'label negations' in concepts. Lastly, training progressively on more structurally complex tasks resulting from conjunctions/disjunctions in explanations shows improvements during evaluation across all conjunction types without forgetting how to solve simpler tasks. 
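The curriculum schedule used in these experiments (Appendix B.1) can be summarized with the short sketch below; `train_one_epoch` and `validate` are hypothetical helpers standing in for the actual training and evaluation code.

```python
import copy


def run_curriculum(model, stages, train_one_epoch, validate, epochs_per_stage=20):
    """Train for a fixed number of epochs on each curriculum stage (e.g. binary
    -> multiclass tasks), keep the checkpoint with the best validation score on
    the seen tasks, and initialize the next stage from that checkpoint."""
    best_state = copy.deepcopy(model.state_dict())
    for stage_tasks in stages:                     # ordered from easier to harder
        model.load_state_dict(best_state)          # warm-start from the previous stage
        best_score = float("-inf")
        for _ in range(epochs_per_stage):
            train_one_epoch(model, stage_tasks)
            score = validate(model, stage_tasks)   # validation over the seen tasks
            if score > best_score:
                best_score = score
                best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model
```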
| Model Name | No Conjunctions | Simple Conjunctions | Nested Conjunctions | | | | | | | | | | |--------------------------|-------------------|-----------------------|-----------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | No Quantifiers | No Quantifiers | No Quantifiers | | | | | | | | | | | | NN | CN | LN | BN | NN | CN | LN | BN | NN | CN | LN | BN | | | LaSQuE (predefined init) | 89.67 | 87.09 | 81.40 | 42.87 | 78.05 | 68.41 | 51.01 | 56.84 | 75.38 | 65.82 | 58.70 | 45.14 | | LaSQuE (scratch) | 89.67 | 87.09 | 81.40 | 42.87 | 78.05 | 68.41 | 51.01 | 56.84 | 75.38 | 65.82 | 58.70 | 45.14 | | LaSQuE (ordinal) | 91.22 | 88.00 | 79.79 | 59.97 | 76.00 | 62.91 | 54.05 | 58.02 | 74.72 | 68.66 | 51.62 | 47.24 | | ExEnt | 89.67 | 87.09 | 81.40 | 42.87 | 78.05 | 68.41 | 51.01 | 56.84 | 75.38 | 65.82 | 58.70 | 45.14 | | Model Name | No Conjunctions | Simple Conjunctions | Nested Conjunctions | | | | | | | | | | | With Quantifiers | With Quantifiers | With Quantifiers | | | | | | | | | | | | NN | CN | LN | BN | NN | CN | LN | BN | NN | CN | LN | BN | | | LaSQuE (predefined init) | 64.81 | 56.77 | 63.70 | 56.16 | 56.30 | 55.02 | 49.79 | 44.37 | 54.77 | 46.80 | 44.66 | 40.79 | | LaSQuE (scratch) | 66.15 | 47.48 | 47.01 | 47.33 | 54.00 | 44.76 | 49.90 | 42.20 | 52.23 | 35.46 | 42.70 | 37.15 | | LaSQuE (ordinal) | 65.10 | 45.03 | 56.88 | 56.83 | 56.36 | 50.41 | 53.08 | 51.78 | 53.48 | 49.22 | 44.92 | 43.91 | | ExEnt | 49.41 | 40.02 | 47.88 | 43.44 | 49.17 | 42.87 | 45.84 | 42.33 | 43.34 | 37.67 | 37.41 | 34.80 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✓ A2. Did you discuss any potential risks of your work? Section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3.1 ✓ B1. Did you cite the creators of artifacts you used? Introduction, Section 3.1 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We do not create any new datasets besides what already exists in prior work. The CLUES benchmark we utilize does not have an associated license. Our main baseline, \exent, is available under the MIT License. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3.1 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The dataset that has been used in this work describes "how to solve a classification task?". Hence, the kind of textual data used in our work depends on the classification task provided by prior work in some cases. Further, the paper accompanying the dataset mentions that the dataset does not contain any personal information about the crowdworkers involved during its creation. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 
We report language of datasets used in Appendix A. Note: We exclusively use already available datasets for our experiments in this paper. In the CLUES benchmark, quantifiers are present in 50% explanations as per the benchmark paper. Hence, we did not report any additional analysis on the number. Besides quantifiers, we do not use any other linguistic phenomena or demographic information to improve our classifiers. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Section 3 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 and Appendix B ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We use zero-shot generalization accuracy as a metric. This quantity was computed manually. We use HuggingFace Transformers and Pytorch in our code and cite them appropriately. More in Appendix B ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhang-etal-2023-learned
Learned Adapters Are Better Than Manually Designed Adapters
https://aclanthology.org/2023.findings-acl.468
Recently, a series of works have looked into further improving the adapter-based tuning by manually designing better adapter architectures. Understandably, these manually designed solutions are sub-optimal. In this work, we propose the Learned Adapter framework to automatically learn the optimal adapter architectures for better task adaptation of pre-trained models (PTMs). First, we construct a unified search space for adapter architecture designs. In terms of the optimization method on the search space, we propose a simple-yet-effective method, GDNAS for better architecture optimization. Extensive experiments show that our Learned Adapter framework can outperform the previous parameter-efficient tuning (PETuning) baselines while tuning comparable or fewer parameters. Moreover: (a) the learned adapter architectures are explainable and transferable across tasks. (b) We demonstrate that our architecture search space design is valid.
# Learned Adapters Are Better Than Manually Designed Adapters Yuming Zhang1**, Peng Wang**2∗ , Ming Tan3∗**, Wei Zhu**4∗† 1 College of Computer Science and Software Engineering, Shenzhen University 2 Tomorrow Advancing Life 3 Southern University of Science and Technology 4 East China Normal University ## Abstract ![0_Image_0.Png](0_Image_0.Png) Recently, a series of works have looked into further improving the adapter-based tuning by manually designing better adapter architectures. Understandably, these manually designed solutions are sub-optimal. In this work, we propose the Learned Adapter framework to automatically learn the optimal adapter architectures for better task adaptation of pre-trained models (PTMs). First, we construct a unified search space for adapter architecture designs. In terms of the optimization method on the search space, we propose a simple-yet-effective method, GDNAS, for better architecture optimization. Extensive experiments show that our Learned Adapter framework can outperform the previous parameter-efficient tuning (PETuning) baselines while tuning comparable or fewer parameters. Moreover: (a) the learned adapter architectures are explainable and transferable across tasks. (b) We demonstrate that our architecture search space design is valid. ## 1 Introduction Increasingly large pre-trained models (Han et al., 2021; Devlin et al., 2019; Peters et al., 2018; Liu et al., 2019b; Radford and Narasimhan, 2018; Raffel et al., 2019) built upon the Transformer architecture (Vaswani et al., 2017) have been emerging and achieving the state-of-the-art (SOTA) results on a variety of downstream tasks (Gao et al., 2023; Zhu et al., 2023; Li et al., 2019; Zhu, 2021b; Zuo et al., 2022; Zhang et al., 2022; Guo et al., 2021b; Zhu et al., 2021a; Sun et al., 2020; Zhu et al., 2019). Despite their effectiveness, these large-scale models also bring the curse of prohibitive computation (Zhu et al., 2021c; Zhu, 2021c; Sun et al., 2022) and storage costs during the adaptations to downstream tasks due to the gradient computation of the whole model and the giant size of the fine-tuned checkpoint. ∗Equal contribution. †Corresponding author: michaelwzhu91@gmail.com Recently, parameter efficient tuning (PETuning) has raised much attention in the research field since it can only train a small portion of PTMs and keep the vast majority of parameters frozen, thus alleviating the computation costs during full finetuning. A series of studies (Houlsby et al., 2019; Pfeiffer et al., 2021; Mahabadi et al., 2021; BenZaken et al., 2021; Hu et al., 2021; Guo et al., 2021a; Li and Liang, 2021; Lester et al., 2021) has verified that PETuning can achieve competitive performance compared to conventional finetuning with very few trainable parameters, resulting in a considerable reduction in model adaptation costs. Adapter-based methods (Houlsby et al., 2019; Pfeiffer et al., 2021; Mahabadi et al., 2021; He et al., 2021) inject newly-introduced layers after or around the attention or feed-forward modules of the Transformer block, and yield promising results by fine-tuning a small portion of the PTM's parameters. Recently, a branch of recent research has ad- ![1_image_0.png](1_image_0.png) vanced the understanding of adapter-based tuning more deeply and improved the adapters' architectures to improve parameter efficiency further. Adaptable adapters (Moosavi et al., 2022) propose that adapters at different layers should have different activation functions. 
Thus they fit the rational activation functions to downstream tasks during parameter tuning. AdapterDrop (Rücklé et al., 2020) tries to reduce the number of adapters' parameter number by not inserting adapters on the lower layers. He et al. (2021) bridge connections among different PETuning approaches to form a unified framework and further propose to insert adapters in parallel to the modules of the Transformer block. Jie and Deng (2022) and Sung et al. (2022) propose to add encoding operations between the projection layers of an adapter and achieve better PETuning performances. The above empirical evidence implies that altering the adapters' architecture designs can help to improve the PETuning performances of adapters with even fewer tunable parameters. Predictably, such an optimal architecture is difficult to construct manually and may vary across different PTM backbones and tasks. Therefore, we propose to search for the optimal architecture of adapters automatically. We present the **Learned Adapter** framework to search for the optimal architecture of adapters automatically. We first construct a unified search space (Figure 2) that considers various design choices of adapters, including the activation functions, encoding operations, and how the adapters are connected to the PTM backbone. In terms of the specific methodology for optimization on the search space, make a simple-yet-effective modification to the optimization method in DARTS (Liu et al., 2019a), which is better at identifying the proper components for adapters at different intermediate layers. We conduct extensive experiments to study the effectiveness of our Learned Adapter framework. The experimental results show that with 0.068% parameters, we can recover 99.5% finetuning performances on the GLUE (Wang et al., 2018) benchmark. Moreover, the searched architecture outperforms the manually designed PETuning baselines while tuning fewer parameters. Figure 1 depicts the overall comparison between our Learned Adapter and the baselines. Furthermore, the learned architectures of adapters are transferable across tasks, which significantly strengthens the usefulness of the searched structures. Further experiments demonstrate that our newly proposed search space for adapters is valid. ## 2 Related Work Adapter-based tuning. One of the most important research lines of PETuning is adapter-based tuning. Adapter (Houlsby et al., 2019) inserts adapter modules with bottleneck architecture between every consecutive Transformer (Vaswani et al., 2017) sublayers. AdapterFusion (Pfeiffer et al., 2021) only inserts sequential adapters after the feed-forward module. Adapter-based tuning methods have comparable results with model tuning when only tuning a fraction of the backbone model's parameter number. Due to their amazing results on PETuning, a branch of literature has investigated the architecture of adapters in search of further improvements. He et al. (2021) analyze a wide range of PETuning methods and show that they are essentially equivalent. They also propose the general architecture of PETuning. AdapterDrop (Rücklé et al., 2020) investigates the efficiency of removing adapters from lower layers. Adaptive adapters (Moosavi et al., 2022) investigate the activation functions of adapters and propose to learn the activation functions of adapters via optimizing the parameters of rational functions as a part of the model parameters. 
Compacter (Mahabadi et al., 2021) uses lowrank parameterized hypercomplex multiplication (Le et al., 2021) to compress adapters' tunable parameters. There is also work (Sung et al., 2022; Jie and Deng, 2022) trying to add different encoding operations, like self-attention operations and convolutions between the bottleneck structure of adapters, and achieve better performances. Our work complements this branch of literature by investigating: (a) whether and how the adapter architectures affect the PETuning performances, and whether different layers of PTMs need different adapter architectures; (b) whether we can obtain better adapter architectures via neural architecture search. Other PETuning methods Another main research line of PETuning is the prompt-based tuning that inserts some additional soft prompts into the hidden states instead of injecting new neural modules to PTMs. Prompt tuning (Lester et al., 2021) and P-tuning (Liu et al., 2022) insert a soft prompt to word embeddings only, and can achieve competitive results when applied to supersized PTMs. Prefix-tuning (Li and Liang, 2021) and P-tuning v2 (Liu et al., 2021) insert prompts to every hidden layer of PTM. IDPG (Wu et al., 2022) uses the prompt generator with parameterized hypercomplex multiplication (Le et al., 2021) to generate a soft prompt for every instance. There are also some other popular PETuning methods, such as BitFit (Ben-Zaken et al., 2021) which only tunes the bias terms, LoRA (Hu et al., 2021) which optimizes low-rank decomposition matrices of the weights within self-attention layers. Neural architecture search In the early attempts, neural architecture search (NAS) requires massive computations, like thousands of GPU days (Zoph and Le, 2017; Zoph et al., 2018; Liu et al., 2018). Recently, a particular group of one-shot NAS, led by the seminal work DARTS (Liu et al., 2019a) has attracted much attention. DARTS formulates the search space into a super-network that can adjust itself in a continuous space so that the network and architectural parameters can be optimized alternately (bi-level optimization) using gradient descent. A series of literature try to improve the performance and efficiency of DARTS, such as Xie et al. (2019), Chen et al. (2021), Chu et al. (2021), Nayman et al. (2019). SNAS (Xie et al., 2019) reformulate DARTS as a credit assignment task while maintaining the differentiability. Gao et al. (2020) penalize the entropy of the architecture parameters to encourage discretization on the hyper-network. P-DARTS (Chen et al., 2021) analyze the issues during the DARTS bi-level optimization, and propose a series of modifications. PC-DARTS (Xu et al., 2021) reduces the memory cost during search by sampling a portion of the channels in supernetworks. FairDARTS (Chu et al., 2021) change the softmax operations in DARTS into sigmoid and introduce a zero-one loss to prune the architectural parameters. XNAS (Nayman et al., 2019) dynamically wipes out inferior architectures and enhances superior ones. NAS is widely applied in both computer vision and natural language processing, especially in knowledge distillation (Zhu, 2021a; Zhang et al., 2021). Our work complements the literature by examining the optimization of DARTS on our search space and propose a new training procedure that does not require re-training after discretization. 
search.
## 3 Search Space Of Learned Adapter 3.1 Pilot Experiments And Motivations In this subsection, we conduct a series of experiments on the RTE (Dagan et al., 2005) and MRPC (Dolan and Brockett, 2005) datasets to demonstrate the necessity of investigating the architecture of adapters. The baseline modelMis RoBERTa-large model with an parallel adapter at the feed-forward module (FFN adapter) (He et al., 2021). The backbone model is frozen and we only tune the adapters on downstream tasks. The bottleneck dimension is 32 and the activation function is ReLU. The other experimental settings follows Appendix B. We now consider a series of simple modifications to the baseline model. Modifying the activation function We replace the activation functions of the adapters from ReLU to GeLU, SWISH or Tanh, while keeping the other Model RTE MRPC M 79.1 (0.5) 89.3 (0.4) M*gelu* 79.3 (0.2) 89.5 (0.3) M*swish* 79.6 (0.3) 89.2 (0.4) M*tanh* 79.2 (0.5) 88.9 (0.6) Msa 79.5 (0.3) 89.6 (0.2) M*conv* 79.5 (0.6) 89.4 (0.5) M*attn* 79.0 (0.4) 89.1 (0.3) M*block* 79.6 (0.5) 89.4 (0.3) settings unchanged. The three modified models are denoted as Mgelu, M*swish* and B*tanh*, respectively. Adding encoding operations We add a selfattention operation (Vaswani et al., 2017) or a convolutional operation of kernel size 3 after the downprojection and before the activation function. The two variants of model M are denoted as Msa and M*conv*, respectively. Since extra operations introduce more parameters, we reduce the bottleneck dimension of Msa and M*conv* to 24 to ensure fair comparison. Alternative adapter placements Instead of inserting the adapter around the FFN module of the transformer block, we now consider: (1) M*attn* inserts the adapters at the attention modules (attn adapter); (2) M*block* inserts the adapters around the entire transformer block (block adapter). Note that the setting of block adapters is theoretically supported by the general framework of PETuning in He et al. (2021) but not considered by the previous work. In this work, we will demonstrate the usefulness of block adapters via experiments. Table 1 reports the experimental results of the above models. The evaluation metrics for the RTE and MRPC tasks are introduced by Appendix A.2. We can see that the four simple modifications to the baseline model, Mgelu, Msa, M*conv* and M*block*, can slightly outperform M, demonstrating that the adapter architectures are essential for adapter tuning, and it is promising to design better adapter architectures for better adapter tuning performances. The pilot experiments raise a vital research question: **What are the optimal architectures for** adapters? Obviously, such an optimal architecture will be different across tasks and PTM models and even across different intermediate layers of a PTM, making it impossible for manual designs. We are motivated to investigate the problem of optimizing the architectures of adapters via neural architecture ## 3.2 General Architecture Of Adapters As depicted in Figure 2, we now construct the search space of the Learned Adapter. The adapter is a bottleneck architecture with bottleneck dimension r, consisting of down-projection layer MLPd, an activation function g1, an encoder layer Enc, another activation function g2 and finally a up-projection layer MLPu. Formally, the hidden states hx goes through the adapter and becomes ## H (A) X = Mlpu(G2(Enc(G1(Mlpd(Hx))))). (1) Following He et al. 
(2021), the hidden representation hx will also go through the backbone's certain encoding module BEnc, and the adapted hidden states will become h ′ x = BEnc(hx) + h (A) x . Following (Wu et al., 2022; Mahabadi et al., 2021; Le et al., 2021), we employ the parameterized hypercomplex multiplication (PHM) layer (Le et al., 2021) with parameter n to reduce the parameters of MLPd and MLPu. The PHM layer has a parameter complexity of O(*rd/n*), reducing the parameters of the projection layers by at most 1n . ## 3.3 Search Space We now formally introduce the search space of our Learned Adapter framework. The whole search space contains three types of search cells as shown in Figure 2: Activation Search Cell The Activation Search Cell is designated to choose the proper activation functions g1 and g1 from several candidates. Similar to So et al. (2019), the collection of candidate activation functions considered is: (a) ReLU (Agarap, 2018); (b) **GeLU** (Hendrycks and Gimpel, 2016); (c) **SWISH** (Ramachandran et al., 2017); (d) **Tanh** (Krizhevsky et al., 2012); (e) **NullAct**, which means no activation function and not to make changes to the input. Encoder Search Cell As is shown in Figure 2, different from (Wang et al., 2020; Zhu et al., 2021b), we construct our encoder cell as a simple DAG with a single edge. Our collection of encoder operations consists of the following four groups: (a) 1-d convolutional layers, with stride 1, same padding, output filters equal to the input's dimension, and kernel size equal to 1, 3, 5, or 7 (denoted as **conv_**k, k = 1, 3, 5, 7). (b) Multi-head self-attention (MHA) layers (Vaswani et al., 2017), with head size equal to 2 or 8 (denoted as **mha_k**, k = 2, 8). (c) Skip connection (He et al., 2015), denoted as **skip-connect**. (d) The null encoding operation that multiplies zero tensors to the input (**null**).1 Adapter Placement Search Cell This search cell is designated to decide the placement of the adapter in an intermediate transformer block. We consider three candidate placements shown in Figure 2: 2(a) **FFN adapter**, that is, to insert the adapter in parallel to the feed-forward module; (b) Attn adapter, parallel to the self-attention module; (c) **Block adapter** inserts the adapter in parallel to the whole transformer block. This placement option is supported by the theoretical analysis of He et al. (2021) but has not been considered by the literature. In the experiments, we will show that including the above three choices for adapter placements are necessary. Note that the above three search cells are singleedge DAGs. Following Pham et al. (2018); Wang et al. (2020); Zhu et al. (2021b); Zhu (2021a); Zhu et al. (2021d), we consider the macro search space, that is, different adapter architectures are learned for different intermediate layers. Intuitively, the macro search space allows for idiosyncratic architectures for different intermediate layers, leading to easier model adaptation. Despite the simple structures of search cell DAGs compared to the general NAS literature, our macro search space can result in 6.38e+90 combinations of different adapter architectures across different intermediate layers of the PTM backbones. Note that our search space contains the adapter architectures from Sung et al. (2022); Jie and Deng (2022) as special cases. 
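To make the search space concrete, the following is a minimal PyTorch sketch of one candidate adapter instantiating Eq. (1). Plain linear layers stand in for the PHM projections, and the default activation/encoder choices are merely one configuration the search could return for a given layer.

```python
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """One candidate adapter following Eq. (1): down-projection, activation g1,
    encoder operation, activation g2, up-projection.  Plain nn.Linear layers
    stand in for the PHM projections, and the defaults (GeLU, conv_3, NullAct)
    are only one configuration the search could return for a given layer."""

    def __init__(self, d_model=1024, r=32, g1=None, g2=None, encoder=None):
        super().__init__()
        self.down = nn.Linear(d_model, r)                # MLP_d (PHM layer in the paper)
        self.g1 = g1 or nn.GELU()                        # searched activation g1
        self.enc = encoder or nn.Conv1d(r, r, kernel_size=3, padding="same")
        self.g2 = g2 or nn.Identity()                    # NullAct == identity
        self.up = nn.Linear(r, d_model)                  # MLP_u (PHM layer in the paper)

    def forward(self, h):                                # h: (batch, seq_len, d_model)
        z = self.g1(self.down(h))
        z = self.enc(z.transpose(1, 2)).transpose(1, 2)  # conv-style ops expect (B, C, L)
        return self.up(self.g2(z))


# Parallel insertion at the searched placement (FFN, attention, or the whole
# transformer block):  h_out = backbone_module(h) + adapter(h)
```

In the full framework, each intermediate layer receives its own combination of g1, g2, Enc, and placement, which the search method in the next section optimizes.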
## 4 Search Method 4.1 Preliminaries On Darts Assume there is a pre-defined space of operations denoted by O, where each element, o(·), denotes a neural network operation, like convolutional operation, self-attention, activation, etc. DARTS (Liu et al., 2019a) initialize a hypernetwork in which each block is a search cell, that is, a fully connected directed acyclic graph (DAG) with N nodes. Let (*i, j*) denote a pair of 1Choosing this operation means the model decides to discard an operation in the encoder, thus making the encoder a lighter architecture. 2Through our initial experiments, we find no improvements to include other standard adapter placement options, like Houlsby et al. (2019); Pfeiffer et al. (2021) nodes in the DAG. The core idea of DARTS is to use a weighted sum to include all |O| operations in O, fi,j (zi) = Po∈O a o i,j · o(zi), where a o i,j = exp α o i,j Po ′∈O exp α o ′ i,j , zi denotes the output of the i-th node, and α o i,j is the architectural parameters that represents the weight (or the importance score) of o(·) in edge (*i, j*). The output of a node is the sum of all input flow, i.e., zj =Pi<j fi,j (zi). The output of the entire cell is formed by summing the last two nodes. This design makes the entire framework differentiable to both layer weights and architectural parameters α o i,j so that it is possible to perform architecture search in an end-to-end fashion. After the search process is completed, the discretization procedure extracts the final sub-network by selecting the best operation on each edge and dropping the lower-score operations. And the final sub-network will train on the original train set with randomly initialized parameters. ## 4.2 Discussion On The Search Method The standard optimization method for the above framework is the bi-level optimization proposed in DARTS (Liu et al., 2019a). However, there are recent works arguing that the single-level optimization method could also work for the DARTS framework. As pointed out by Bi et al. (2019) and Bi et al. (2020), bi-level optimization suffers considerable inaccuracy of gradient estimation and the potential instability can increase with the complexity of the search space. And Bi et al. (2020) conduct experiments to demonstrate that one-level optimization performs comparably with bi-level optimization but with better efficiency. Their experiments are conducted mainly on computer vision benchmarks like CIFAR-10 (Krizhevsky et al., 2012). In this work, we would like to investigate which optimization method is better under our framework. Note that the original DARTS requires one to re-train the learned networks from scratch after the search procedure, which definitely introduce additional computation costs. In this work, we propose to gradually discretize the hyper-network and obtain a sub-network without re-training. We will refer to this method as gradually discretizing neural architecture search (GDNAS). We first train the complete hyper-network M for K1 epochs. Then we select a edge e in the search space to discretize (for example, the edge in the encoder cell). Discretization simply means selecting the operation o e ∗ with highest architectural parameter, and drop the other operations. Now we have obtain a new reduced hyper-network M. The discretized edge may cause the performance of the hyper-network to drop significantly, so we further finetune the hyper-network M for K2 epochs. 
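The weighted-sum relaxation of Section 4.1 and the gradual discretization step described above can be realized as in the minimal sketch below; class and variable names, as well as the initialization of the architectural parameters, are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedEdge(nn.Module):
    """A single-edge search cell: a DARTS-style softmax-weighted sum of the
    candidate operations, which GDNAS later collapses to the argmax operation
    without retraining the resulting sub-network."""

    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)     # e.g. the activation or encoder candidates
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(candidate_ops)))
        self.chosen = None                          # set once this edge is discretized

    def forward(self, x):
        if self.chosen is not None:
            return self.chosen(x)
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

    def discretize(self):
        """Keep only the operation with the largest architectural weight."""
        self.chosen = self.ops[int(self.alpha.argmax())]


# GDNAS outer loop (sketch): train the full hyper-network for K1 epochs, then
# iterate over edges, calling .discretize() on one edge at a time and
# finetuning the reduced hyper-network for K2 further epochs.
```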
In addition to the advantage of not having to re-train the learned network, GDNAS retains the knowledge in the hyper-network and obtains performance gains over the re-trained sub-network. This is analogous to the model pruning literature, where a network pruned from a larger one is usually better than the same network trained from scratch (Liang et al., 2021).

**Algorithm 1: GDNAS**
Input: a hyper-network M and the set of all edges E of M
Output: the set of selected operations $\{o^{*}_{e}\}_{e \in E}$
Data: training set D_train, a batch of validation data B_val
1. Train the hyper-network M on the training set D_train for K1 epochs;
2. for each edge e in E do:
3.   Select the best operation $o^{*}_{e} \leftarrow \arg\max_{o} \alpha^{o}_{e}$;
4.   Discretize edge e of the hyper-network M by keeping only $o^{*}_{e}$;
5.   Further train the hyper-network M on D_train for K2 epochs.

## 5 Experiments

## 5.1 Evaluation Datasets

We evaluate the performance of the methods on the GLUE (Wang et al., 2018) benchmark. These benchmarks cover multiple tasks: paraphrase detection (MRPC, QQP), sentiment classification (SST-2), natural language inference (MNLI, RTE, QNLI), and linguistic acceptability (CoLA).3 Since the original test sets of the GLUE benchmark are not publicly available, we follow Zhang et al. (2020) and Mahabadi et al. (2021) to construct the train/dev/test splits as follows to ensure a fair comparison: (a) for datasets with fewer than 10k samples (RTE, MRPC, STS-B, CoLA), we divide the original validation set in half, using one half for validation and the other for testing; (b) for larger datasets, we split 1k samples from the training set as the development set, and use the original development set as the test set. The detailed statistics and evaluation metrics of the GLUE benchmark are presented in Table 7 of Appendix A.

3 Following Devlin et al. (2019) and Raffel et al. (2019), as a common practice, we do not experiment with the WNLI task (Levesque et al., 2011) due to its adversarial nature with respect to the training set.

## 5.2 Experiment Settings

We run all experiments on NVIDIA V100 32GB GPUs. We mainly evaluate our method on the GLUE benchmark with the RoBERTa-large (Liu et al., 2019c) backbone model. We also evaluate our framework with the DeBERTa-large (He et al., 2020) and GPT2-large (Radford et al., 2019) backbones. We use HuggingFace Transformers (Wolf et al., 2020) to implement all the methods. Unless otherwise specified, GDNAS adopts the bi-level optimization method of DARTS. For GDNAS's discretization procedure, we set K1 = 5 and K2 = 0.5 on large datasets (SST-2, QNLI, QQP and MNLI), and K1 = 20 and K2 = 2 on low-resource datasets. The batch size is set to 128 for datasets with more than 10k training samples and 32 otherwise. For Learned Adapter, we set the bottleneck dimension r to 32 and select at most one adapter at each transformer layer. For the PHM layers, we use the PyTorch implementation of Le et al. (2021) and set n to 8. We run each task under 5 different random seeds and report the average performance and standard deviation. More details of the experimental settings are provided in Appendix B.

## 5.3 Baselines

We compare our Learned Adapter framework with the current SOTA baseline methods.

**Fine-tune** The traditional fine-tuning method that trains all parameters in the PTM backbone.

**Adapter-based tuning** For adapter-based tuning methods, we compare with: (1) Adapter (Houlsby et al., 2019); (2) Compacter (Mahabadi et al., 2021); (3) Parallel Adapter, proposed by He et al. (2021), added on the FFN module; (4) LST (Sung et al., 2022).
We re-implement Parallel Adapter with PHM projection layers (n = 8). Prompt-based tuning For prompt-based tuning methods, we compare with (1) Prompt Tuning (Lester et al., 2021), (2) P-tuning v2 (Liu et al., 2021). The number of prompt tokens in these methods is set to 20. | Method | Tunable | CoLA | SST-2 | MRPC | QQP | STS-B | MNLI | QNLI | RTE | Avg | |----------------------|-----------|------------|------------|------------|------------|------------|------------|------------|------------|-------| | Params | (mcc) | (acc) | (acc-f1) | (acc-f1) | (corr) | (acc) | (acc) | (acc) | | | | Baselines | | | | | | | | | | | | RoBERTa-large | 355M | 65.3 | 95.4 | 91.5 | 90.3 | 91.8 | 89.8 | 94.5 | 80.1 | 87.3 | | Prompt Tuning | 21K | 54.6 (2.5) | 90.9 (0.5) | 74.8 (1.3) | 87.4 (0.5) | 90.1 (0.3) | 83.4 (0.2) | 92.4 (0.3) | 68.7 (1.9) | 80.3 | | P-tuning v2 | 985K | 57.3 (2.1) | 92.3 (0.3) | 86.9 (1.2) | 88.1 (0.4) | 90.4 (0.1) | 87.3 (0.2) | 92.9 (0.4) | 77.2 (2.1) | 84.1 | | BitFit | 273K | 59.2 (0.9) | 94.1 (0.3) | 88.6 (0.8) | 88.5 (0.6) | 91.1 (0.3) | 87.3 (0.1) | 93.2 (0.2) | 78.6 (1.4) | 85.1 | | LoRA | 778K | 59.7 (1.4) | 93.6 (0.1) | 88.9 (0.7) | 88.3 (0.4) | 90.9 (0.2) | 87.9 (0.2) | 93.2 (0.1) | 78.8 (1.3) | 85.2 | | UNIPELT | 1.4M | 61.5 (1.7) | 93.6 (0.2) | 89.2 (0.8) | 88.7 (0.4) | 91.0 (0.5) | 87.5 (0.2) | 92.9 (0.1) | 79.1 (0.9) | 85.4 | | Adapter | 1.6M | 63.1 (1.4) | 93.8 (0.1) | 87.5 (0.8) | 88.7 (0.4) | 90.9 (0.4) | 88.5 (0.2) | 93.2 (0.1) | 78.9 (1.3) | 85.6 | | LST | 1.7M | 63.6 (1.5) | 93.9 (0.2) | 89.2 (0.7) | 88.7 (0.2) | 91.2 (0.4) | 88.3 (0.1) | 93.0 (0.2) | 79.2 (0.9) | 85.9 | | Parallel Adapters | 279K | 63.8 (1.5) | 94.2 (0.3) | 89.3 (0.5) | 88.9 (0.6) | 91.1 (0.4) | 88.3 (0.1) | 93.2 (0.1) | 79.1 (0.5) | 86.0 | | Compacter | 279K | 63.7 (0.9) | 94.2 (0.4) | 89.1 (0.7) | 88.6 (0.3) | 90.8 (0.2) | 88.2 (0.2) | 92.8 (0.2) | 78.7 (0.9) | 85.8 | | Our proposed methods | | | | | | | | | | | | Learned Adapter | 294K | 64.3 (0.9) | 94.9 (0.3) | 89.8 (0.9) | 89.2 (0.3) | 91.3 (0.2) | 88.6 (0.2) | 93.5 (0.2) | 80.4 (1.3) | 86.5 | Other PETuning methods We also compare: (1) BitFit (Ben-Zaken et al., 2021); (3) LoRA (Hu et al., 2021); (4) UNIPET (Mao et al., 2021) combines different types of PETuning methods in an automatical manner. We implement Aadpter, BitFit, and LoRA using the OpenDelta4library. Other baselines are implemented using their open-sourced codes with their default settings. For a fair comparison, we do not use supplementary training like Wu et al. (2022) to enhance performance. ## 5.4 Results On The Glue Benchmark Table 2 shows the results on GLUE with RoBERTa-large. Our Learned Adapter framework, outperforms the previous PETuning methods and notably preserves 99.4% performance of the fullmodel fine-tuning method while only tuning 240K to 300K parameters. We can observe from Table 2 that: (a) Note that our Learned Adapter framework obtains further improvements by automatically designing adapter architectures for different intermediate layers of the PTM. (b) Note that although we add encoding operations in adapters, the total tunable parameters of the Learned Adapter in the macro setting are fewer than Compacter since our framework can automatically drop adapters on certain layers when necessary. 
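For reference, the "Tunable Params" column above counts only the parameters that remain trainable once the pretrained backbone is frozen. A minimal sketch of that bookkeeping follows; the helper names are ours and the keyword-based filter is a simplifying assumption, not the paper's code.

```python
import torch.nn as nn


def freeze_backbone(model: nn.Module, trainable_keywords=("adapter",)) -> None:
    """Freeze all backbone weights; leave only modules matching the keywords trainable."""
    for name, param in model.named_parameters():
        param.requires_grad = any(key in name for key in trainable_keywords)


def count_tunable_params(model: nn.Module) -> int:
    """Number of parameters that will actually be updated during tuning."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```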
## 5.5 Further Analysis Explanations of the searched architectures To understand the searched adapter architectures under our Learned Adapter framework, we present the learned adapter architectures on the RTE and SST-2 tasks on Table 9 and 10 in Appendix D, respectively. From the learned adapter architectures, we can observe that: (a) The adapter architecture varies for different layers, showing that different layers require different adapter architectures. (b) On each task, Learned Adapter chooses the **null** encoding operation on 3-5 intermediate layers, meaning to drop the adapters on these layers. (c) Regarding the adapter placement choices, we find that on each task, all three placement candidates, FFN adapter, Attn adapter, and Block adapter, are selected. This observation demonstrates that introducing block adapters into our search space is necessary. (d) Most adapters select the convolutional operations, and multi-head self-attention operations tend to occur in adapters of deeper layers. (e) around half of the learned adapters choose the **NullAct** for the second activation function g2. Furthermore, we observe that there are adapters on the deeper transformer layers that requires no activation function but an encoder operation, demonstrating novel design patterns for adapters. ## Exploring The Limit Of Parameter Efficiency To explore the limit of parameter efficiency, we train the Learned Adapter, and Compacter (Mahabadi et al., 2021) with different rank parameters n ∈ {1, 2, 4, 8, 16, 32}. Note that in the main experiments, we set n equal to 4. With larger n, the parameters of adapters will increase proportionally. In Figure 3, we demonstrate the results of the RTE and SST-2 tasks. We can see that the advantages of our Learned Adapter framework become more prominent with lower tunable parameter budgets. The results demonstrate that our framework can 4https://github.com/thunlp/OpenDelta ![7_image_1.png](7_image_1.png) ![7_image_2.png](7_image_2.png) RTE SST-2 RTE **80.4** (1.3) 94.6 (0.1) SST-2 80.3 (1.4) **94.9** (0.3) | Search space | RTE | SST-2 | |------------------|------------|------------| | (acc) | (acc) | | | S | 80.4 (1.3) | 94.9 (0.3) | | S1 | 80.1 (1.5) | 94.6 (0.3) | | S2 | 79.8 (1.0) | 94.4 (0.4) | | Parallel Adapter | 79.1 (0.5) | 94.2 (0.3) | effectively deliver the most proper architectures under the given parameter budgets and boost the performance of adapter-based tuning. Architectures' Transferability We now evaluate the transferability of the searched structures by the Learned Adapter.The RTE and SST-2 datasets are used as source and target datasets. We search the adapter architectures on the source dataset and train the searched architectures from scratch on the target task, and report the average and standard deviation of scores over 5 random seeds on table 3. We can see from Table 3 that the searched architectures are highly transferable. The transferred architectures can already achieve comparable or better performances than most baseline models (Table 2). The transferability guarantees the reusability of the searched adapter architectures. 
Ablation studies on the search space We now conduct an ablation study of our search space by reducing our search space S to a singleton step-bystep : (a) reduce the activation cells by only keeping the **ReLU** activation for g1 and the **NullAct** for g2 (S1); (b) further reduce the encoder cell to only include **skip-connect** (S2); (c) further reduce the adapter placement cell to only include FFN adapter, and now the search space only contains Parallel Adapter (He et al., 2021). Table 4 reports the search results on different search spaces, showing that that dropping any components of the whole search space results in performance losses. The results demonstrate that each search cell in our search space design is necessary and beneficial. Working with other PTM backbones To verify the general applicability of our Learned Adapter | Method | Tunable | RTE | SST-2 | |------------------------|-----------|------------|------------| | Params | (acc) | (acc) | | | DeBERTa-large backbone | | | | | Fine-tuning | 406M | 82.1 (1.1) | 95.7 (0.2) | | Adapters | 1.6M | 80.7 (1.3) | 93.8 (0.1) | | Parallel Adapters | 279K | 81.5 (1.2) | 93.9 (0.3) | | Learned Adapter | 245K | 82.0 (0.9) | 94.5 (0.3) | | GPT2-large backbone | | | | | Fine-tuning | 774M | 79.5 (0.8) | 95.5 (0.1) | | Adapters | 2.7M | 78.1 (0.9) | 93.9 (0.1) | | Parallel Adapters | 462K | 78.5 (1.2) | 93.8 (0.1) | | Learned Adapter | 482K | 79.1 (0.6) | 94.2 (0.2) | ![7_image_0.png](7_image_0.png) framework, we also conduct experiments on two other widely used PTM backbones, DeBERTalarge (He et al., 2020), and GPT2-large (Radford et al., 2019). The results are shown in Table 5. Our Learned Adapter successfully outperforms the adapter-based tuning baselines on both pre-trained backbones. This result enhances the reliability of our framework. We now validate our Learned Adapter framework on other pre-trained backbone: DeBERTalarge (He et al., 2020) and GPT2-large (Radford et al., 2019). The results are presented in Table 5. ## 5.6 Discussions On The Search Method Search efficiency of GDNAS We use the RTE task to demonstrate the search efficiency. Running the RTE task with DARTS takes 1.5h (70.5min for bi-level optimization for 25 epochs and 21.6min for re-training with 25 epochs). Since GDNAS does not require re-training, it requires 1.2h (73.3min for training the hyper-network for k1+3∗K2 = 26 epochs). Our method consumes around three times the training time of Parallel Adapter (He et al., 2021), which is affordable compared to manually designing different architectures and running numerous evaluations. Ablation study of search methods We now run Learned Adapter with GDNAS with singlelevel optimization, the original DARTS (Liu et al., 2019a) and ENAS (Cai et al., 2018). The results are shown in Table 6. The results demonstrate that our GDNAS are effective in discovering better adapter architectures. In addition, the results demonstrate that bi-level optimization obtains slightly better results. Performance on a NAS benchmark To further ![8_image_0.png](8_image_0.png) | Search | RTE | SST-2 | |----------------------|------------|------------| | method | (acc) | (acc) | | GDNAS | 80.4 (1.3) | 94.9 (0.3) | | GDNAS (single-level) | 80.2 (0.9) | 94.7 (0.2) | | DARTS | 79.8 (1.2) | 94.5 (0.5) | | ENAS | 80.3 (1.0) | 94.3 (0.4) | validate our GDNAS method, we conduct experiments on the widely studied neural architecture search benchmark dataset, CIFAR-10 (Krizhevsky, 2009). The results are reported in Table 8 of Appendix C. 
Our GDNAS method achieves 2.52% test error, which manageable search cost of 0.6 GPU days. ## 6 Conclusion In this work, we propose the Learned Adapter framework, which automatically optimizes the adapter architectures. First, we design a unified search space for adapters, taking into account the recent works of manual adapter designs. Second, in light of the issues in the DARTS method, we propose a novel GDNAS method that can deliver better adapter architectures and requires no re-training of the learned adapter architectures. We run extensive experiments and analyses on the GLUE benchmark, demonstrating that our Learned Adapter framework can achieve better tuning performances than the baselines while maintaining parameter efficiency. ## Limitations We showed that our proposed method can greatly improve the performance of parameter efficient tuning on diverse NLU tasks and three different pretrained models (i.e., RoBERTa-large, DeBERTalarge and GPT2-large). However, we acknowledge the following limitations: (a) the more super-sized pretrained models with tens of billions of or more parameters were not studied due to limited computation resources. (b) Other tasks in natural language processing, like the text generation tasks, were also not considered. But our framework can be easily transferred to other backbone architectures and different types of tasks. It would be of interest to investigate if the superiority of our method holds for other backbone models and types of tasks. And we will explore it in future work. ## Ethics Statement The finding and proposed method aims to improve the adapter based tuning in terms of tuning parameters and performances. The used datasets are widely used in previous work and, to our knowledge, do not have any attached privacy or ethical issues. ## References Abien Fred Agarap. 2018. Deep learning using rectified linear units (relu). *ArXiv*, abs/1803.08375. Elad Ben-Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked languagemodels. *ArXiv*, abs/2106.10199. Kaifeng Bi, Changping Hu, Lingxi Xie, Xin Chen, Longhui Wei, and Qi Tian. 2019. Stabilizing DARTS with Amended Gradient Estimation on Architectural Parameters. *arXiv e-prints*, page arXiv:1910.11831. Kaifeng Bi, Lingxi Xie, Xin Chen, Longhui Wei, and Qi Tian. 2020. Gold-nas: Gradual, one-level, differentiable. *ArXiv*, abs/2007.03331. Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Efficient architecture search by network transformation. In *AAAI*. Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. 2021. Progressive darts: Bridging the optimization gap for nas in the wild. *ArXiv*, abs/1912.10952. Xiangxiang Chu, Bo Zhang, Ruijun Xu, and Jixiang Li. 2021. Fairnas: Rethinking evaluation fairness of weight sharing neural architecture search. *2021* IEEE/CVF International Conference on Computer Vision (ICCV), pages 12219–12228. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. pages 177–190. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. William B. Dolan and Chris Brockett. 2005. 
Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Xiangxiang Gao, Wei Zhu, Jiasheng Gao, and Congrui Yin. 2023. F-pabee: Flexible-patience-based early exiting for single-label and multi-label text classification tasks. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE. Yuan Gao, Haoping Bai, Zequn Jie, Jiayi Ma, Kui Jia, and Wei Liu. 2020. Mtl-nas: Task-agnostic neural architecture search towards general-purpose multitask learning. *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 11540–11549. Demi Guo, Alexander Rush, and Yoon Kim. 2021a. Parameter-efficient transfer learning with diff pruning. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4884–4896, Online. Association for Computational Linguistics. Zhao Guo, Yuan Ni, Keqiang Wang, Wei Zhu, and Guotong Xie. 2021b. Global attention decoder for Chinese spelling error correction. In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 1419–1428, Online. Association for Computational Linguistics. Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji rong Wen, Jinhui Yuan, Wayne Xin Zhao, and Jun Zhu. 2021. Pre-trained models: Past, present and future. *ArXiv*, abs/2106.07139. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2021. Towards a unified view of parameter-efficient transfer learning. ArXiv, abs/2110.04366. Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decodingenhanced bert with disentangled attention. *ArXiv*, abs/2006.03654. Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). *arXiv: Learning*. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pages 2790–2799. PMLR. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. Shibo Jie and Zhifang Deng. 2022. Convolutional bypasses are better vision transformer adapters. *ArXiv*, abs/2207.07039. A. Krizhevsky. 2009. Learning multiple layers of features from tiny images. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. *Communications of the ACM*, 60:84 - 90. Tuan Le, Marco Bertolini, Frank No'e, and Djork-Arné Clevert. 2021. Parameterized hypercomplex graph neural networks for graph classification. In *International Conference on Artificial Neural Networks*. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. *arXiv preprint arXiv:2104.08691*. Hector J. Levesque, Ernest Davis, and L. Morgenstern. 2011. 
The winograd schema challenge. In *International Conference on Principles of Knowledge Representation and Reasoning*. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. *arXiv* preprint arXiv:2101.00190. Xiepeng Li, Zhexi Zhang, Wei Zhu, Zheng Li, Yuan Ni, Peng Gao, Junchi Yan, and Guotong Xie. 2019. Pingan smart health and SJTU at COIN - shared task: utilizing pre-trained language models and commonsense knowledge in machine reading tasks. In *Proceedings of the First Workshop on Commonsense* Inference in Natural Language Processing, pages 93–98, Hong Kong, China. Association for Computational Linguistics. Tailin Liang, John Glossner, Lei Wang, Shaobo Shi, and Xiaotong Zhang. 2021. Pruning and Quantization for Deep Neural Network Acceleration: A Survey. arXiv e-prints, page arXiv:2101.09671. Chenxi Liu, Barret Zoph, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Loddon Yuille, Jonathan Huang, and Kevin P. Murphy. 2018. Progressive neural architecture search. In *ECCV*. Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2019a. Darts: Differentiable architecture search. ArXiv, abs/1806.09055. Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. *ArXiv*, abs/2110.07602. Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022. Ptuning: Prompt tuning can be comparable to finetuning across scales and tasks. In Annual Meeting of the Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019c. Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692. Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient low-rank hypercomplex adapter layers. In *NeurIPS*. Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Wen tau Yih, and Madian Khabsa. 2021. Unipelt: A unified framework for parameter-efficient language model tuning. *ArXiv*, abs/2110.07577. Nafise Sadat Moosavi, Quentin Delfosse, Kristian Kersting, and Iryna Gurevych. 2022. Adaptable adapters. In *North American Chapter of the Association for* Computational Linguistics. Niv Nayman, Asaf Noy, T. Ridnik, Itamar Friedman, Rong Jin, and Lihi Zelnik-Manor. 2019. Xnas: Neural architecture search with expert advice. *ArXiv*, abs/1906.08031. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *North American Chapter of the Association for Computational Linguistics*. Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487–503, Online. Association for Computational Linguistics. Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. 2018. Efficient neural architecture search via parameter sharing. In *ICML*. Alec Radford and Karthik Narasimhan. 2018. 
Improving language understanding by generative pretraining. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *ArXiv*, abs/1910.10683. Prajit Ramachandran, Barret Zoph, and Quoc V. Le. 2017. Swish: a self-gated activation function. arXiv: Neural and Evolutionary Computing. Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. 2018. Regularized evolution for image classifier architecture search. In AAAI Conference on Artificial Intelligence. Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2020. Adapterdrop: On the efficiency of adapters in transformers. In *Conference on Empirical Methods in Natural Language Processing*. David R. So, Chen Liang, and Quoc V. Le. 2019. The evolved transformer. *ArXiv*, abs/1901.11117. Haixia Sun, Jin Xiao, Wei Zhu, Yilong He, Sheng Zhang, Xiaowei Xu, Li Hou, Jiao Li, Yuan Ni, and Guotong Xie. 2020. Medical knowledge graph to enhance fraud, waste, and abuse detection on claim data: Model development and performance evaluation. *JMIR Med Inform*, 8(7):e17653. Tianxiang Sun, Xiangyang Liu, Wei Zhu, Zhichao Geng, Lingling Wu, Yilong He, Yuan Ni, Guotong Xie, Xuanjing Huang, and Xipeng Qiu. 2022. A simple hash-based early exiting approach for language understanding and generation. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 2409–2421, Dublin, Ireland. Association for Computational Linguistics. Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. 2022. Lst: Ladder side-tuning for parameter and memory efficient transfer learning. *ArXiv*, abs/2206.06522. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *ArXiv*, abs/1706.03762. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In *BlackboxNLP@EMNLP*. Yujing Wang, Yaming Yang, Yiren Chen, Jing Bai, Ce Zhang, Guinan Su, Xiaoyu Kou, Yunhai Tong, Mao Yang, and Lidong Zhou. 2020. Textnas: A neural architecture search space tailored for text representation. In *AAAI*. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. G. Vinod Vydiswaran, and Hao Ma. 2022. Idpg: An instance-dependent prompt generation method. In *North American Chapter of the* Association for Computational Linguistics. Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. 2018. Snas: Stochastic neural architecture search. ArXiv, abs/1812.09926. Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. 2019. 
Snas: Stochastic neural architecture search. ArXiv, abs/1812.09926. Yuhui Xu, Lingxi Xie, Wenrui Dai, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Hongkai Xiong, and Qi Tian. 2021. Partially-connected neural architecture search for reduced computational redundancy. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 43:2953–2970. Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, and Frank Hutter. 2019. Understanding and robustifying differentiable architecture search. *ArXiv*, abs/1909.09656. Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, and Yoav Artzi. 2020. Revisiting few-sample bert fine-tuning. *ArXiv*, abs/2006.05987. Zhen Zhang, Wei Zhu, Jinfan Zhang, Peng Wang, Rize Jin, and Tae-Sun Chung. 2022. PCEE-BERT: Accelerating BERT inference via patient and confident early exiting. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 327–338, Seattle, United States. Association for Computational Linguistics. Zhexi Zhang, Wei Zhu, Junchi Yan, Peng Gao, and Guotong Xie. 2021. Automatic student network search for knowledge distillation. In *2020 25th International Conference on Pattern Recognition (ICPR)*, pages 2446–2453. IEEE. Hongpeng Zhou, Minghao Yang, Jun Wang, and Wei Pan. 2019. Bayesnas: A bayesian approach for neural architecture search. *ArXiv*, abs/1905.04919. Wei Zhu. 2021a. Autonlu: Architecture search for sentence and cross-sentence attention modeling with redesigned search space. In Natural Language Processing and Chinese Computing: 10th CCF International Conference, NLPCC 2021, Qingdao, China, October 13–17, 2021, Proceedings, Part I 10, pages 155–168. Springer. Wei Zhu. 2021b. AutoRC: Improving BERT based relation classification models via architecture search. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 33– 43, Online. Association for Computational Linguistics. Wei Zhu. 2021c. LeeBERT: Learned early exit for BERT with cross-level optimization. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2968–2980, Online. Association for Computational Linguistics. Wei Zhu, Yilong He, Ling Chai, Yunxiao Fan, Yuan Ni, Guotong Xie, and Xiaoling Wang. 2021a. paht_nlp @ MEDIQA 2021: Multi-grained query focused multi-answer summarization. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 96–102, Online. Association for Computational Linguistics. Wei Zhu, Yuan Ni, Xiaoling Wang, and Guotong Xie. 2021b. Discovering better model architectures for medical query understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers, pages 230–237, Online. Association for Computational Linguistics. Wei Zhu, Peng Wang, Xiaoling Wang, Yuan Ni, and Guotong Xie. 2023. Acf: Aligned contrastive finetuning for language and vision tasks. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. 
Category Datasets |train| |dev| |test| |Y| Type Labels Single-sentence SST-2 66349 1000 872 2 sentiment positive, negative CoLA 8551 521 522 2 linguistic acceptability acceptable, not acceptable | Sentence-pair | |-----------------| Wei Zhu, Xiaoling Wang, Yuan Ni, and Guo Tong Xie. 2021c. Gaml-bert: Improving bert early exiting by gradient aligned mutual learning. In *EMNLP*. Wei Zhu, Xiaoling Wang, Yuan Ni, and Guotong Xie. 2021d. Autotrans: Automating transformer design via reinforced architecture search. In *Natural Language Processing and Chinese Computing*, pages 169–182, Cham. Springer International Publishing. | CoLA | 8551 | 521 | 522 | 2 | linguistic acceptability | acceptable, not acceptable | |--------|--------|-------|-------|-----|----------------------------|------------------------------------| | MNLI | 391702 | 1000 | 19647 | 3 | NLI | entailment, neutral, contradiction | | MRPC | 2668 | 1000 | 408 | 2 | paraphrase | equivalent, not equivalent | | QNLI | 103743 | 1000 | 5463 | 2 | NLI | entailment, not entailment | | QQP | 362846 | 1000 | 40430 | 2 | paraphrase | equivalent, not equivalent | | RTE | 1490 | 1000 | 277 | 2 | NLI | entailment, not entailment | Wei Zhu, Xiaofeng Zhou, Keqiang Wang, Xun Luo, Xiepeng Li, Yuan Ni, and Guotong Xie. 2019. PANLP at MEDIQA 2019: Pre-trained language models, transfer learning and knowledge distillation. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 380–388, Florence, Italy. Association for Computational Linguistics. Barret Zoph and Quoc V. Le. 2017. Neural architecture search with reinforcement learning. *ArXiv*, abs/1611.01578. Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. 2017. Learning transferable architectures for scalable image recognition. *2018* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8697–8710. Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. 2018. Learning transferable architectures for scalable image recognition. *2018* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8697–8710. Yuhui Zuo, Wei Zhu, and Guoyong GUET Cai. 2022. Continually detection, rapidly react: Unseen rumors detection based on continual prompt-tuning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3029–3041, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. ## A Datasets And Evaluation Metrics A.1 Dataset Splits The detailed statistics of the GLUE benchmark is presented in Table 7. ## A.2 Evaluation Metrics For MNLI, we report the average of the accuracy scores on the matched and mis-matched test set. For MRPC and QQP, we report acc-f1, which is the average of accuracy and F1 scores. For STSB, we report corr, which denotes the average of the Pearson and Spearman correlation coefficients. For CoLA, we report mcc, which is the Matthews correlation. For all other tasks, we report accuracy. ## B Appendix For Experimental Settings Details of experiments We run our methods and baseline method on the GLUE (Wang et al., 2018) following the previous works. All datasets are downloaded via the HuggingFace Datasets5library. Since the test split of these tasks are invisible to the researchers, we randomly split off 1k samples from the training set as validation set for large datasets(QQP, QNLI, SST2, MNLI), and use the remaining as the training set, and use the original validation set as the test set. 
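As a concrete reference for the composite metrics defined in Appendix A.2 above, the small sketch below shows how they can be computed with scikit-learn and SciPy. It is our illustration of the stated definitions, not the authors' evaluation code.

```python
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef


def acc_f1(y_true, y_pred):
    """MRPC / QQP: average of accuracy and F1."""
    return 0.5 * (accuracy_score(y_true, y_pred) + f1_score(y_true, y_pred))


def corr(y_true, y_pred):
    """STS-B: average of the Pearson and Spearman correlation coefficients."""
    return 0.5 * (pearsonr(y_true, y_pred)[0] + spearmanr(y_true, y_pred)[0])


def mcc(y_true, y_pred):
    """CoLA: Matthews correlation coefficient."""
    return matthews_corrcoef(y_true, y_pred)


def mnli_acc(acc_matched, acc_mismatched):
    """MNLI: average accuracy over the matched and mismatched test sets."""
    return 0.5 * (acc_matched + acc_mismatched)
```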
For other datasets, we randomly split the original validation set in half as the validation set and the test set, and use the original train set as our train set. The same dataset is split differently with different random seeds. For each experiment setting, we repeat the experiment with 5 seeds, and report the average score and standard deviation. In all experiments, the maximum sequence length is 128 for the tasks. We mainly use the RoBERTa-large model (355M parameters) as the backbone model, but we also adopt the GPT2-large and DeBERTa-large the backbone models for ablation studies. We freeze the pretrained parameters in all experiments except fullmodel finetuning. We use AdamW as the optimizer with a linear learning rate decay schedule and 6% of the training steps for warm-up. 5https://huggingface.co/docs/datasets/index We train the hyper-network on the train set D*train* following (Liu et al., 2019a). For training epochs, we set K1 = 5 and K2 = 1 on large datasets (SST-2, QNLI, QQP and MNLI), and K1 = 20 and K2 = 5 on low-resource datasets. We will run the search procedure once for each task. After the hyper-network is fully discretized, instead of retraining from scratch, we further train the remained network for K2 epochs, and we evaluate the model on the dev set and save the model checkpoint every I*eval* steps. The best checkpoint on the dev set is used to run predictions on the test set. We report the average scores on the test set and standard deviations across 5 random seeds. Other hyper-parameters We do pilot experiments on SST-2 using learning rates in {2e-5, 5e-5, 1e-4, 2e-4}, and find that 1e-4 performs the best. For fine-tuning, we try learning rates in {1e-5, 2e5, 5e-5} and find that 2e-5 performs the best. The number of training epochs for the baselines is set as K = 5 on large datasets (SST-2, QNLI, QQP and MNLI), and K = 20 on smaller datasets. We apply these hyper-parameters to all baselines, and no further hyperparameter-tuning are conducted. Therefore, the comparison is fair for all methods. ## C Experimental Results On The Cifar-10 Task To further validate that our GDNAS method can obtain better search performances than the DNAS baselines, we now conduct experiments in the general NAS setting. Following DARTS (Liu et al., 2019a), we conduct neural architecture search on the CIFAR-10 dataset (Krizhevsky, 2009) based on the search space of DARTS. We keep all the search settings identical to DARTS. We first train the hyper-network with frozen architectural weights for 50 epochs. After selecting the operation on an edge, we tune the hyper-net for 8 epochs to let the modified hyper-network to adjust. Following DARTS (Liu et al., 2019a), we run the search and architecture selection phase with four random seeds and report both the best and average test errors of the obtained architectures. The results are reported in Table 8, which compares GDNAS with the DNAS baseline methods. Our GDNAS method achieves 2.52% test error, which manageable search cost of 0.6 GPU days. The results of GDNAS is comparable to other method (like PDARTS (Chen et al., 2021)) with more complex ## D Learned Architectures On The Glue Tasks In this section, we present the learned adapter architectures on the RTE and SST-2 tasks. The learned adapter architecture are presented in Table 9, 10, respectively. 
| Architecture | Test Error | Params | Search Cost | Search | |-------------------------------|--------------|------------|---------------|-----------| | (%) | (M) | (GPU days) | Framework | | | NASNet-A (Zoph et al., 2017) | 2.65 | 3.3 | 2000 | RL | | AmoebaNet (Real et al., 2018) | 3.34 | 3.2 | 3150 | evolution | | ENAS (Pham et al., 2018) | 2.89 | 4.6 | 0.5 | RL | | DARTS (Liu et al., 2019a) | 3.02 | 3.3 | 0.4 | DNAS | | SNAS (Xie et al., 2018) | 2.85 | 2.8 | 1.5 | DNAS | | BayesNAS (Zhou et al., 2019) | 2.81 | 3.4 | 0.2 | DNAS | | P-DARTS (Chen et al., 2021) | 2.50 | 3.4 | 0.3 | DNAS | | R-DARTS (Zela et al., 2019) | 2.95 | - | 1.6 | DNAS | | GDNAS (ours) | 2.52 | 3.3 | 0.6 | DNAS | Table 8: The search results on the CIFAR-10 task. | Layer index | Adapter placement | Activation g1 | Activation g2 | Encoder operation 1 | |---------------|---------------------|-----------------|-----------------|-----------------------| | 1 | FFN | elu | null_act | mha_8 | | 2 | - | - | - | - | | 3 | - | - | - | - | | 4 | FFN | elu | null_act | mha_2 | | 5 | Block | tanh | null_act | mha_2 | | 6 | FFN | gelu_new | leaky_relu | conv_3 | | 7 | FFN | null_act | tanh | mha_2 | | 8 | - | - | - | - | | 9 | Attn | elu | relu | conv_3 | | 10 | - | - | - | - | | 11 | Attn | gelu_new | relu | conv_1 | | 12 | Attn | gelu_new | relu | conv_3 | | 13 | Block | relu | leaky_relu | conv_1 | | 14 | Attn | swish | relu | conv_3 | | 15 | FFN | leaky_relu | relu | conv_3 | | 16 | Block | leaky_relu | relu | conv_5 | | 17 | FFN | leaky_relu | relu | conv_1 | | 18 | Attn | leaky_relu | null_act | skip_connect | | 19 | FFN | relu | relu | conv_3 | | 20 | FFN | gelu_new | null_act | conv_3 | | 21 | - | - | - | - | | 22 | Block | tanh | null_act | mha_8 | | 23 | Attn | tanh | null_act | conv_3 | | 24 | FFN | tanh | null_act | mha_2 | | Layer index | Adapter placement | Activation g1 | Activation g2 | Encoder operation 1 | |---------------|---------------------|-----------------|-----------------|-----------------------| | 1 | Attn | null_act | null_act | conv_5 | | 2 | - | - | - | - | | 3 | FFN | elu | null_act | mha_2 | | 4 | Attn | tanh | null_act | mha_2 | | 5 | Attn | elu | null_act | skip_connect | | 6 | FFN | gelu_new | relu | conv_5 | | 7 | Block | leaky_relu | leaky_relu | conv_3 | | 8 | - | - | - | - | | 9 | Block | leaky_relu | leaky_relu | conv_5 | | 10 | FFN | relu | relu | conv_3 | | 11 | Block | tanh | null_act | mha_8 | | 12 | - | - | - | - | | 13 | Block | tanh | null_act | skip_connect | | 14 | - | - | - | - | | 15 | FFN | swish | swish | conv_1 | | 16 | Block | relu | relu | conv_1 | | 17 | FFN | relu | swish | conv_3 | | 18 | FFN | leaky_relu | relu | conv_3 | | 19 | Attn | gelu_new | relu | mha_8 | | 20 | FFN | swish | relu | mha_8 | | 21 | Attn | swish | null_act | conv_3 | | 22 | Attn | elu | null_act | conv_3 | | 23 | Attn | null_act | null_act | mha_8 | | 24 | Attn | elu | null_act | mha_2 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Left blank. ✓ A2. Did you discuss any potential risks of your work? Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. 
Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 6.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 6.2, Appendix C ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 6.4, Section 6.5, Section 6.6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 6.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
belani-flanigan-2023-automatic
Automatic Identification of Code-Switching Functions in Speech Transcripts
https://aclanthology.org/2023.findings-acl.469
Code-switching, or switching between languages, occurs for many reasons and has important linguistic, sociological, and cultural implications. Multilingual speakers code-switch for a variety of communicative functions, such as expressing emotions, borrowing terms, making jokes, introducing a new topic, etc. The function of code-switching may be quite useful for the analysis of linguists, cognitive scientists, speech therapists, and others, but is not readily apparent. To remedy this situation, we annotate and release a new dataset of functions of code-switching in Spanish-English. We build the first system (to our knowledge) to automatically identify a wide range of functions for which speakers code-switch in everyday speech, achieving an accuracy of 75% across all functions.
# Automatic Identification Of Code-Switching Functions In Speech Transcripts Ritu Belani The Harker School ritubelani@gmail.com Jeffrey Flanigan University of California, Santa Cruz jmflanig@ucsc.edu ## Abstract Code-switching, or switching between languages, occurs for many reasons and has important linguistic, sociological, and cultural implications. Multilingual speakers code-switch for a variety of communicative functions, such as expressing emotions, borrowing terms, making jokes, introducing a new topic, etc. The function of code-switching may be quite useful for the analysis of linguists, cognitive scientists, speech therapists, and others, but is not readily apparent. To remedy this situation, we annotate and release a new dataset of functions of code-switching in Spanish-English. We build the first system (to our knowledge) to automatically identify a wide range of functions for which speakers code-switch in everyday speech, achieving an accuracy of 75% across all functions. ## 1 Introduction Code-switching, or switching between languages within the same utterance or sentence (Poplack, 1980), commonly emerges in conversations between multilinguals and in written communication such as social media. In today's intersecting multilingual world, it is essential to develop computational tools that can process and analyze codeswitched speech and text. In recent years, there has been much progress in processing code-switched language. Many codeswitched datasets have been collected for a diverse set of natural language processing tasks such as sentiment analysis, NER, conversational systems, and many others (Sitaram et al., 2019). Workshops held on computational approaches to codeswitching have created shared tasks on language identification (Solorio et al., 2014) and Named Entity Recognition (NER) (Aguilar et al., 2019) in code-switched texts. Nuanced tasks like humor detection, sarcasm detection, and hate detection have been applied to Hindi-English code-switched data (Bansal et al., 2020). Despite these achievements, there is relatively little work on identifying the functions of code-switching. Although there are annotation schemes (Zentella, 1998; Hartmann et al., 2018) and some annotated datasets (Dey and Fung, 2014; Begum et al., 2016; Lee and Wang, 2015; Rudra et al., 2019), to our knowledge, there is no work automatically identifying the communicative function of a code-switch across a full range of qualities (Zentella, 1998). There are many potential applications for the task proposed in this paper, including improved cognitive models of bilingual processing, diagnosis of language disorders, and improved understanding of social factors of group membership and microaggressions. Code-switching analysis contributes to the development of cognitive models for bilingual language processing and production (Macnamara and Kushnir, 1971; Kecskes, 2006; Phillips and Pylkkänen, 2021; Kheder and Kaan, 2021). Understanding the functions of code-switching is critical for speech-language pathologists interacting with bilingual children, so as not to mistakenly diagnose them with a language disorder when in reality, children are taking advantage of a wide range of communicative strategies by code-switching (Miccio et al., 2009; De la Rosa, 2022). Studying codeswitching in people with dementia and Alzheimer's disease can provide insights into language impairments experienced as their condition becomes more severe (Santi et al., 1990; Friedland, 1998; Svennevig et al., 2019). 
Code-switching is also important for pragmatics research of understanding social identities and group membership that speakers are trying to assert (Auer, 2005; Cashman, 2005). Because of political undertones of using one language over another (Heller, 1992), code-switching is useful for understanding linguistic microagressions (Anchimbe, 2015; Takeuchi, 2022). Our contributions are the following: - An annotation scheme identifying the function of code-switching with 11 different labels, encompassing emotional, situational, and semantic functions of code-switching - A new dataset applying this annotation scheme to code-switched utterances in the Spanish-English Bangor Miami Corpus (Deuchar, 2010) - Trained models and experiments with XLMRoBERTa (Conneau et al., 2019) and a baseline Naive Bayes model, demonstrating the feasibility of the proposed task ## 2 Related Work 2.1 Code-Switched Data Annotation Several studies have annotated code-switched data according to their own frameworks (Lee and Wang, 2015; Begum et al., 2016; Hartmann et al., 2018; Rudra et al., 2019). Rudra et al. (2016) developed classifiers to determine whether Hindi-English code-switching on Twitter was opinionated or not and found that audiences preferred to use Hindi to express a negative sentiment and English to express a positive sentiment. Lee and Wang (2015) developed a system to identify the emotions in code-switched Chinese-English posts. Additionally, one corpus of Hindi-English code-switched conversations has broadly grouped the functions of code-switching in order to study the rules that govern code-switching (Dey and Fung, 2014). The framework we apply in this paper draws upon elements from Zentella (1998)'s framework, and it closely mirrors the approach of Begum et al. (2016). However, while their annotation scheme is based on Tweets, ours is specific to conversational code-switching. Linguists have also developed theoretical frameworks for code-switching without applying them to the systematic annotation of corpora (Poplack, 1980; Gumperz, 1982; MyersScotton, 1997; Zentella, 1998; Halim and Maros, 2014). ## 2.2 Code-Switching And Multilingual Language Models Previous research has proven the success of finetuning the pre-trained models Multilingual BERT and XLM-RoBERTa for tasks such as offensive language identification (Jayanthi and Gupta, 2021) and named entity recognition and part-of-speech tagging (Winata et al., 2021) in code-switched texts. Because of these models' state-of-the-art performance, we decided to fine-tune Multilingual BERT and XLM-RoBERTa on our tasks. ## 3 Annotation We describe the data annotated, present our annotation scheme, and give a comparison of our annotation to previous annotation schemes. ## 3.1 Data We annotate data from the Bangor Miami corpus (Deuchar, 2010), a publicly available anonymized code-switched Spanish-English conversational dataset consisting of audio recordings and human-created transcripts between two or more speakers. This dataset was selected for annotation because of its diverse examples of natural code-switching in spontaneous conversations, as opposed to datasets with synthetically manufactured examples of code-switching.1 We filter the data from the transcripts for sentences with instances of code-switching and annotate the first 26 transcripts of the 56 total transcripts. 
The statistics of our filtered dataset are: number of utterances = 1,379; number of sentences = 7,547; words in Spanish = 15,796; words in English = 20,357; ambiguous words (both Spanish and English) = 3,393. ## 3.2 Annotation Scheme We identify eleven labels in our annotation scheme as a mix of emotional, situational, and semantic functions of code-switching. Like Begum et al. (2016), we identify that a single code-switch could serve multiple functions because each code-switch can be seen as a sum of its semantic, structural, and sentiment-related dimensions. Thus, the labels are not mutually exclusive, and one code-switch can have multiple labels. Change topic: code-switch to introduce another viewpoint, change the tone, or clarify something. Ex: I'm not ready at all, *¿y qué tal tú?* (I'm not ready at all, and what about you?) Borrowing: a short word or phrase substitution in the other language, then returning to the original language. Ex: *Mi amiga de* high school va a casarse en dos semanas. (My friend from high school is going to get married in two weeks.) Joke: code-switch for comedic effect or a sarcastic quip. Ex: You're making such a big deal 1Our dataset and code can be found at https://github.com/ritumb0/Automatic-IdentificationCode-Switching. about it, como si murieran las personas en la calle. (You're making such a big deal about it, as if people were dying in the street.) Quote: code-switch to be true to how a statement was spoken by someone else. Ex: So my Spanish teacher said, "*Oye, necesitas* estudiar más." (So my Spanish teacher said, "Hey, you need to study more.") Translate: code-switch to repeat a statement or phrase, perhaps for the sake of emphasis or clarity. Ex: *A veces*, sometimes, I like to be by myself. (Sometimes, sometimes, I like to be myself.) Command: code-switch for imperative or mandate intended to get the addressee to do something. Ex: *Él no sabe lo que está diciendo*, just don't listen to him. (He doesn't know what he's saying, just don't listen to him.) Filler: a filler, brief interjection, or short noise intended to communicate meaning from the other language. Ex: *Y yo me* callé, you know, *porque no quería ofender* a nadie. (And I stopped talking, you know, because I didn't want to offend anybody.) Exasperation: code-switch to complain or emphasize anger or frustration. Ex: Ay, cómo me sigues molestando, I should just get up and leave! (Oh, how you keep annoying me, I should just get up and leave!) Happiness: code-switch to make a compliment or positive interjection. Ex: I just saw her dress, ¡qué lindo! (I just saw her dress, how pretty!) Proper noun: code-switch to talk about people or places whose names are in the other language or pronounced according to the other language. Ex: Escogimos United Airlines porque ellos ofrecen las mejores meriendas. (We chose United Airlines because they offer the best snacks.) Surprise: code-switch to interject or relay that something was unexpected. Ex: *¿Qué hizo ella?* Oh my god. (What did she do? Oh my god.) 61.6% of the utterances in the dataset contain more than one type of code-switching. It is possible for an utterance to contain code-switching that does not fall under our scheme and therefore gets no label, but this does not occur in our dataset. ## 3.3 Comparison To Previous Annotation Schemes Because of the broad range of domains to which our task and dataset can be applied, we choose to include a diverse set of tags to account for all the functions of code-switching we observe. 
Our categories quote, command, and translate are similar to categories in Begum et al. (2016) and Zentella (1998). However, we use courser-grained categories to expedite annotation and improve agreement. Our changing topic category is closely modeled after Zentella (1998)'s designation of Realignment, which includes a topic shift, rhetorical question, break from a narrative, aside comment, and checking with the listener. Begum et al. (2016) includes sarcasm and negative sentiment categories, which are subsets of our more expansive joke and exasperation categories. Fine-grained categories that Begum et al. (2016) include which we do not are the more fine-grained breakdowns of NarrativeEvaluative, Reinforcement, Cause-Effect, and Reported Speech. Table 4 in the appendix includes this comparison between annotation schemes in table form. We include emotion categories for code switching, which are not included in Begum et al. (2016) and Zentella (1998), as we find this to be an important reason for code switching in dialogues. Lee and Wang (2015)'s annotation scheme for emotions in Chinese-English code-switching includes happiness, sadness, anger, fear, and surprise, three of which we share in our categories of happiness, exasperation, and surprise. We have included categories such as using a filler and expressing happiness, frustration, or surprise which we find occurs during a conversation in which someone is reacting to the statements made by the other person. In a related annotation scheme, Dey and Fung (2014) establish a set of functions of codeswitching among the speakers in their HindiEnglish code-switching conversation corpus, which consists of Ease of Use, Comment, Referential Function, Topic Shift, Dispreference, Personalisation, Emphasis, No Substitute Word, Name Entity, and Clarification. However, they do not go in depth into their reasoning behind choosing these functions and offer little elaboration upon what each one entails. A few of the functions that we identify have typically not been regarded as instances of code-switching, such as borrowing and proper nouns (Scotton and Ury, 1977). However, these features may still be of interest for downstream applications, so we include them here. Label **Naive** Bayes mBERT **mBERT with** adapter XLM-R **XLM-R with** adapter Change topic 63.2 **86.3** ±1 85.7 ±1.7 86.3 ±0.9 **86.3** ±0.4 Borrowing 57.3 **78.5** ±6.7 77.4 ±3.1 75 ±2.3 70.9 ±2.1 Joke 59.6 **79.8** ±13.6 37.0 ±28.0 68.5 ±15.6 68.7 ±9.8 Quote 40.9 **75.6** ±2.4 74.3 ±5.2 69.3 ±4.9 70.3 ±4.6 Translate 46.4 72.2 ±10.7 73.9 ±9.6 **74.6** ±17.6 74 ±10.5 Command 70.5 59.6 ±31 **74.3** ±8.2 66.4 ±20.6 66.2 ±7.1 Filler 57.8 70.5 ±3.2 72.2 ±5.3 73.4 ±2.5 **74.4** ±2.5 Exasperation 62.3 53.2 ±16.8 51.4 ±14.2 70.5 ±14.4 **77.1** ±8.7 Happiness 64.1 **83.6** ±6.1 80.2 ±8.7 78.4 ±4.3 70.5 ±6.3 Proper noun 61.0 84.5 ±3.3 85.4 ±1.6 **85.5** ±1.9 83.6 ±1.9 Surprise 68.2 75.0 ±4.9 66.4 ±3.9 **79.4** ±3.6 73.3 ±7.4 Average 59.2 74.4 ±2.8 70.7 ±5.2 **75.4** ±3.6 74.1 ±3.1 Table 1: Accuracy (in %) of label detection in code-switching dialogue. We report the standard deviation from training with 5 different random seeds. ## 3.4 **Statistics And Inter-Annotator Agreement** In the annotated data, the frequency of some functions of code-switching over others validates theories about code-switching. For example, codeswitching to change topics is regarded as the most frequent type of code-switching (Zentella, 1998), a trend which is present in Table 2. 
There are three filtered entries which contain markers that a codeswitch is near, but are all spoken in one language, so they receive no label. To compute inter-annotator agreement, a subset of 100 code-switched utterances was labeled by another annotator. The trained annotator was fluent in English and Spanish. After engaging in a presentation which included the same information as Section 3.2 and discussing five examples with the principal annotator, the trained annotator labeled 100 code-switched utterances independently. Because our dataset is multi-label, Cohen-Kappa is computed for each label as a binary classification task. The agreement scores are shown in Table 2 for each category. ## 4 Automatic Detection Of The Code-Switching Functions To demonstrate the feasibility of the proposed task, we fine-tune classifiers on our annotated corpus to predict labels for code-switching in our data. For the train/dev/test split, four conversations (16% of the annotated data, 220 code-switched utterances) are randomly set aside as test data, and the rest of the data is organized into a 75/25 train/dev split. Results show the most effective approach is by building unique classifiers for each label. Because over half of the labels appear in less than 10% of the data, we find that the classifiers always predict 0 for these labels if provided with all of the training data. Thus, we create balanced training datasets for each label so that half of the examples are an instance of the label, and the other half are not. In addition to a baseline Naive Bayes classifier, we fine-tune bert-base-multilingual-cased (mBERT) and xlm-roberta-base (XLM-RoBERTa) classifiers using Huggingface.2 Because of the rel2https://huggingface.co/models. mBERT base has 110M parameters, and XLM-RoBERTa base has 125M. We use the | Label | Frequency | Cohen-Kappa Score | |--------------|-------------|---------------------| | Change topic | 65.0% | 0.60 | | Borrowing | 26.0% | 0.45 | | Joke | 3.7% | 0.14 | | Quote | 6.5% | 0.52 | | Translate | 5.9% | 0.30 | | Command | 8.3% | 0.50 | | Filler | 31.0 % | 0.30 | | Exasperation | 7.7% | 0.23 | | Happiness | 4.1% | 0.54 | | Proper noun | 25.9% | 0.40 | | Surprise | 11.6% | 0.07 | | Transcript | Gold | System | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-----------| | Original: MAR: That my children were being welcomed into the - olvídate si tiene como tres trainers! Tiene un cocinero! | Borrowing | Borrowing | | Translation: MAR: That my children were being welcomed into the - forget it if he has like three trainers! He has a chef! Original: JES: Invita a a alguna de las celebraciones. NIC: I don't know. I have JES: Tú sabes se caen bien. NIC: Yeah I'll tell her. Bueno | Filler | No Filler | | not her I gotta tell sister. Translation: JES: Invite [her] to one of the celebrations. NIC: I don't know. I have JES: You know they like each other. NIC: Yeah I'll tell her. Well not her I gotta tell sister. Original: IRI: Ajá. JAM: If if I happens to see like running blood or something like that I feel disgust and I feel weak and I IRI: My dad was just the same. 
Sí, sí, sí, sí, kryptonite. JAM: Kryptonite, yeah. IRI: No mi pa mi papá era igual. | Translate | Translate | | Translation: IRI: Uh huh. JAM: If if I happens to see like running blood or something like that I feel disgust and I feel weak and I IRI: My dad was just the same. Yes, yes, yes, yes, kryptonite. JAM: Kryptonite, yeah. IRI: No my da- my dad was the same. Original: PAI: En qué lo puedo ayudar? SAR: He's going to the airport. PAI: What up. SAR: Discúlpame. PAI: It sounds like you're saying | No Command | Command | | escúpame. Translation: PAI: How can I help you? SAR: He's going to the airport. PAI: What up. SAR: Excuse me. PAI: It sounds like you're saying spit me. | | | | Table 3: Sample system outputs on Spanish-English code-switched data with speaker IDs. We show gold and system | | | Table 3: Sample system outputs on Spanish-English code-switched data with speaker IDs. We show gold and system outputs for only one label type. However, these examples may have additional labels. atively small training set, to combat overfitting, we experiment with adapter layers for the two Transformers, but find that they do not perform as well. Training details and hyperparameters are in the appendix. We find the best model to be the XLMRoBERTa model. ## 4.1 Results The accuracy for each label with each model is shown in Table 1. Since the dataset is small, in order to quantify the statistical significance, we compute the mean accuracy of each model on each task and report the standard deviation across 5 training runs. ## 4.2 Qualitative Analysis Of Results In a qualitative analysis of the models' predictions, we observe that models are more likely to notice a borrowed word when it is surrounded by a longer string in the other language. In addition, when Google Colab Pro+ Tesla V100-SXM2-16GB GPU to train the models, and each model trains in less than 15 minutes. there are multiple code-switching points, it is more difficult for models to identify the full range of functions. Example outputs are shown in Table 3. ## 5 Conclusion This paper presents a corpus of Spanish and English code-switching with labels for the different functions for code-switching. We collect the data from the Bangor Miami corpus, create an annotation scheme for functions of code-switching, and annotate the data. We propose a classifier-based approach to detect the functions of code-switching in the annotated code-switching corpus. Results show that the XLM-RoBERTa model is the most effective at predicting functions of code-switching. We believe that analysis of functions of code-switching is an innovative approach towards bilingual speech diagnosis as well as contributing to a linguistic model of code-switching. ## 6 Limitations Our system has been trained on everyday conversations from Spanish-English bilinguals and may not be applicable to other domains. Additionally, the accuracy of the classifier varies depending on the label type. We use human-created transcripts, so results may not apply for automatic transcripts. There is a risk that incorrect conclusions can be drawn if the system does not meet the performance requirements. ## Acknowledgements We would like to thank Maggie Yan for her assistance with annotating data, Ms. Anuradha Datar for valuable discussions, the Harker Science Research program for their guidance, and the members of JLab and the anonymous reviewers for their feedback. ## References Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2019. 
Named entity recognition on code-switched data: Overview of the calcs 2018 shared task. *arXiv* preprint arXiv:1906.04138. Eric A Anchimbe. 2015. Code-switching: Between identity and exclusion. *Code-switching between* structural and sociolinguistic perspectives, page 139. Peter Auer. 2005. A postscript: code-switching and social identity. *Journal of Pragmatics*, 37(3):403– 410. Conversational Code-Switching. Srijan Bansal, Vishal Garimella, Ayush Suhane, Jasabanta Patro, and Animesh Mukherjee. 2020. Codeswitching patterns can be an effective route to improve performance of downstream nlp applications: A case study of humour, sarcasm and hate speech detection. *arXiv preprint arXiv:2005.02295*. Rafiya Begum, Kalika Bali, Monojit Choudhury, Koustav Rudra, and Niloy Ganguly. 2016. Functions of code-switching in tweets: An annotation framework and some initial experiments. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1644– 1650. Holly R. Cashman. 2005. Identities at play: language preference and group membership in bilingual talk in interaction. *Journal of Pragmatics*, 37(3):301–315. Conversational Code-Switching. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. Ivonne Marie Maldonado De la Rosa. 2022. Speech Language Pathologists' Approach to Code-Switching with Culturally Linguistically Diverse Clients. Ph.D. thesis, Inter-American University of Puerto Rico (Puerto Rico). Margaret Deuchar. 2010. Bilingbank spanish-english miami corpus. Anik Dey and Pascale Fung. 2014. A hindi-english code-switching corpus. In *LREC*, pages 2410–2413. Deborah Cecily Friedland. 1998. *Language loss in* bilingual speakers with Alzheimer's disease. Ph.D. thesis, Newcastle University. John J Gumperz. 1982. *Discourse strategies*. 1. Cambridge University Press. Nur Syazwani Halim and Marlyna Maros. 2014. The functions of code-switching in facebook interactions. Procedia-Social and Behavioral Sciences, 118:126– 133. Silvana Hartmann, Monojit Choudhury, and Kalika Bali. 2018. An integrated representation of linguistic and social functions of code-switching. In *Proceedings of* the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Monica Heller. 1992. The politics of codeswitching and language choice. *Journal of Multilingual & Multicultural Development*, 13(1-2):123–142. Sai Muralidhar Jayanthi and Akshat Gupta. 2021. Sj_aj@ dravidianlangtech-eacl2021: Task-adaptive pre-training of multilingual bert models for offensive language identification. *arXiv preprint* arXiv:2102.01051. Istvan Kecskes. 2006. The dual language model to explain code-switching: A cognitive-pragmatic approach. Souad Kheder and Edith Kaan. 2021. Cognitive control in bilinguals: Proficiency and code-switching both matter. *Cognition*, 209:104575. Sophia Lee and Zhongqing Wang. 2015. Emotion in code-switching texts: Corpus construction and analysis. In *Proceedings of the Eighth SIGHAN workshop* on chinese language processing, pages 91–99. John Macnamara and Seymour L Kushnir. 1971. Linguistic independence of bilinguals: The input switch. Journal of Verbal Learning and Verbal Behavior, 10(5):480–487. Adele W Miccio, Carol Scheffner Hammer, and Bárbara Rodríguez. 2009. 
*Code-switching and language disorders in bilingual children.* Cambridge University Press. Carol Myers-Scotton. 1997. *Duelling languages: Grammatical structure in codeswitching*. Oxford University Press. Ana Celia Zentella. 1998. Growing up bilingual: Puerto rican children in New York. Blackwell. Sarah F Phillips and Liina Pylkkänen. 2021. Composition within and between languages in the bilingual mind: Meg evidence from korean/english bilinguals. Eneuro, 8(6). Shana Poplack. 1980. Sometimes i'll start a sentence in spanish y termino en espanol: toward a typology of code-switching1. Koustav Rudra, Shruti Rijhwani, Rafiya Begum, Kalika Bali, Monojit Choudhury, and Niloy Ganguly. 2016. Understanding language preference for expression of opinion and sentiment: What do hindi-english speakers do on twitter? In *Proceedings of the 2016* conference on empirical methods in natural language processing, pages 1131–1141. Koustav Rudra, Ashish Sharma, Kalika Bali, Monojit Choudhury, and Niloy Ganguly. 2019. Identifying and analyzing different aspects of english-hindi codeswitching in twitter. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 18(3):1–28. Susan De Santi, Loraine K Obler, Helene SaboAbramson, and Joan Goldberger. 1990. Discourse abilities and deficits in multilingual dementia. In Discourse ability and brain damage, pages 224–235. Springer. Carol Myers Scotton and William Ury. 1977. Bilingual strategies: The social functions of code-switching. Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Krishna Rallabandi, and Alan W Black. 2019. A survey of code-switched speech and language processing. arXiv preprint arXiv:1904.00784. Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, et al. 2014. Overview for the first shared task on language identification in code-switched data. In *Proceedings of the First* Workshop on Computational Approaches to Code Switching, pages 62–72. Jan Svennevig, Pernille Hansen, Hanne Gram Simonsen, and Anne Marie Dalby Landmark. 2019. Codeswitching in multilinguals with dementia: patterns across speech contexts. *Clinical linguistics & phonetics*, 33(10-11):1009–1030. Jae DiBello Takeuchi. 2022. Code-switching as linguistic microaggression: L2-japanese and speaker legitimacy. *Multilingua*. Genta Indra Winata, Samuel Cahyawijaya, Zihan Liu, Zhaojiang Lin, Andrea Madotto, and Pascale Fung. 2021. Are multilingual models effective in codeswitching? *arXiv preprint arXiv:2103.13309*. ## A Appendix Hyperparameters For the mBERT and XLMRoBERTa models as well as their respective adapter models, our hyperparameters were 20 epochs, a weight decay of 0.01, and we tuned the batch size from the set 4, 16 and the learning rate from the set 2e−5, 0.0001 with grid search. In order to account for the variance between different initial seeds, we first found the best performing hyperparameter combination for each model on each task with the default seed of 42, then we ran the model four additional times with the same hyperparameters but with a different seed, from 30 to 20 to 10 to 5. Comparison to Previous Annotation Schemes We give a mapping between labels in our annotation scheme and labels in other code-switching annotation schemes in Table 4. | Zentella (1998) | Begum et al. 
(2016) | Our paper | |-----------------------------------|-----------------------|--------------------------------| | Topic shift, | Narrative-Evaluative, | Change topic | | Declarative/question shift | Cause-Effect | (discourse) | | Narrative frame break | Sarcasm | Joke (sociological) | | Direct Quotations | Quotations | Quote | | Indirect Quotations | Reported Speech | (discourse) | | Aggravating requests | Imperative | Command | | Mitigating requests | (sociological) | | | Attention attraction Translations | Translation | Translate (clarify) | | - | Reinforcement | - | | Crutching | - | Borrowing (lexical) | | Filling in | - | Filler (discourse) | | - | - | Proper noun (lexical) | | - | - | Happiness (express emotion) | | - | Abuse/Neg. Sentiment | Exasperation (express emotion) | | - | - | Surprise (express emotion) | Table 4: Comparison of our annotation scheme with other frameworks for code-switching ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✓ A2. Did you discuss any potential risks of your work? 6 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract, 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3.1, 4 ✓ B1. Did you cite the creators of artifacts you used? 3.1, 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 3.1, 4 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 5 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 3.1 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4 ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4, Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4.1 ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 3.4 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 3.4 ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? We recruited the annotator in an informal context; they were a friend who agreed to spare a few hours of their time to help us out. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? We used a pre-existing, publicly available and anonymized dataset collected by Bangor Miami researchers. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? We used a pre-existing, publicly available and anonymized dataset collected by Bangor Miami researchers. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 3.1
wang-etal-2023-federated
Federated Domain Adaptation for Named Entity Recognition via Distilling with Heterogeneous Tag Sets
https://aclanthology.org/2023.findings-acl.470
Federated learning involves collaborative training with private data from multiple platforms, while not violating data privacy. We study the problem of federated domain adaptation for Named Entity Recognition (NER), where we seek to transfer knowledge across different platforms with data of multiple domains. In addition, we consider a practical and challenging scenario, where NER datasets of different platforms of federated learning are annotated with heterogeneous tag sets, i.e., different sets of entity types. The goal is to train a global model with federated learning, such that it can predict with the complete tag set, i.e., with all the occurring entity types for data across all platforms. To cope with the heterogeneous tag sets in a multi-domain setting, we propose a distillation approach along with an instance weighting mechanism to facilitate knowledge transfer across platforms. Furthermore, we release two re-annotated clinic NER datasets for testing the proposed method in the clinic domain. Our method shows superior empirical performance for clinic NER with federated learning.
# Federated Domain Adaptation For Named Entity Recognition Via Distilling With Heterogeneous Tag Sets Rui Wang1 Tong Yu2† Junda Wu3 Handong Zhao2 **Sungchul Kim**2 Ruiyi Zhang2 Subrata Mitra2 **Ricardo Henao**1,4 1Duke University 2Adobe Research 3New York University 4KAUST {rui.wang16, ricardo.henao}@duke.edu jw6466@nyu.edu {tyu, hazhao, sukim, ruizhang, sumitra}@adobe.com ## Abstract Federated learning involves collaborative training with private data from multiple platforms, while not violating data privacy. We study the problem of federated domain adaptation for Named Entity Recognition (NER), where we seek to transfer knowledge across different platforms with data of multiple domains. In addition, we consider a practical and challenging scenario, where NER datasets of different platforms of federated learning are annotated with heterogeneous tag sets, *i.e.*, different sets of entity types. The goal is to train a global model with federated learning, such that it can predict with a complete tag set, *i.e.*, with all the occurring entity types for data across all platforms. To cope with the heterogeneous tag sets in a multi-domain setting, we propose a distillation approach along with a mechanism of instance weighting to facilitate knowledge transfer across platforms. Besides, we release two re-annotated clinic NER datasets, for testing the proposed method in the clinic domain. Our method shows superior empirical performance for clinic NER with federated learning. ## 1 Introduction Federated learning for Named Entity Recognition (NER) is the task of collaboratively learn with NER datasets from multiple platforms, while not violating data privacy, *i.e.*, without sharing data across different platforms (Ge et al., 2020). A platform can be an institution, *e.g.*, a hospital or a drug company, where a private collection of clinic NER dataset are locally stored. In reality, data from different platforms are usually sampled from different clinic domains, due to different patient groups, etc. Additionally, different schemes may be used when annotating for different platforms. This happen when healthcare providers use customized tag sets to create their own datasets (Beryozkin et al., 2019). As an example, a hospital may hold a dataset of †Corresponding Author clinical reports from doctors annotated with entities of patient *Disease* and *Drugs* prescribed by the doctors, while a drug company may have text data of patient feedback, annotated with *Drugs* and their adverse drug effects (ADE). In this case, it would be mutually beneficial for the hospital and the drug company if they can train in a federated manner a shared (global) NER model with both datasets. The global model should in principle predict with the complete tag set, i.e., {Disease, Drugs, ADE}, enabling the hospital to also recognize the ADE in their clinic reports and the drug company to identify *Disease* from their patient feedback, without sharing or re-annotating their local datasets. This can be regarded as a problem of domain adaptation, since the key challenge is to efficiently transfer knowledge of locally unlabeled entity types, i.e., Disease and ADE across domains/platforms, so that the resulting global model can work for both the hospital and the drug company. So motivated, we study federated domain adaptation for clinic NER in the multi-domain setting where datasets from multiple platforms representing different domains. 
Further, we address a more challenging scenario in which different platforms also annotate with different tag sets, *i.e.*, set of entity types. The goal is to benefit all platforms from federated learning, via training a global model that predicts with the complete tags set, including all the encountered entity types, for text of different domains/platforms. Note there are previous works studying federated NER in a multi-domain setting (Ge et al., 2020; Zhao et al., 2021). However, these works generally presume that the NER model for one platform only predicts with the entity types annotated in the local training data, unlike our setting that requires predicting on a larger tag set (the complete tag set). Here, we claim that such an assumption might not be practical in a multi-domain setting. To illustrate this, suppose there is a platform with annotations of training data ![1_image_0.png](1_image_0.png) that sufficiently cover enough entity types for its own propose of evaluation. For this platform, with enough amount of training data locally, joint training with data from other distant domains may harm the performance of the resulting model on its local data, *i.e.*, there is no guarantee that data of other platforms is similar enough to be beneficial to its own domain. As a result, such a platform might be reluctant in joining federated learning, further considering the potential risk of data leakage in any federated learning system (Li et al., 2021). On the contrary, we require to predict with the complete tag set, while annotating with incomplete (subset of the complete tag set) and heterogeneous tag sets locally. This motivates a platform to participate in federated learning, so that it can benefit from knowledge of locally unlabeled entity types transferred from other platforms. To address the heterogeneous tag sets and facility knowledge transfer across platforms, with regards to the locally unlabeled entity types, we propose a distillation approach, that distills knowledge of unlabeled entity types from other platforms via pseudo-annotations with the complete tag set. Based on the proposed distillation, we further propose a instance weighting mechanism, so that knowledge learned with local data is more transferable across platforms. We adopt a promptbased NER model (Chen et al., 2022) with superior performance for cross-domain NER, and only transmit prompt-related parameters (7% of the model size) for each round of federated learning to reduce the communication cost. We should note that a comprehensive evaluation of the global model in the setting considered requires testing data with the complete tag set for each domain/platform. However, existing public clinical datasets of different domains are usually annotated using different tag sets (with small overlap), *i.e.*, they lack evaluation data that is consistently annotated with the complete tag sets for multiple domains. Therefore, we re-annotate the ADE-Corpus (Gurulingappa et al., 2012) and SMM4H (Weissenbacher et al., 2019) datasets using the annotation scheme of CADEC (Karimi et al., 2015), resulting in datasets of multiple domains that are annotated consistently for evaluation. Our contributions are as follow: - We study federated learning for clinic NER, where data in different platforms can be from multiple domains and annotated with heterogeneous tag sets. - We propose a distillation approach along with a weighting mechanism to facilitate knowledge transfer across different platforms. 
- We release two re-annotated clinic datasets for evaluation in clinical settings and to encourage future research. Empirical results show that our method delivers superior performance in the considered setting. ## 2 Preliminaries 2.1 Problem Formulation Figure 1 illustrates our considered setting for clinic NER. Suppose there are K platforms in federated learning with datasets {Dk} K k=1, Dk = 7450 ## Algorithm 1 Our Federated Learning Algorithm. Assume the non-trainable parameters of the NER model are available in each platform. Randomly initialize the trainable parameters, θ1, for the NER model on the server of federated learning. Initialize the instance weights w k i,1 = 1, for i = 1*, . . . , N*k and k = 1*, . . . , K* (see Section 3.3). for the t th round of federated learning do % *Update instance weighting* Compute w k i,t+1 according to (12). % *Local training* Download θtto each local platform. Train locally on each platform, via distilling with the pseudo-complete annotation set resulting in {θ k t } K k=1 (see Section 3.2). % *Aggregation* Upload {θ k t } K k=1 to the server and aggregate with (1), generating θt+1 for the next round of federated learning. end for {Xi,Y Tk i} Nk i=1, with Nk being the size of Dk. Xi is a text sequence and Y Tk iis its NER label sequence, annotated with tag set Tk. In Figure 1, we have T1 = {Drug, ADE}, T2 = {Disease}, *etc.* We assume Xi of different platforms are from different text domains. The goal is to train an NER model that predicts with the complete tag set *i.e.* T = ∪ K k=1Tk, for all platforms, without data being shared across different platforms. ## 2.2 Federated Learning As illustrated in Figure 1, federated learning involves periodical communication between the server and platforms involving the trainable parameters of the model. Specifically, let θt be the trainable parameters of the NER model before the t th communication round of federated learning. We assume the non-trainable parameters, *e.g.*, the pretrained parameters of a PLM, are available locally in each platform. A typical training cycle of federated learning includes: Local Training: θtis transferred to each platform and is then trained/updated locally with the private data of each platform. Specifically, θtis trained for Eloc epochs separately on different platforms. We denote {θ k t } K k=1 as the trainable parameters of different platforms from local training. Aggregation: After local training, each platform will transfer their updated parameters {θ k t } K k=1 to the server. Since the goal of our federated learning setting is to training a global model for all platforms, the server will aggregate the {θ k t } K k=1, generating θt+1 for the next round of communication. The aggregation is usually performed via weighted averaging, *i.e.*, $$\theta_{t+1}=\sum_{k=1}^{K}m_{k}\theta_{t}^{k},\qquad\qquad(1)$$ where Pk mk = 1. Since aggregation is not the focus of this work, we will discuss the values of mk in the Appendix. Algorithm 1 shows the complete procedure of federated learning. The proposed distillation and instance weighting mechanism are described in Sections 3.2 and 3.3, respectively. ## 3 Methodology 3.1 Model Architecture In order to efficiently train a global model for all the participants, we need to i) Facilitate knowledge transfer across different platforms/domains, so that each client can benefit from knowledge regarding locally unlabeled entity types, transferred from other platforms. ii) Reduce the communication cost of federated learning. 
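The aggregation step in (1) amounts to a weighted average of the trainable parameters uploaded by the platforms. A minimal sketch is given below, assuming each platform uploads a dictionary of prompt-related tensors; the uniform choice of m_k is only illustrative, since the actual values of m_k are discussed in the Appendix.

```python
import torch

def aggregate(client_params, weights):
    """Server-side aggregation of Eq. (1): theta_{t+1} = sum_k m_k * theta_t^k.

    client_params: list of K dicts {parameter_name: tensor} uploaded after local training.
    weights:       list of K scalars m_k with sum(m_k) = 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-6
    return {name: sum(m_k * p[name] for m_k, p in zip(weights, client_params))
            for name in client_params[0]}

# Toy usage: K = 3 platforms, each uploading only the prompt tokens q (N_q = 10, d = 768).
K = 3
clients = [{"prompt.q": torch.randn(10, 768)} for _ in range(K)]
theta_next = aggregate(clients, [1.0 / K] * K)  # downloaded to every platform next round
```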
With these considerations, we adopt LightNER (Chen et al., 2022) as our NER model for federated learning. Below, we briefly describe the LightNER model, along with the rationale that we adopt it for our setting. Sequence-to-Sequence NER: NER is conventionally identified a sequence labeling problem, which predicts with a label-specific output layer (Luo et al., 2020; Lee et al., 2020) on top of a Pretrained Language Model (PLM), *e.g.*, BERT. However, such models may have inferior performance for cross-domain problems, since the label-specific output layer that is trained from scratch cannot benefit from the pretrained knowledge for generalization (Chen et al., 2022). To solve this, recent works (Cui et al., 2021; Chen et al., 2022) adopt a sequence-to-sequence framework for NER based on the pretrained BART model (Lewis et al., 2019), where the entity labels are predicted as natural language tokens in the output sequence, leveraging the pretrained semantics knowledge from BART token embeddings for better generalization. By formulating NER as sequence-to-sequence generation, LightNER achieve superior performance for crossdomain NER tasks, a merit that we value for our setting involving multiple domains. Given a length-L text sequence Xi = [xl] L l=1 from platform k, 7451 the model should generate the following label sequence YT k i, indicating the start/end positions and entity types of each entity within Xi, $$Y^{T^{k}}=[\mathbf{p}_{1}^{T^{k}};\cdots;\mathbf{p}_{n}^{T^{k}}],\,\mathbf{p}_{c}^{T^{k}}=[\mathbf{s}_{c},\mathbf{e}_{c},\mathbf{t}_{c}],\tag{2}$$ where $c=1,\cdots,n$ and $[\cdot]$ denotes concatenation. where c = 1, · · · , n and [ ; ] denotes concatenation. n is the number of entities in Xi. pT c denotes the c th entity annotated within Xi, where sc/ec denotes its start/end position in Xi, and tc ∈ T kis the entity type. LightNER follows the encoder-decoder architecture of BART, generating the label sequence YT i autoregressively, given Xi. The LightNER model for platform k can be trained via minimizing the following loss of cross-entropy, $$\mathcal{L}_{i}^{T^{k}}=-\sum_{l=1}^{3n}\log P_{\theta}(\mathbf{y}_{l}^{T}|\mathbf{X}_{i},\mathbf{y}_{1}^{T^{k}},\cdots,\mathbf{y}_{l-1}^{T^{k}}),\tag{3}$$ where yT k lis the l th element of YT k iand θ denotes the trainable parameters of LightNER. Prompt Tuning: To preserve the pretrained knowledge from BART for better generalization across domains/platforms, we follow LightNER that freezes the pretrained parameters of BART and inserts tunable prompt parameters for training. Specifically, let q ∈ RNq×D denote an array of Nq prompt tokens, where we have Nq = 10 and d = 768. q is projected by a trainable layer into the keys and values of the self-attention in each pretrained transformer layer, with q being shared by all layers. The projection on q follows (Chen et al., 2022) and is detailed in Appendix B. As a result, the number of trainable parameters in the model is significantly reduced, *i.e.*, only 7% of the total model size, compared with fine-tuning all the model parameters. This leads to reduced communication cost for federated learning, considering that we only need to communicate trainable parameters between the server and platforms. ## 3.2 Distillation With Pseudo-Complete Annotation The local datasets of each platform only contain annotations of {T k} K k=1, T k ⊂ T . For platform k, we denote entity types that are not annotated locally as T\k, with *T ∪ T* \k = T . 
During local training, if the local trainable parameter θ k t (Algorithm 1) is trained solely with the local annotations of T k, the resulting NER model will learn to ignore the entities of T\kfrom the input text sequences. This contradicts our goal of predicting with the complete tag set T . To solve this problem, we notice that the parameter θtin Algorithm 1 is aggregated from updates of different platforms ({θ k t−1} K k=1) with {T k} K k=1. Thus, the NER model with θt should be able to predict with the complete tag set T , including T kfrom each platform k. Additionally, considering that θtis downloaded to each platform before the local training of the t th round of federated learning (Algorithm 1), the model with θt should be locally available for each platform. Inspired by this, we propose to distill from the model with θt while training locally with θ k t , so that θ k t can be trained with T instead of T k. Specifically, we extract predictions regarding T\kfrom the the model with θt and combine them with the local annotations of T k, constituting the pseudo-complete annotation. θ k t will be trained with the pseudo-complete annotation of the complete tag set T . Let Xi be the i th text sequence from platform k and YT k i = [pT k 1; *· · ·* ; pT k nloc ] be the local annotation regarding T k. nloc is the number of locally annotated entities of T k. Given Xi, we first greedly decode the prediction Yˆ T ifrom θt, $$\hat{\mathbf{y}}_{l}^{T}=arg\,max_{\mathbf{y}}\ P_{\mathbf{\theta}_{t}}(\mathbf{y}|\mathbf{X}_{1},\hat{\mathbf{y}}_{1}^{T},\cdots,\hat{\mathbf{y}}_{l-1}^{T}).\tag{4}$$ As mentioned above, the prediction Yˆ T ishould have the complete tag set T . Predictions regarding T\k within Yˆ T irepresents the knowledge of un-annotated entity types in platform k, which is transferred from other platforms. We extract such predictions from Yˆ T i, denoted as, Yˆ T\k i = [pT\k 1; *· · ·* ; pT\k ntrans ]. n*trans* is the number of entities in Yˆ T\k i. {pT\k c } n*trans* c=1 is defined as in (2), representing entities that are predicted as types from T\k with θt. We combine Yˆ T\k i with the existing annotation YT k ifrom platform k, generating the pseudo-complete annotation, $$Y_{i}^{T}=[Y_{i}^{T^{k}};\hat{Y}_{i}^{T^{k}}]=[p_{1}^{T};\cdots;p_{n_{loc}+n_{trans}}^{T}],\tag{5}$$ where each entity is from either YT k ior Yˆ T\k i. YT iis constructed to cover the complete tag set T , illustrated in Figure 2. For local training, θ k t is trained with the pseudo-complete annotation YT i instead of YT k i, with the loss of (3). ## 3.3 Instance Weighting With (5), θ k tis expected to be trained with {Xi,YT i} Nk i=1 during local training. Let yl be the l th element of YT i, which can be categorized as 7452 2) combine with akathisia, ADR 1) extract Clozapine, Drug, schizophrenia, Disease Clozapine, Drug, Drug, akathisia, ADR, with schizophrenia, Disease Clozapine, Drug, akathisia, ADR, schizophrenia, Disease 3) train Model of Model of Figure 2: Constructing the pseudo-complete annotation YT ifor training of θ k i . For platform k, we show entities of either yl ∈ YT kor yl ∈ Yˆ T\k i. 
The training loss can be decomposed as, $$\begin{split}{\mathcal{L}}^{\mathcal{T}}&={\mathcal{L}}^{\mathcal{T}^{k}}+{\mathcal{L}}^{\mathcal{T}^{\backslash k}}\\ &={\frac{1}{N_{k}}}\sum_{i=1}^{N_{k}}{\mathcal{L}}_{i}^{\mathcal{T}^{k}}+{\frac{1}{N_{k}}}\sum_{i=1}^{N_{k}}{\mathcal{L}}_{i}^{\mathcal{T}^{\backslash k}},\quad\quad(7)\end{split}$$ where, $$\mathcal{L}_{i}^{T^{k}}=-\sum\log P_{\theta_{t}^{k}}(y_{l}|\mathbf{X},y_{1}^{T},\cdots,y_{l-1}^{T}),\tag{8}$$ $$y_{l}\in Y_{i}^{T^{k}}$$ $$\mathcal{L}_{i}^{T^{\backslash k}}=-\sum\log P_{\theta_{t}^{k}}(y_{l}|\mathbf{X},y_{1}^{T},\cdots,y_{l-1}^{T}).$$ (9) $$y_{l}\in Y_{i}^{T^{\backslash k}}$$ {LT k } K k=1 represents training with the local annotations of T k. The knowledge learnt from LT k will be transferred to other platforms where annotations of T kis not available. Correspondingly, {LT\k} K k=1 represents how platform k can benefit from knowledge of T\kthat is transferred from the other platforms where annotations for T\k are available. For platform k, the model is expected to benefit from the knowledge learnt with {LT k′ }k′̸=k, regarding entity types that are not locally annotated (T\k), so that it can identify entities of T\k via training with LT\k. With this perspective, we denote {LT k } K k=1 and {LT\k} K k=1 as the source and *target* loss, respectively, in terms of the direction of knowledge transfer. To facilitate the knowledge transfer across platforms discussed above, we propose a weighting mechanism for the training instances of the source loss {LT k } K k=1, so that the knowledge learnt from the source loss can be more transferable for the target loss {LT\k t } K k=1. Specifically, we want to upweight instances that are more beneficial for the training in other platforms and *vise versa*. Formally, we rewrite the source loss as, $${\mathcal{L}}^{{\mathcal{T}}^{k}}={\frac{1}{N_{k}}}\sum_{i=1}^{N_{k}}w_{i,t}^{k}\times{\mathcal{L}}_{i}^{{\mathcal{T}}^{k}},\qquad\quad(10)$$ where w k i,t = 1 reduces to (7). w k i,t is the weight for the i th sample for platform k at the t th federated learning round, measuring how the knowledge from training with LT k i(source) is transferable for the target loss in other platforms, *i.e.*, {LT\k}k′̸=k (target). For conciseness, we omit the subscript t that denotes the number of federated learning round in presenting the loss functions, but only showing it for the weight w k i,t. The question remaining is how to measure the transferablility of knowledge learnt from LT k iin the federated learning setting. Since the federate learning is a privay-preserving framework that only allows communicating model updates between the server and platforms, we define wk i,t according to the gradient similarity between the source and target loss. Specifically, for the i th sample of platform k, we first compute the gradients of its source loss and mean of the target loss from other platforms, which we denote as g src iand g tgt, respectively, $$g_{i}^{s r c}=\frac{\partial{\mathcal{L}}_{i}^{\mathcal{T}^{k}}}{\partial\mathbf{q}},\mathbf{g}^{t g t}=\sum_{k^{\prime}\neq k}\frac{\partial({\mathcal{L}}^{\mathcal{T}^{\backslash k}})}{\partial\mathbf{q}},\qquad(11)$$ q is the prompt embeddings as introduced in Section 3. wk i,t is updated with the cosine similarity 7453 between the two gradients, $$w^{k}_{i,t+1}=\alpha\cdot w^{k}_{i,t}+(1-\alpha)\cdot\frac{<\mathbf{g}^{s r c}_{i},\mathbf{g}^{t g t}>}{||\mathbf{g}^{s r c}_{i}||_{2}||\mathbf{g}^{t g t}||_{2}},\tag{12}$$ where α is a momentum value. 
< ·, · > denotes the dot product and *|| · ||*2 is the L2 norm. wk i,t is computed before local training (Algorithm 1). For platform k, we save g src ilocally and upload the gradient of the target loss LT\kto the server for computing g tgt. g tgt is computed on the server side, then downloaded to each platform for updating w k i,t with (12). We further elaborate the procedures of updating w k i,t in Algorithm 2. Note that updating w k i,t does not involving training of the NER model and w k i,t is not shared to the server or other platforms. We should also notice that the above uploading and downloading of gradients introduce additional communication cost. With such concern, we only compute gradients with respect to q ∈ RNq×d(as in (11)), which has only several thousand parameters (Section 3), inducing only minor communication cost. We use q to calculate the gradient similarity, because q is shared by each pretrained layer in BART (Section 3), thus it should correspond to the general information regarding prompt tuning. ## 4 Related Works NER with Heterogeneous Tag Sets. Greenberg et al. (2018); Beryozkin et al. (2019) investigate on training over NER datasets with heterogeneous tag sets. However, they assume these datasets are available in a *centralized* location. Such an assumption is not practical in training with clinical data, for which privacy preservation is of primary concern (Hassan Mahlool and Hamzah Abed, 2022). Additionally, they do ot explicitly consider the differences in data distribution for the text from different datasets. Our work is orthogonal to these works, since we assume *decentralized* training, *i.e.*, federated learning, where we account for the issues of privacy and communication costs that do not exist in training with *centralized* datasets. Federated Domain Adaptation Peng et al. (2019) is the first work studying domain adaptation for federated learning. Recently, Hong et al. (2021) further studies the fairness and debasing problem in federated domain adaptation. These works adopt a discriminator module for adversarial domain adaptation, which increases the communication cost of federated learning. Yao et al. (2022) studies federated domain adaptation via sharing statistics of data distributions of the local platforms. However, such an approach may be vulnerable to membership inference attacks (Shokri et al., 2017), resulting in data leakage, thus may not be applicable to clinical data for which data privacy is the primary concern. Additionally, these work only consider the task of image classification. Our work studies federated domain adaptation for clinical NER. Note that federated domain adaptation is different from federated learning with non-IID (Independent and Identically Distributed) data (*e.g.,*, (Li et al., 2020)). The latter focus on the problem with slow convergence or diverged results in aggregating with updates from non-IID data. Instead, we targets at effectively transferring knowledge across platforms/domains, so that each platform can benefit from knowledge of locally unannotated entity types transferred from other platforms. Federated Learning for NER. Ge et al. (2020) presents a pilot study of federated learning for clinical NER. Zhao et al. (2021) introduces adversarial training to solve the adversarial attack problem for federated NER. One of the major problems is that these approaches require sharing or communicating the whole NER model (or its encoder) between the server and platforms of federated learning. 
This will induce huge communication cost in training with the recent Pretrained Langauge Models (PLMs) (Kenton and Toutanova, 2019; Lewis et al., 2019), *i.e.*, containing hundreds of millions of parameters. In this work, we study using a promptbased pretrained NER model (Chen et al., 2022) for our federated learning, thus only communicating prompt-related parameters. This significantly reduces the communication cost compared to fine tuning all the pretrained parameters. Further, different from Ge et al. (2020); Zhao et al. (2021), we focus on federated domain adaptation that efficiently transfer knowledge among platforms of different domains. (Wu et al., 2021) investigates knowledge distillation in federated learning with NER, but is not targeting the federated domain adaptation problem as in our setting. ## 5 Experiments 5.1 Baselines And Ablations We first compare with the classic adversarial domain adaptation with (Ganin et al., 2016), and two more recent works of federated domain adaptation (Peng et al., 2019; Hong et al., 2021). Note that these methods are originally designed for image classification. We re-implement them with our NER model, *i.e.*, LightNER, for comparison. Please refer to Appendix A for details. Note that these approaches generally require an additional domain discriminator for adversarial domain matching. Such discriminator is trained and communicated along with the NER model. This introduces additional communication cost, as with uploading and downloading the gradients of q in Section 3.3. In the Appendix A, we compared the communication cost of our instance weighting with q with that of the discriminator. Our communication cost is lower, while achieving better performance as in Table 1 and 2. We denote training with Algorithm 1 as *Ours*. For the ablation study, we consider: (i) Ours w/o distill&weight. This is to train the LightNER model without distillation in Section 3.2 and instance weighting in Section 3.3. Specifically, the model is trained with only the local annotation YT kinstead of (5) , and w t i,k is always set to 1. (ii) Ours w/o weight. It trains the NER model with (5) (as in *Ours*), while setting w t i,k = 1, *i.e.*, no instance weighting. Please refer to the appendix C for implementation details. ## 5.2 Experiments With Ontonote 5.0 Before evaluating with clinic data, we first demonstrate our method with OntoNote 5.0, a classic NER dataset of 18 entity types (*|T |* = 18), with data from six domains: nw, tc, wb, bn, bc and mz. We have the number of platforms K = 6, with each platform representing data of a different domain. To simulate the heterogeneous tag sets, we assume the training data of each domain/platform is annotated with 3 entity types (|T k| = 3), which are randomly sampled from the 18 entity types without replacement. For OntoNote 5.0, we study the challenging scenario of federated domain adaptation that each entity type is only annotated in one of the six platforms, *i.e.*, T k1 ∩ T k2 = ∅, for k1, k2 ∈ {1, · · · , K} and k1 ̸= k2. We randomly sample five time and report the F1 score for each domain, averaged over different samplings. For each domain, the F1 score is computed via evaluating the global model on the testing dataset of this domain with all the 18 entity types. Table 1 shows the resulting F1 score with OntoNote 5.0. Our method outperforms the baselines with a large margin. 
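For clarity on how this setup is simulated, the sketch below samples the disjoint per-platform tag sets; the routine and seed handling are our own illustration, and the 18 OntoNotes 5.0 entity types are listed only for completeness.

```python
import random

# The 18 OntoNotes 5.0 entity types (the complete tag set T for this experiment).
ONTONOTES_TAGS = ["PERSON", "NORP", "FAC", "ORG", "GPE", "LOC",
                  "PRODUCT", "EVENT", "WORK_OF_ART", "LAW", "LANGUAGE", "DATE",
                  "TIME", "PERCENT", "MONEY", "QUANTITY", "ORDINAL", "CARDINAL"]

def sample_disjoint_tag_sets(all_tags, num_platforms=6, tags_per_platform=3, seed=0):
    """Partition T into disjoint local tag sets T^k (|T^k| = 3), one per domain/platform.

    With 6 platforms x 3 types, each of the 18 types is annotated in exactly one
    platform, i.e. T^{k1} and T^{k2} never overlap for k1 != k2.
    """
    assert num_platforms * tags_per_platform == len(all_tags)
    rng = random.Random(seed)
    shuffled = list(all_tags)
    rng.shuffle(shuffled)
    return [shuffled[k * tags_per_platform:(k + 1) * tags_per_platform]
            for k in range(num_platforms)]

# One of the five random samplings used for evaluation (the seed value is illustrative).
tag_sets = sample_disjoint_tag_sets(ONTONOTES_TAGS, seed=0)
```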
Instead of communicating domain discriminators as baselines, we communicate the gradients of prompt embeddings, which has a smaller size (Appendix A). Additionally, the performance gain from *Ours w/o distill&weight* to Ours w/o weight shows the effectiveness of our distillation with pseudo-complete annotation (Section 3.2), which allows the NER model being trained with the complete tag set during local training. Similarily, the the performance gain from Ours w/o weight to *Ours* validates the usefullness of our proposed instance weight mechanism. Both of these techniques contribute to the superior performance of our trained NER model. ## 5.3 Experiments With The Clinic Datasets As mentioned in Section 1 and 5.2, the evaluation of our NER model requires testing data of different domains with complete tag sets. However, existing public clinic datasets are generally created with different annotation schemes. For example, datasets may be annotated with different tags sets (Beryozkin et al., 2019; Karimi et al., 2015), and even the same entity type can have various definitions in different datasets (Karimi et al., 2015). Such a lack of consistent annotations for clinic data of different domains poses challenges to the evaluation of our considered setting. Broadly speaking, this also add to the difficulty in studying general transfer learning problems with clinic NER. For instance, the classic domain adaptation (Long et al., 2015) generally involves transferring knowledge from a labeled source domain to an unlabeled target domain. The resulting model is evaluated with testing data of the target domain, annotated with the same classes/entity types as in the source domain, *i.e.*, requiring consistent annotation for data of the source and target domain, which is hardly fullfilled with public datasets when dealing with clinic NER. To solve this problem, we take three clinic datasets: CADEC (Karimi et al., 2015), ADE (Gurulingappa et al., 2012) and SMM4H (Weissenbacher et al., 2019), which contains text from three distinct text domains, *i.e.*, formal costumer report, medical case report and casual tweets, respectively. We provide some samples of the three dataset in the Supplymentary data. These datasets are originally annotated with different tag sets. To have consistent annotation across domains. We re-annotate ADE and SMM4H with the tag sets defined in CADEC (with the largest tag set). As | Method | nw | tc | wb | bn | bc | mz | Avg | |-------------------------|-------|-------|-------|-------|-------|-------|-------| | (Ganin et al., 2016) | 58.75 | 56.95 | 57.31 | 57.57 | 58.38 | 58.15 | 57.85 | | (Peng et al., 2019) | 61.32 | 58.32 | 59.64 | 60.10 | 59.75 | 60.09 | 59.87 | | (Hong et al., 2021) | 59.27 | 55.27 | 55.86 | 58.16 | 56.79 | 57.33 | 57.11 | | Ours w/o distill&weight | 45.10 | 52.15 | 49.42 | 46.52 | 47.55 | 46.71 | 47.91 | | Ours w/o weight | 59.76 | 55.09 | 52.66 | 58.47 | 57.00 | 58.42 | 56.90 | | Ours | 61.67 | 59.92 | 59.52 | 62.41 | 60.57 | 63.22 | 61.22 | (Ganin et al., 2016) 66.89 57.26 60.70 61.62 (Peng et al., 2019) 63.77 55.38 58.85 59.33 (Hong et al., 2021) 67.49 57.56 59.02 61.36 Ours w/o distill&weight 41.92 41.68 45.81 43.14 Method ADE CADEC SMM4H Avg Ours w/o weight 66.80 54.91 60.29 60.67 Ours 69.25 55.68 62.22 **62.38** a result, the three datasets are consistently annotated with the same tag set of 5 entity types, T = {Drug, ADE, Disease, Finding, *Symptom*}, as defined in (Karimi et al., 2015). 
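To make the cross-dataset consistency concrete, the sketch below shows one possible span-level record format shared by the three re-annotated datasets; the field names, the example sentence, and its spans are our own illustration and may differ from the released files.

```python
import json

COMPLETE_TAG_SET = {"Drug", "ADE", "Disease", "Finding", "Symptom"}  # CADEC scheme

text = "Quetiapine gave me terrible headaches."
drug, ade = "Quetiapine", "headaches"
record = {
    "dataset": "SMM4H",  # one of CADEC, ADE, SMM4H (the three text domains in Section 5.3)
    "text": text,
    "entities": [
        {"start": text.index(drug), "end": text.index(drug) + len(drug), "type": "Drug"},
        {"start": text.index(ade), "end": text.index(ade) + len(ade), "type": "ADE"},
    ],
}
assert all(e["type"] in COMPLETE_TAG_SET for e in record["entities"])
print(json.dumps(record, indent=2, ensure_ascii=False))
```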
In Appendix D, we also elaborate our annotation procedure and dataset statistics. In simulating our setting of federated domain adaptation with the above datasets, we set the number of local platform K = 3. Each platform holds text data of a different domain/dataset. Unlike OntoNote 5.0, we consider a more flexible and practical scenario that allows overlapping among tags set of different platforms. Please refer to Appendix D on the tag sets of annotation for each local platform in experiments. Table 2 shows the results of federated domain adaptation with our clinic datasets. Our method has the highest F1 score averaged over the three considered datasets/domains. Among the three client datasets, CADEC is larger and more diverse than ADE and SMM4H. Thus, CADEC may contain samples that are quite different from those in ADE and SMM4H, and knowledge learnt with such samples may not be transferable for the training of ADE and SMM4H. From our weighting mechanism (10), such samples can be downweighted during training to facilitate knowledge transfer across platforms. Since such downweighted samples may be important for the local training with CADEC, the improvement for CADEC with our weighting mechanism is slightly smaller than that on the other two clinic datasets. However, we should note that our proposed method can consistently provide improvement over the ablations for different datasets. Table 2 also shows that our annotations for ADE and SMM4H are meaningful, and can be leveraged for the training of existing advanced NER model (Chen et al., 2022). To faciliate future research, we have released our annotated clinic datasets†. ## 5.4 Hyperparameter Analysis Let η be the percentage of trainable parameters in the NER model, which is proportional to the communication cost during federated learning. In order to investigate the relation between the communication cost of federated learning and the model performance, we vary the value of η and plot η with the averaged F1 score on OntoNote 5.0 in Figure 3 (a). η is varied by changing the hidden dimension h of the projection on q, explained in Appendix B. Results in Figure 3 (a) shows that, when η is not large (η ≤ 10), the model performance can be improved with larger communication cost (larger η). However, when the value of η gets large enough (*e.g.*, η ≥ 10), the model may overfit to the domain specific information of each client during local training, hindering the further improvement of model performance. Figure 3 (b) shows the F1 score on OntoNote 5.0, with varing values of Eloc, *i.e.* epoches of local training per round. All the points share the same communication cost, with the same η and communication rounds for federated learning. The model performance generally improves with longer local training (larger Eloc). We should note that increasing Eloc corresponds to larger computation †https://github.com/RayWangWR/ClinicDataset cost in local training. The performance gets saturated when Eloc get too large, i.e., Eloc ≥ 2, which indicated that the local training may have reached convergence after 2 epoches. ## 6 Conclusion In this work, we study the problem of federated domain adaptation for clinic NER. We consider the practical setting with heterogeneous tag sets for different platforms of federated learning. To cope with the heterogeneous tag sets and facilitate knowledge transfer among different platforms, we propose distillation with pseudo-complete annotation and an instance weighting mechanism. 
In addition, we will release two re-annotated clinic datasets for our considered setting. In experiments, our trained NER model show superior performance in the considered setting. ## 7 Limitations Our work is base on the existing sequence-tosequence NER model, since its way of decoding has been shown effective for knowledge transfer between different classes (Chen et al., 2022). However, it might also be valuable to consider other token-classification-based or CRF-based (Sutton et al., 2012) NER models. Especially, it would be interesting to employ the existing CRF-based distillation method (Wang et al., 2020b) to cope with the problem of heterogeneous tag sets for NER. ## 8 Acknowledgements This research was supported by ONR N00014-18-12871-P00002-3. The student involved was also supported by Adobe gift research funding. We would like to thank the anonymous reviewers for their insightful comments. Moreover, we want to thank Billy I. Kim for his dedicated efforts in screening the annotations. ## References Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. 2020. How to backdoor federated learning. In *International Conference on Artificial Intelligence and Statistics*, pages 2938–2948. PMLR. Genady Beryozkin, Yoel Drori, Oren Gilon, Tzvika Hartman, and Idan Szpektor. 2019. A joint namedentity recognizer for heterogeneous tag-sets using a tag hierarchy. *arXiv preprint arXiv:1905.09135*. Xiang Chen, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, and Ningyu Zhang. 2022. Lightner: a lightweight tuning paradigm for low-resource ner via pluggable prompting. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2374–2387. Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021. Template-based named entity recognition using bart. *arXiv preprint arXiv:2106.01760*. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096– 2030. Suyu Ge, Fangzhao Wu, Chuhan Wu, Tao Qi, Yongfeng Huang, and Xing Xie. 2020. Fedner: Privacypreserving medical named entity recognition with federated learning. *arXiv preprint arXiv:2003.09288*. Nathan Greenberg, Trapit Bansal, Patrick Verga, and Andrew McCallum. 2018. Marginal likelihood training of bilstm-crf for biomedical named entity recognition from disjoint label sets. In *Proceedings of the 2018* conference on empirical methods in natural language processing, pages 2824–2829. Harsha Gurulingappa, Roman Klinger, Martin Hofmann-Apitius, and Juliane Fluck. 2010. An empirical evaluation of resources for the identification of diseases and adverse effects in biomedical literature. In 2nd Workshop on Building and evaluating resources for biomedical text mining (7th edition of the Language Resources and Evaluation Conference), pages 15–22. Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2012. Development of a benchmark corpus to support the automatic extraction of drugrelated adverse effects from medical case reports. Journal of biomedical informatics, 45(5):885–892. Dhurgham Hassan Mahlool and Mohammed Hamzah Abed. 2022. A comprehensive survey on federated learning: Concept and applications. arXiv e-prints, pages arXiv–2201. Junyuan Hong, Zhuangdi Zhu, Shuyang Yu, Zhangyang Wang, Hiroko H Dodge, and Jiayu Zhou. 2021. 
Federated adversarial debiasing for fair and transferable representations. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 617–627. Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73–81. Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Qinbin Li, Zeyi Wen, Zhaomin Wu, Sixu Hu, Naibo Wang, Yuan Li, Xu Liu, and Bingsheng He. 2021. A survey on federated learning systems: vision, hype and reality for data privacy and protection. *IEEE* Transactions on Knowledge and Data Engineering. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. 2020. Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems, 2:429–450. Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. 2015. Learning transferable features with deep adaptation networks. In *International conference on machine learning*, pages 97–105. PMLR. Ying Luo, Fengshun Xiao, and Hai Zhao. 2020. Hierarchical contextualized representation for named entity recognition. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pages 8441–8448. Xingchao Peng, Zijun Huang, Yizhe Zhu, and Kate Saenko. 2019. Federated adversarial domain adaptation. *arXiv preprint arXiv:1911.02054*. Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In *2017 IEEE symposium on security and privacy (SP)*, pages 3–18. IEEE. Charles Sutton, Andrew McCallum, et al. 2012. An introduction to conditional random fields. Foundations and Trends® in Machine Learning, 4(4):267–373. Hongyi Wang, Mikhail Yurochkin, Yuekai Sun, Dimitris Papailiopoulos, and Yasaman Khazaeni. 2020a. Federated learning with matched averaging. *arXiv* preprint arXiv:2002.06440. Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Fei Huang, and Kewei Tu. 2020b. Structure-level knowledge distillation for multilingual sequence labeling. arXiv preprint arXiv:2004.03846. Davy Weissenbacher, Abeed Sarker, Arjun Magge, Ashlynn Daughton, Karen O'Connor, Michael Paul, and Graciela Gonzalez. 2019. Overview of the fourth social media mining for health (smm4h) shared tasks at acl 2019. In Proceedings of the fourth social media mining for health applications (\# SMM4H) workshop & shared task, pages 21–30. Chuhan Wu, Fangzhao Wu, Ruixuan Liu, Lingjuan Lyu, Yongfeng Huang, and Xing Xie. 2021. Fedkd: Communication efficient federated learning via knowledge distillation. *arXiv preprint arXiv:2108.13323*. Chun-Han Yao, Boqing Gong, Hang Qi, Yin Cui, Yukun Zhu, and Ming-Hsuan Yang. 2022. Federated multitarget domain adaptation. In *Proceedings of the* IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1424–1433. 
Hanyu Zhao, Sha Yuan, Niantao Xie, Jiahong Leng, and Guoqiang Wang. 2021. A federated adversarial learning method for biomedical named entity recognition. In *2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)*, pages 2962–2969. IEEE. ## A Implementation Of Baselines Below we talk about our considered baselines. (Ganin et al., 2016): It aligns the features of different domains via adversarial matching, using a domain discriminator. We add a K way domain discriminator, on the hidden states of every layer in the the encoder of our model NER model. The discriminator will try to classify the domain from which the data of the hidden states is generated. (Peng et al., 2019): In addition to adversarial matching with a discriminator, (Peng et al., 2019) also consider enhancing cross-domain generalization via disentangling the task-specific information from the domain-specific information. Therefore, apart from using the discriminator, we also add the disentanglement loss on the last layer of the decoder of our NER model. (Hong et al., 2021): Similar to (Ganin et al., 2016) and (Peng et al., 2019), (Hong et al., 2021) also uses a K way discriminator for adversarial domain matching. The difference is that it adopt a squared adversarial loss during training, for fairness among local platforms. Additionally, it minimize the prediction entropy of image classification for unlabeled samples. In order to adapt it to our case where the prediction is a label sequence (instead of a single label), we minimize the prediction entropy on the tokens of Yˆ T\k i. As mentioned in Section 5.1, these approaches generally requires a domain discriminator that is Algorithm 2 Algorithm for Instance Weighting Input: w k i,t, i = 1, · · · , Nk, k = 1, · · · , K. Output: w k i,t+1, i = 1, · · · , Nk, k = 1, · · · , K. % Compute and save the gradients of the source and target loss. for k = 1, · · · , K do for i = 1, · · · , Nk do Compute the source and target loss, LT k i and LT\k i, respectively. Backpropagate for the gradients ∂LT k i ∂qand ∂LT\k i ∂q, while not updating the model. end for Save g src i = ∂LT k i ∂qlocally. Compute ∂LT\k ∂q =PNk i=1 ∂LT\k i ∂q, and upload to the server. end for % Update the weights with via cosine similarity between gradients. for k = 1 · · · , K do Compute g tgt =Pk′̸=k ∂(LT\k) ∂qon the server and download to platform k. for i = 1, · · · , Nk do Compute the cosine similarity between g src i and g tgt on platform k. Update w k i,t to w k i,t+1 according to (12). end for end for trained and communicated along with the model, increasing the communication cost. We use the same K way discriminators for all the baselines. For each layer of BART encoder in LightNER, we add a K way discriminator with a single linear layer. For these discriminators of each layer, the only parameter is a matrix of size K × d, with d being the hidden dimension of the BART encoder. Communication cost: We quantify it as the number of trainable parameters involved in the model. Since the BART encoder has 12 layers, the communication cost of the discriminators is 12 × K × d, which is 72 × d for OntoNote 5.0 and 36 × d for the clinic datasets. Comparitively, the communication cost for updating our instance weighting is Nq×d, *i.e.*, 10×d, since we have Nq = 10. Therefore, our instance weighting has less communication cost than the discriminators. ## B Details Of Prompt Implementation We following (Chen et al., 2022) in implementing the prompt in Section 3. 
Generally speaking, (Chen et al., 2022) insert an array of key embeddings and value embeddings into the self-attention module of each transformer layer in BART (Lewis et al., 2019). The inserted key embeddings and value embeddings are denoted as ΦK ∈ R Nq×d and ΦV ∈ R Nq×d, respectively. Let Xl be the input of a transformer layer in BART. The selfattention module first projects Xlinto embeddings of the key (Kl), query (Ql) and value (V l), $$\begin{array}{l}\mathbf{K}^{l}=\mathbf{X}^{l}\mathbf{W}^{K},\mathbf{Q}^{l}=\mathbf{X}^{l}\mathbf{W}^{Q},\mathbf{V}^{l}=\mathbf{X}^{l}\mathbf{W}^{V}\\ \end{array}\tag{13}$$ where WK,WQ,WV ∈ R d×dare the project matrices. The self-attention output with inserted ΦK and ΦV can be computed as, $$output^{l}=softmax(\frac{\mathbf{Q}^{l}[\mathbf{K}^{l};\mathbf{\Phi}_{K}]_{r}^{\mathsf{T}}}{d})[\mathbf{V}^{l};\mathbf{\Phi}_{V}]_{r}\tag{14}$$ where *output*l denote the output from selfattention. [; ]r denotes row concatenation. ΦK and ΦV are projected from the prompt q ∈ R Nq×d in Section 3, $$[\Phi_{K};\Phi_{V}]_{c}=W_{2}^{l}T a n h(W_{1}^{l}q)\qquad(15)$$ where [; ]c denotes column concatenation. *T anh* is the tangent activation. Wl 1 ∈ R d×hand Wl 1 ∈ R h×2dare two trainable linear projections for a transformer layer. h is the hidden dimension, controlling the size of trainable parameters. ## C Experiment Details In the experiments, we show results with Nq = 10, d = 768 and h = 400. With such configuration, the trainable parameters (those need to be communicated) only takes up 7.04% (η = 7.04) of the model size, significantly reducing the communication cost compared to finetuning the full model. The model is locally trained for 1 epoch before being upload for aggregation, i.e., Eloc = 1 (Section 2.2), and train with 25 rounds of communication. We fix the pretrained BART parameters in LightNER, only training and communicating the trainable parameters for federated learning. Our model is trained with learning rate 3e-5 and batch size 8. We empirically set the momentum value α = 0.9. We train with a single GPU with pytorch 1.7.0 and python 3.8. For the weights of aggregation in equation (1), {mk} K k=1, we initially tried with FedAvg 7459 | Dataset | # Sentences | # Drug | # ADE | # Disease | # Finding | # Symptom | |------------------------------------|---------------|----------|---------|-------------|-------------|-------------| | ADE (Gurulingappa et al., 2012) | 4258 | 4077 | 4652 | 1169 | 89 | 126 | | SMM4H (Weissenbacher et al., 2019) | 1226 | 1471 | 1414 | 135 | 26 | 20 | ![11_image_0.png](11_image_0.png) (Wang et al., 2020a) that set mk as proportional to the size of the dateset in its corresponding domain. However, we found this will lead to inferior results for platforms whose dataset is small in size. Therefore, we set the weights {mk} K k=1 as uniform. ## D The Clinic Datasets The labeling procedure: We annotate the text corpous of ADE (Gurulingappa et al., 2012) and SMM4H (Weissenbacher et al., 2019) with a tag set of 5 entity types, *i.e.*, T = {Drug, ADE, Disease, Finding, *Symptom*}, following the definition as in the original paper of CADEC (Karimi et al., 2015). Following (Gurulingappa et al., 2010),we have two annotators that can discuss on the disagreement. We split the text of ADE (Gurulingappa et al., 2012) and SMM4H (Weissenbacher et al., 2019) into batches of 100 sentences. The annotators will work on streaming of batches, and annotating each batch takes about an hour. 
To ensure the quality of the resulting annotation, we also include a medical student from a clinical institution, in addition to the two annotators, to decide on samples for which the two annotators are not confident. The medical student and the two annotators are all student volunteers, who also contribute to the methodology and experiments of this research project and are credited by inclusion in the paper's author list. Table 3 shows the statistics of our annotations, i.e., the number of sentences and identified entities. We have also removed some of the duplicated sentences in SMM4H.

Simulating heterogeneous tag sets for different platforms: As in Section 5.3, our experiments with the clinic datasets consider three platforms for federated learning. During the experiments, we specify different sets of annotated entity types (T^k) for different platforms to simulate local training with heterogeneous tag sets. For instance, if T^k is specified as annotated on platform k, then annotations of T\k are ignored on this platform. {T^k}_{k=1}^K are specified such that each platform contains at least one annotated entity type whose annotations are not available on the other platforms. Formally, for each platform k, there exists at least one s ∈ T^k such that s ∉ T^{k'} for all k' ≠ k. In this way, we simulate a practical scenario in which each platform makes a unique contribution to the federated learning system, by enabling the global model to recognize at least one entity type whose annotations are only available on this platform. Such a setting is based on the consideration that including more platforms in the federated learning system may increase the risk of backdoor attacks (Bagdasaryan et al., 2020) and privacy leakage (Li et al., 2021). Therefore, it is realistic that a platform is allowed to participate in federated learning only if it can make unique contributions to the global model, *i.e.*, enabling the global model to recognize entity types that are not annotated on other platforms. Additionally, since there are 3 platforms, we allow each entity type to be annotated on at most 2 platforms. This is because it is less necessary for knowledge of a certain entity type to be transferred across platforms if all three platforms already have its annotations.

As in Section 5.3, we experiment with 3 platforms (K = 3) using the clinic datasets, with the text of each platform coming from a distinct clinic dataset. In determining T^k for each platform, we first randomly (uniformly) sample three different entity types from T (e.g., Drug, ADE, and *Disease*), one for each platform. Each of the sampled entity types is specified as uniquely annotated on its associated platform. Then, for each of the remaining entity types, denoted as s (s ∈ {Finding, *Symptom*} in this example), we randomly decide whether it is annotated on n ∈ {1, 2} platforms, with a Bernoulli distribution assigning probability 0.5 to each case. Then, we randomly (uniformly) sample n platforms, and assume s is annotated on these platforms. We randomly sample 5 sets of {T^k}_{k=1}^K with the above process. Since the three clinic datasets do not come with training and testing splits, we follow Ge et al. (2020) and randomly sample 10% of the data in each dataset for testing, leaving the rest for local training. We create 3 random splits per sampled {T^k}_{k=1}^K, and run the experiment with each split and tag-set assignment.
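As a concrete illustration, the tag-set sampling described above could be sketched as follows. This is a minimal sketch assuming uniform sampling with Python's `random` module; the function and variable names are illustrative and not taken from the released code.

```python
import random

ENTITY_TYPES = ["Drug", "ADE", "Disease", "Finding", "Symptom"]
K = 3  # number of local platforms

def sample_tag_sets(entity_types=ENTITY_TYPES, k=K, seed=None):
    """Sample one assignment {T^k} of annotated tag sets, one per platform.

    Each platform gets exactly one entity type that is unique to it; every
    remaining type is assigned to 1 or 2 platforms (Bernoulli with p=0.5).
    """
    rng = random.Random(seed)
    types = list(entity_types)

    # Step 1: one uniquely annotated entity type per platform.
    unique = rng.sample(types, k)
    tag_sets = [{u} for u in unique]

    # Step 2: distribute each remaining type over 1 or 2 platforms.
    for s in types:
        if s in unique:
            continue
        n = 1 if rng.random() < 0.5 else 2
        for platform in rng.sample(range(k), n):
            tag_sets[platform].add(s)
    return tag_sets

if __name__ == "__main__":
    # Five sampled assignments, mirroring the five sampled {T^k} in the experiments.
    for i in range(5):
        print(sample_tag_sets(seed=i))
```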
Following Ge et al. (2020) and Chen et al. (2022), we report the average F1 score over all experiment runs.
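For concreteness, a minimal sketch of the per-instance weighting update in Algorithm 2 (Appendix A) follows. It assumes gradients are taken with respect to the shared prompt parameters q, and that the update in Equation (12) moves each weight toward the (clamped) cosine similarity with a momentum term α; the exact form of Equation (12), the use of α here, and all names are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def grad_wrt_prompt(loss, prompt):
    """Flattened gradient of a scalar loss w.r.t. the prompt parameters q."""
    (g,) = torch.autograd.grad(loss, prompt, retain_graph=True)
    return g.reshape(-1)

def update_instance_weights(weights, src_losses, target_loss, prompt,
                            g_tgt_other, alpha=0.9):
    """One round of instance weighting on platform k (cf. Algorithm 2).

    weights     : 1-D tensor of current per-sample weights w^k_{i,t}
    src_losses  : list of per-sample source losses L^{T^k}_i
    target_loss : summed target loss L^{T\\k} on this platform (its gradient is uploaded)
    prompt      : shared prompt parameters q with requires_grad=True
    g_tgt_other : aggregated target-loss gradient downloaded from the other platforms
    """
    # Local target-loss gradient, to be uploaded to the server for aggregation.
    g_tgt_local = grad_wrt_prompt(target_loss, prompt)

    new_weights = torch.empty_like(weights)
    for i, loss_i in enumerate(src_losses):
        g_src_i = grad_wrt_prompt(loss_i, prompt)
        sim = F.cosine_similarity(g_src_i, g_tgt_other, dim=0)
        # Assumed form of Eq. (12): momentum update toward the clamped similarity,
        # so samples whose source gradient conflicts with the aggregated target
        # gradient are downweighted.
        new_weights[i] = alpha * weights[i] + (1 - alpha) * sim.clamp(min=0.0)
    return new_weights, g_tgt_local
```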
jiang-etal-2023-interpreting
Interpreting Sentiment Composition with Latent Semantic Tree
https://aclanthology.org/2023.findings-acl.471
As the key to sentiment analysis, sentiment composition considers the classification of a constituent via classifications of its contained sub-constituents and rules operated on them. Such compositionality has been widely studied previously in the form of hierarchical trees, including untagged and sentiment ones, which are intrinsically suboptimal in our view. To address this, we propose the semantic tree, a new tree form capable of interpreting sentiment composition in a principled way. A semantic tree is a derivation of a context-free grammar (CFG) describing the specific composition rules over different semantic roles, which is designed carefully following previous linguistic conclusions. However, the semantic tree is a latent variable, since regular datasets contain no annotation for it. Thus, in our method, it is marginalized out via the inside algorithm and learned to optimize the classification performance. Quantitative and qualitative results demonstrate that our method not only achieves better or competitive results compared to baselines in both regular and domain-adaptation classification settings, but also generates plausible tree explanations.
# Interpreting Sentiment Composition With Latent Semantic Tree Zhongtao Jiang1,2, Yuanzhe Zhang1,2, Cao Liu3, Jiansong Chen3, Jun Zhao1,2**, Kang Liu**1,2 1The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences 2School of Artificial Intelligence, University of Chinese Academy of Sciences 3Meituan {zhongtao.jiang, yzzhang, jzhao, kliu}@nlpr.ia.ac.cn {liucao, chenjiansong}@meituan.com ## Abstract As the key to sentiment analysis, sentiment composition considers the classification of a constituent via classifications of its contained sub-constituents and rules operated on them. Such compositionality has been widely studied previously in the form of hierarchical trees including untagged and sentiment ones, which are intrinsically suboptimal in our view. To address this, we propose semantic tree, a new tree form capable of interpreting the sentiment composition in a principled way. Semantic tree is a derivation of a context-free grammar (CFG) describing the specific composition rules on difference semantic roles, which is designed carefully following previous linguistic conclusions. However, semantic tree is a latent variable since there is no its annotation in regular datasets. Thus, in our method, it is marginalized out via inside algorithm and learned to optimize the classification performance. Quantitative and qualitative results demonstrate that our method not only achieves better or competitive results compared to baselines in the setting of regular and domain adaptation classification, and also generates plausible tree explanations1. ## 1 Introduction Sentiment classification is a task to determine the sentiment polarity of a sentence (Yadav and Vishwakarma, 2020; Dang et al., 2020). Current researches on this task are gradually shifting from improving model performance to interpretability. As the most known stream, *feature-based explanation* tries to figure out which input feature, say word, has the most influence on the prediction, in the form of the salience score or rationale, and in both self and post-hoc settings (Li et al., 2016; Ribeiro et al., 2016; Kim et al., 2020; Lei et al., 2016; Bastings et al., 2019; De Cao et al., 2020). However, this task requires sentiment composition 1Data and code implementation is available at https:// github.com/changmenseng/semantic_tree. ![0_image_0.png](0_image_0.png) Figure 1: Different tree structures for explanining sentiment composition, where semantic tree can explain the sentiment composition in the inverted-V structure, as shown in the box of (c). (Polanyi and Zaenen, 2006), which is beyond the ability of these feature-based explanations. To be concrete, sentiment composition considers the classification of a constituent via 1) classifications of its contained sub-constituents and 2) rules operated on them (Moilanen and Pulman, 2007), as shown in Figure 1(c). Thus, the classification of a sentence is decomposed into hierarchical sentiment compositions of its sub-constituents. Such compositionality has been widely studied previously in the form of hierarchical trees including untagged tree and *sentiment tree*, as shown in Figure 1. Untagged tree is usually modeled as a latent variable and learned via the task objective (Yogatama et al., 2017; Maillard and Clark, 2018; Choi et al., 2018; Havrylov et al., 2019; Chowdhury and Caragea, 2021). 
Then, a TreeLSTM (Tai et al., 2015; Zhu et al., 2015) is adopted to encode the sentence following the hierarchy for the final prediction. However, untagged tree is limited because it can only explain the hierarchy but not give labels on all nodes. Sentiment tree takes a further step that every node within has a polarity score or label. As the most representative example, Socher et al. (2013) creates Stanford Sentiment Treebank (SST) that has sentiment tree annotation. Sentiment tree also appears as a post-hoc explanation giving hierarchical attribution scores (Chen et al., 2020; Zhang et al., 2020). However, in fact, not every constituent is sentimental, some of which are somewhat more functional. For example, while a negator "not" is sentimentally neural, it can functionally flip the sentiment of a constituent. Sentiment labels are therefore not sufficient to explain such phenomenon. To overcome those defects, we propose *semantic tree*, a new tree form capable of explicitly and principally interpreting the sentiment composition. In the semantic tree, each node is assigned a label in *semantic labels* including sentimental and functional ones, and each local inverted-V structure reveals the rule composing adjacent constituents, as shown in Figure 1(c). Inspired by Dong et al. (2015), formally, the semantic tree is a derivation of a context-free grammar (CFG) (Chomsky, 1956) defined by non-terminal symbols (semantic labels), terminal symbols (word vocabulary), rules, and root symbols (*positive* and *negative*). The challenge of designing such grammar lies in designing semantic labels and rules, which requires linguistic knowledge of sentiment composition. To address this, we follow previous work about sentiment composition (Polanyi and Zaenen, 2006; Moilanen and Pulman, 2007; Taboada et al., 2011) to carefully design 11 semantic labels and 62 rules. We believe the grammar could cover most cases in sentiment analysis, as shown in Table 1. We aim to learn a model capable of extracting the semantic tree using data consisting of only sentence-label pairs, which is challenging because the semantic tree is latent without full annotation. To address this, we first build a semantic tree parser, and then marginalize out the semantic tree to induce a sentiment classifier to conduct supervised training on such data. Fortunately, this marginalization over the exponential tree space is computationally tractable resorting to the inside algorithm (Baker, 1979). This process could be abstracted as a module, namely sentiment composition module (SCM), which computes the compatibility of a prediction in the view of sentiment composition but not only pattern recognition. Accompanying an arbitrary neural text encoder with the proposed SCM, we can build a self-explanatory model that can not only predict the sentiment label but also generate a semantic tree as the explanation. To learn more plausible semantic trees, we further propose two extra objectives to guide the preterminals in the semantic tree, and to make the tree structure more syntactically meaningful. We conduct experiments on three datasets including MR (Pang and Lee, 2005), SST2 (Socher et al., 2013) and Amazon (Blitzer et al., 2007) in the setting of regular and cross-domain classification. Quantitative and qualitative results demonstrate that our method not only achieves better or competitive results compared to baselines, and also generates plausible tree explanations. 
## 2 Method 2.1 Problem Formalization The dataset is a collection of tuples {(x n, yn)} N n=1, each of which contains a sentence x ∈ V∗and a sentiment label y ∈ Y, where V is the word vocabulary and Y = {*P, N*} is the label set consisting of *positive* (P) and *negative* (N). The task goal is to learn a classifier p(y|x). Since we hope to generate a semantic tree of the input sentence where the sentiment label is its root label, as shown in Figure 1(c), the objective classifier p(y|x) is not directly parameterized by a discriminative model as usual. Instead, we define the classifier as the marginalization of a parser over the latent semantic tree, in which the parser could fulfill this purpose. Concretely, let Tx(y) be the set of all semantic trees rooted y. Naturally, we have: $$p(y|x)=\sum_{t\in{\mathcal{T}}_{x}(y)}p(t|x)\qquad\qquad{\mathrm{(1)}}$$ where p(t|x) is a semantic tree parser that accepts a sentence and generates a semantic tree. We can conduct supervised learning when the classifier p(y|x) is obtained, where the parser p(t|x) is implicitly learned in this process. After training, the model can do the prediction via the induced classifier p(y|x), and generate the semantic tree to real the sentiment composition process of it. The very first issue before solving the summation in Equation (1) is to formalize the semantic tree. For simplicity, we can assume that the label of a constituent is determined immediately by its sub-constituents, regardless of the surrounding context. Therefore, the semantic tree is viewed as a derivation of a CFG that defines specific semantic labels and composition rules. Now, two challenges remain: 1) How to properly define the CFG behind the semantic tree? 2) How to model the parser p(t|x) and efficiently compute the classifier p(y|x)? We shall elaborate these two problems in Section 2.2 and Section 2.3, respectively. ## 2.2 Sentiment Composition Grammar The proposed semantic tree is described by a context-free grammar G consisting a quadruple including the non-terminal symbol set N (semantic label set), the terminal symbol set V (word vocabulary), the composition rule set R and the root symbol set Y (P and N). While V and Y are obvious, the design of semantic labels (N ) and composition rules (R) requires expert knowledge. Fortunately, previous works have concluded different types of compositions exhaustively (Polanyi and Zaenen, 2006; Moilanen and Pulman, 2007; Taboada et al., 2011), inspiring us to design 11 semantic labels and 62 composition rules. We call the proposed grammar as a sentiment composition grammar (SCG). ## Semantic Labels The defined 11 semantic labels include two types as follows: Sentimental labels Including negative N, positive P, neutral O. Functional labels Including negator D, irrealis blocker I, priority riser +, priority reducer −, high negative N +, high positive P +, low negative N −, low positive P−. We shall explain these labels together with composition rules later. ## Composition Rules Formally, the composition rule is in the form of β → A (A ∈ N , β ∈ (*N ∪V*)∗), which determines the label of a constituent given its sub-constituents2. We include three types of rules. The first one is binary rule in the form of BC → A (A, B, C ∈ N ). Binary rules are defined following common binary compositions, which mainly includes four types according to previous works and our observations. We now introduce each composition and its corresponding rules3. 
Polarity propagation Propagating the polarity: $$N\;O/N\to N,\;P\;O/P\to P,\;O\;O\to O\;\;(2)$$ Negation Flipping the non-neutral polarity (P/N) via a negator (D): $$D\;P\to N,D\;N\to P\qquad\qquad(3)$$ Conflict Resolution Resolving the conflict of nonneutral polarity constituents (P/N) by ranking their priorities based on priority modifiers (+/−). As a typical example, Figure 1 shows a contrastive conjunction (Socher et al., 2013) structure, which the first and the second half of the sentence have opposite polarities. The connector "but" is a priority riser (+) that rises the priority of the second half sentence, which dominates the entire sentence priority. Similarly, there also exist priority reducer (−) such as "although". Thus, rules related to this composition includes those for priority modification: $$+\;P\to P^{+},-\;P\to P^{-}$$ $$\left(4\right)$$ − (4) and those for resolution: $$N\;P^{+}\to P,N^{-}\;P^{+}\to P$$ $$({\boldsymbol{\mathit{S}}})$$ + → P (5) We don't allow the polarity with priority (N +/N −/P +/P −) without a explicit modifier +/−, which a single word with non-neutral polarity can't have priority. Irrealis blocking Neutralizing the non-neutral polarity (P/N) by an irrealis blocker (I): $$I\;P/N\to O$$ I P/N → O (6) The blocker such as modal "would" or connector "if" can set up a context about possibility of some polarities not necessarily expressed by the author. As a result, a literal polarity is canceled. The full binary rule list is shown in Table 6 in Appendix A 4. We also present examples of those 4Readers might ask that why explicit triggers are involved in some rules, for example, we can just define a general "glue" rule P N → P/N to handle conflict resolution instead of defining the modifier (+/−) to trigger the priority modification, as done by Dong et al. (2015). This is because when only the root label annotation is available, this general rule is easily abused so that the semantic tree degenerates to the sentiment tree as a consequence. The optimal binary rule should satisfy that the output label is uniquely determined given the input ones, requiring us to attribute each label to the specific composition as detailed as possible. ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) (d) Irrealis blocking. Figure 2: Examples of different binary compositions. | Composition | SST2 | MR | |----------------------|--------|------| | Polarity propagation | 97 | 96 | | Negation | 18 | 18 | | Conflict resolution | 18 | 20 | | Irrealis blocking | 6 | 9 | | None of the above | 3 | 4 | compositions Figure 2. Those compositions appears very commonly. To illustrate this, we randomly sample 100 examples in SST2 and MR and count occurrences of above compositions, where 97 and 98 examples in SST2 and MR can be explained by the above compositions. Thus, we believe our rules can cover most cases. The second type is terminal-unary rule defining the legal preterminals of single words, which is in the form of ω → A (A ∈ Npret = {N, P, O, D, I, +, −}, ω ∈ V). As introduced, A can't be the polarity priority (N +/N −/P +/P −). We further define the preterminal-unary rule as the third type, including rules A → A (A ∈ {*P, N, O, D, I,* +, −}) and D/I/ + /− → O. Those rules can only and must appear on the second layer of the semantic tree, which is designed to cancel the function of misrecognized function constituents, leading to better performance in our experiments. 
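To make the grammar concrete, the following minimal sketch encodes the semantic labels and the subset of binary and preterminal-unary rules listed above as a lookup table. The full 62-rule set is given in Table 6 in Appendix A; the data-structure choice and names here are illustrative, not the paper's implementation.

```python
# Semantic labels: sentimental (N, P, O), functional (D, I, "+", "-"),
# and polarities carrying a priority (N+, P+, N-, P-).
SENTIMENT_LABELS = ["N", "P", "O"]
FUNCTIONAL_LABELS = ["D", "I", "+", "-"]
PRIORITY_LABELS = ["N+", "P+", "N-", "P-"]

# A subset of the 62 binary rules, keyed by (left child, right child) -> parent.
BINARY_RULES = {
    # Polarity propagation: N O/N -> N, P O/P -> P, O O -> O
    ("N", "O"): "N", ("N", "N"): "N",
    ("P", "O"): "P", ("P", "P"): "P",
    ("O", "O"): "O",
    # Negation: D P -> N, D N -> P
    ("D", "P"): "N", ("D", "N"): "P",
    # Priority modification: + P -> P+, - P -> P-
    ("+", "P"): "P+", ("-", "P"): "P-",
    # Conflict resolution: N P+ -> P, N- P+ -> P
    ("N", "P+"): "P", ("N-", "P+"): "P",
    # Irrealis blocking: I P/N -> O
    ("I", "P"): "O", ("I", "N"): "O",
}

# Preterminal-unary rules as (child, parent): keep the label, or cancel a
# misrecognized functional label by mapping it to neutral O.
PRETERMINAL_UNARY_RULES = (
    [(a, a) for a in ["P", "N", "O", "D", "I", "+", "-"]]
    + [(a, "O") for a in ["D", "I", "+", "-"]]
)

def compose(left, right):
    """Parent label for two adjacent constituents, or None if no rule applies."""
    return BINARY_RULES.get((left, right))

# Example: compose("D", "N") == "P", i.e. a negator over a negative constituent
# ("not" + "bad") composes to positive.
```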
## 2.3 Sentiment Composition Module We now answer the second question: How to model the parser p(t|x) and compute the classifier p(y|x). We show that this process naturally lead to the sentiment composition module. ## Semantic Tree Parser First, we represent the semantic tree t of a sentence x = (x0, · · · , xT −1) by the set of anchored rules (Eisner, 2016) consisting of a rule and its location indices: $$\begin{array}{c}{{t=\{(B_{i k}C_{k j}\to A_{i j})_{t}|1\leq t\leq T-1\}}}\\ {{\qquad\qquad\cup\{(B_{i}\to A_{i})_{t}|1\leq t\leq T\}}}\\ {{\qquad\qquad\cup\{(x_{i}\to A_{i})_{t}|1\leq t\leq T\}}}\end{array}\quad(7)$$ where Aij (0 ≤ *i < j < T*) is an anchored node suggesting a label A covering the constituent ranging from xito xj−1. Aiis short for Ai,i+1 which is an unary anchored node covering the word xi. Thus, BikCkj → Aij , Bi → Ai and xi → Ai represent the binary, preterminal-unary, and terminalunary anchroed rule, respectively. The semantic tree parser p(t|x) is defined by a Gibbs distribution on anchored rules in a tree (Finkel et al., 2008; Durrett and Klein, 2015): $$p(t|x)=\frac{1}{Z(x)}\prod_{a\in t}\phi(a)=\frac{1}{Z(x)}\exp\left(\sum_{a\in t}s(a)\right)\tag{8}$$ where Z(x) is the log-partition function for normalization. ϕ(a) > 0 is the potential function of the anchored rule a defined in the exponential form exp(s(a)), where s(a) is the score to rate how comfortable it is for a to appear in the tree. Scores for different types of anchored rules are defined as the sum of a few subscores rating the comfortableness of corresponding substructures. $$s(B_{i k}C_{k j}\to A_{i j})=$$ Here the scores of binary and pos-unary rules $$s(B_{ik}C_{kj}\to A_{ij})=$$ $$s_{\rm rule}(BC\to A)+s_{\rm label}(A,x_{ij})+s_{\rm span}(x_{ij})$$ $$s(B_{i}\to A_{i})=$$ $$s_{\rm rule}(B\to A)+s_{\rm label}(A,x_{i})+s_{\rm span}(x_{i})$$ $$s(x_{i}\to A_{i})=s_{\rm rule}(x_{i}\to A)\tag{9}$$ srule(BC → A) and srule(B → A) are scalar parameters. Other scores are modeled by neural networks: $$s_{\rm rule}(x_{i}\to A)={\bf w}_{\rm rule}^{A}\cdot{\bf h}_{i}^{\leq L}+b_{\rm rule}^{A}$$ $$s_{\rm label}(A,x_{ij})={\bf w}_{\rm label}^{A}\cdot{\bf h}_{ij}^{L}+b_{\rm label}^{A}\tag{10}$$ $$s_{\rm span}(x_{ij})={\bf w}_{\rm span}\cdot{\bf h}_{ij}^{L}+b_{\rm span}$$ where · is the vector dot product. w· · and b·· are learning parameters. h l ij is the phrase representation of the constituent xij in the l layer, which is computed by a text encoder m: $$\mathbf{h}_{0}^{0},\cdots,\mathbf{h}_{T-1}^{0},\cdots,\mathbf{h}_{0}^{L-1},\cdots,\mathbf{h}_{T-1}^{L-1}$$ $$=m(\mathbf{e}_{0},\cdots,\mathbf{e}_{T-1})\tag{11}$$ $$\mathbf{h}_{ij}^{l}=\frac{\sum_{t=i}^{j-1}\mathbf{h}_{t}^{l}}{j-i}$$ where eiis the word embedding of xi. Note that we compute slabel and sspan using top layer phrase representations, but compute srule using a lower layer one. This is because the recognition of the preterminal is easier than determining if this label is cancelled. Thus the simple phrase representation h ≤L ij is sufficient for the former, while the more "contextual" one h L ij is in favor by the latter. ## Inducing The Classifier From The Parser As shown in Equation (1), the classifier is induced by marginalizing over all the semantic trees of the input sentence, which can be efficient done by the inside algorithm. To illustrate this, we first let Tx(Aij ) and Tx(BikCkj → Aij ) be sets of subtrees of sentence x that are covered by the anchored node Aij and rule BikCkj → Aij , respectively. 
The inside algorithm defines the inside term αx(Aij ) = Pt∈Tx(Aij ) Qa∈t ϕ(a), which is the sum of the potentials of subtrees covered by Aij . The inside term is computed recursively in a bottom-up manner: $$\alpha_{x}(A_{i})=\phi(x_{i}\to A_{i})\sum_{B\to A\in{\cal R}}\phi(B_{i}\to A_{i})$$ $$\alpha_{x}(A_{ij})=$$ $$\sum_{B\subset\to A\in{\cal R}}\phi(B_{ik}C_{kj}\to A_{ij})\alpha_{x}(B_{ik})\alpha_{x}(C_{kj})$$ where αx(Ai) is the initial value of this recursion. Obvious, the time complexity of the inside algorithm is O(|R|T 3). It can be shown that the inside term of the root anchored node αx(A0T ), abbreviated as αx(A), equals to the unnormalized probability that the root of the semantic tree is y. Thus, we have: $$p(y=A|x)=\frac{\alpha_{x}(A)}{\sum_{B\in\mathcal{Y}}\alpha_{x}(B)}$$ $$=\frac{\exp(s_{\text{label}}(A,x)+s_{\text{SCM}}(A,x))}{\sum_{B\in\mathcal{Y}}\exp(s_{\text{label}}(B,x)+s_{\text{SCM}}(B,x))}$$ $$s_{\text{SCM}}(A,x)=\underset{\begin{subarray}{c}B\subset\to A\in\mathcal{R}\\ 0<k<T\end{subarray}}{\text{log}}\left(s_{\text{rule}}(BC\to A)\right.$$ $$\left.+\log\alpha_{x}(B_{0k})+\log\alpha_{x}(C_{kT})\right)\tag{13}$$ As seen, the logit in the softmax includes an extra score sSCM(*A, x*) as a complement to the regular one slabel(*A, x*), where the former and the latter can be understood as the accordance of assigning the label A by means of sentiment composition and pattern recognition, respectively. Thus, we call slabel and sSCM as the recognition module and the sentiment composition module, respectively. While the recognition module is only learned from the data, the sentiment composition module incorporates general and invariant human knowledge in the form of sentiment composition rules, which is more robust for domain adaptation, as we shall see in Section 4.1. The last issue is that the proposed SCM is intractable for long documents due to the cube time complexity over length. So for a document, we first cut it into sentences, and then compute their individual logits. Document logits are aggregated by attention on those sentence logits, where attention weights are computed by sentence representations. ## 2.4 Training & Testing Now we've obtained the induced classifier, we can apply supervised training by minimizing: $${\mathcal{L}}_{\mathrm{cls}}=-{\frac{1}{N}}\sum_{n=1}^{N}\log p(y^{n}|x^{n})\qquad{\mathrm{(14)}}$$ This objective might be enough for the classification, but not for a plausible semantic tree explanation. Cases in which a semantic tree can reach a right root label with wrong preterminals and improper structure do exist. For example, if we choose BERT (Devlin et al., 2019) as the encoder, the method might assign non-neutral polarity to [CLS], and recognize any other tokens as neutral polarity, since [CLS] representation is usually treated as the sentence representation. An effective way to improve the plausibility is to learn the explanation via more explicit annotations (Strout et al., 2019; Zhong et al., 2019), even if those annotations are weak or incomplete. Therefore, we additional introduce two objectives to regularize the tree. For the preterminal plausibility, we construct a lexicon to annotate the preterminal sequence of each sentence and conduct weakly-supervised learning on the annotation. As introduced, there are 7 preterminals in the proposed grammar, 3 sentimental and 5 functional. 
We utilize sentiwordnet (Baccianella et al., 2010) and stopwords in NLTK5 5https://www.nltk.org/ and spaCy6library to annotate non-neutral and neutral sentimental labels, respectively. For functional labels, we manually build a lexicon based on irrealis blockers and priority modifiers from Taboada et al. (2011), and negators in Loughran and McDonald (2011). The functional lexicon is shown in Table 7 in Appendix B. Let o n be the annotated preterminal sequence of the sentence x n, and S n be the set containing the indices of all annotated words. Then, we optimize the following conditional log-likelihood based on the terminal-unary score function in Equation (10): $$\mathcal{L}_{\rm pos}=-\frac{1}{\sum_{n}^{N}|\mathcal{S}^{n}|}\sum_{i\in\mathcal{S}^{n}}\log q(o_{i}^{n}|x^{n})\tag{15}$$ $$q(o_{i}|x)=\frac{\exp(s_{\rm rule}(x_{i}\to o_{i}))}{\sum_{A\in\mathcal{N}_{\rm pos}}\exp(s_{\rm rule}(x_{i}\to A))}$$ For the structural plausibility, we annotate the syntactical tree for each sentence through Berkeley parser (Kitaev and Klein, 2018; Kitaev et al., 2019), which is a SOTA parser based on T5 (Raffel et al., 2020) and trained on the Penn Treebank (PTB) (Taylor et al., 2003). We convert the tree to the form of left-branching chomsky normal form (CNF) (Chomsky, 1963), and omit non-terminal labels to obtain the tree skeleton. Our goal is to make the semantic tree structure resemble the annotated PTB tree structure. Given the annotated skeleton k n of the sentence x n, we minimize the conditional likelihood: $$\begin{array}{c}{{{\mathcal{L}}_{\mathrm{str}}=-\frac{1}{N}\sum_{n=1}^{N}\log r(k^{n}|x^{n})}}\\ {{r(k|x)=\frac{1}{Z^{\prime}(x)}\prod_{c\in k}\exp\left(\sum_{c\in k}s_{\mathrm{span}}(c)\right)}}\end{array}\tag{16}$$ where c is a span in the skeleton k. As seen, r(k|x) is defined by a Gibbs distribution with span score functions in Equation (10). The normalization term Z′(x) is also computed via the inside algorithm similar to Equation (12). The final objective is the linear combination of the above three objectives7: $${\mathcal{L}}=\omega_{\mathrm{cls}}{\mathcal{L}}_{\mathrm{cls}}+\omega_{\mathrm{pos}}{\mathcal{L}}_{\mathrm{pos}}+\omega_{\mathrm{str}}{\mathcal{L}}_{\mathrm{str}}$$ | Method | MR | SST2 | | |----------------------------------------------------------------|--------|--------|-------| | sentence | phrase | | | | Sequential models BiLSTM (1997) | 83.27 | 87.52 | 89.68 | | BERT (2019) | 87.65 | 92.25 | 93.52 | | Sentiment tree models MVRNN (2013) | - | - | 82.90 | | RNTN (2013) | - | - | 85.40 | | BiTreeLSTM (2017) | - | - | 90.30 | | RTCM (2019) | - | - | 90.30 | | TreeLSTM+WG (2019) | - | - | 89.70 | | TreeLSTM+LVG (2019) | - | - | 89.80 | | TreeLSTM+LVeG (2019) | - | - | 89.80 | | Untagged tree (by external parser) models MVRNN (2012) 79.00 - | - | | | | TreeLSTM (2015) | 78.70 | 88.00 | - | | (Liu et al., 2017a) | 81.90 | 87.80 | - | | (Liu et al., 2017b) | 81.70 | 87.80 | - | | (Kim et al., 2019) | 83.80 | - | 91.30 | | Latent untagged tree models RL-SPINN (2017) | - | - | 86.50 | | Gumbel-Tree (2018) | - | - | 90.70 | | (Havrylov et al., 2019) | - | - | 90.20 | | CRvNN (2021) | - | - | 88.30 | | Latent semantic tree models (Ours) BiLSTM+SCM 83.41 | 88.03 | 90.06 | | | BERT+SCM | 88.16 | 92.31 | 93.96 | | Table 2: Sentiment classification accuracy results. 
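For concreteness, the following is a minimal, unbatched sketch of the inside recursion in Equation (12) and the induced classifier in Equation (13). The potentials φ(·) = exp(s(·)) are assumed to be supplied by the scoring networks of Section 2.3; a practical implementation would run in log-space (logsumexp) and batched on GPU, as the paper's Equation (13) suggests, and the function names here are illustrative.

```python
from collections import defaultdict

def inside(length, unary_rules, binary_rules, phi_term, phi_preterm, phi_binary,
           root_labels=("P", "N")):
    """Marginalize over all semantic trees via the inside recursion (cf. Eq. (12)).

    unary_rules  : iterable of preterminal-unary rules (B, A), i.e. B_i -> A_i
    binary_rules : iterable of binary rules (B, C, A), i.e. B_ik C_kj -> A_ij
    phi_term(i, A), phi_preterm(i, B, A), phi_binary(i, k, j, B, C, A):
                   potentials exp(s(.)) for the three anchored-rule types
    Returns p(y | x) over the root labels Y = {P, N} (cf. Eq. (13)).
    """
    # alpha[(i, j)][A]: total potential of subtrees rooted A spanning x_i .. x_{j-1}.
    alpha = defaultdict(dict)

    # Width-1 spans: alpha(A_i) = phi(x_i -> A_i) * sum_{B -> A} phi(B_i -> A_i).
    for i in range(length):
        cell = alpha[(i, i + 1)]
        for B, A in unary_rules:
            cell[A] = cell.get(A, 0.0) + phi_term(i, A) * phi_preterm(i, B, A)

    # Wider spans, bottom-up over all split points k: O(|R| T^3) overall.
    for width in range(2, length + 1):
        for i in range(length - width + 1):
            j = i + width
            cell = alpha[(i, j)]
            for k in range(i + 1, j):
                left, right = alpha[(i, k)], alpha[(k, j)]
                for B, C, A in binary_rules:
                    if B in left and C in right:
                        cell[A] = cell.get(A, 0.0) + \
                            phi_binary(i, k, j, B, C, A) * left[B] * right[C]

    # Induced classifier: normalize the root inside terms over the root labels.
    root = alpha[(0, length)]
    z = sum(root.get(A, 0.0) for A in root_labels)
    return {A: root.get(A, 0.0) / z for A in root_labels}
```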
| | | | When the model is well-trained, it is able to not only predict the sentiment label but also generate the semantic tree as the explanation: $$\begin{array}{l}{{y^{\star}=\operatorname*{arg\,max}_{y\in{\mathcal{Y}}}p(y|x)}}\\ {{t^{\star}=\operatorname*{arg\,max}_{t\in{\mathcal{T}}_{x}(y^{\star})}p(t|x)}}\end{array}\tag{18}$$ The second argmax is to decode the best semantic tree with the maximal conditional probability, which is solved by the CKY algorithm (Kasami, 1965; Daniel, 1967). ## 3 Experiments In this section, we conduct experiments to illustrate that the proposed SCM module is able to improve the accuracy performance. ## 3.1 Datasets We adopt MR (Pang and Lee, 2005) and SST2 (Socher et al., 2013) in this experiment. MR contains 10662 movie reviews, half of which are positive/negative. Since it has no train/dev/test splits, we follow the convenience to conduct 10-fold cross validation. SST2 is built from SST by binarizing the 5-class sentiment label. Common settings of SST2 include SST2-S which only uses the sentence for training, and SST2-P which uses all labeled non-neutral phrases for training, of which the training size is 6920 and 98794, respectively. In both settings, there are 872/1821 sentences for validation/testing. ## 3.2 Implementation We utilize BiLSTM (Hochreiter and Schmidhuber, 1997) and BERT (Devlin et al., 2019) (base version) as backbone encoders for modeling the constituent representations. For both models, we use the first layer representations to compute the terminal-unary scores. We use momentum-based gradient descent (Qian, 1999) (we set the momentum to be 0.9), along with cosine annealing learning rate schedule (Loshchilov and Hutter, 2017) to optimize our models. For detailed hyper-parameter settings, please check the configuration files in our publicly available repository. ## 3.3 Baselines Compared models include sequential models and three types of tree models: sentiment tree models, untagged tree models and latent untagged tree models. Both tree models ultilize recursive neural networks (RvNNs) (Socher et al., 2011) for modeling phrases in the sentence following a tree structure. Sentiment tree models have the full sentiment tree supervision, and learned to predict labels of all nodes in the tree. By contrasts, tree structures for untagged tree models are obtained by an external parser, and only the root node label is available for training. Latent untagged tree models learn to generate the tree structure itself, which is implicit supervised by the task objectives. ## 3.4 Results We report the accuracy of different models in Table 2, which we can find that: 1) Compared to the original sequential model, we can see that adding the proposed SCM steadily improves the classification accuracy for both BiLSTM and BERT encoder all the datsets and settings, directly reflecting the effectiveness of our method. 2) Armed with the proposed SCM, the sequential BiLSTM achieves better or competitive performance with previous tree models on both datasets and settings. Specially, it outperforms each baselines on SST-2. This might suggest that the hierarchical RvNN is not necessarily the best way to model compositions, which a flat sequential model could do just as well. 
3) We S→T BiLSTM BiLSTM BERT BERT +SCM +SCM B→D 82.65 **82.75** 88.96 **89.95** B→E 76.50 **79.60** 86.15 **87.70** B→K **78.05** 77.75 **89.05** 87.65 D→B 80.80 82.35 **89.40** 88.05 D→E 77.05 **80.85** 86.55 **87.55** D→K 77.65 **79.85** 87.53 **88.30** E→B 73.85 **75.45** 86.50 **86.75** E→D 77.25 78.25 **87.95** 87.30 E→K **84.85** 83.90 91.60 **91.85** K→B 71.65 75.80 **87.55** 86.35 K→D 73.75 76.50 **87.30** 87.25 K→E **82.95** 82.90 90.45 **90.80** Average 78.08 **79.66** 88.25 **88.29** also admit that the performance improvement from our method is not that huge, which our BiLSTM model doesn't surpass all compared models on MR and SST2-P. However, since our motivation is interpretability, we believe that the performance is sufficient. ## 4 Discussion 4.1 Sentiment Domain Adaptation We conduct experiments in the cross-domain setting. We adopt Amazon in this experiment. Amazon is a widely-used domain adaption dataset collected by Blitzer et al. (2007). It contains review documents from the Amazon website in four domains: Books (B), DvDs (D), Electronics (E) and Kitchen & Housewares (K), where each domain contains 2000 labeled reviews. Following previous works, the model is trained on one domain and tested on the other three domains, yielding 12 crossdomain sentiment classification subtasks. For each subtask, we randomly sample 1600 examples in the source domain for training, and left the other 400 examples for validation. We report the accuracy of different subtask in Table 3. As seen, compared to original sequential models, adding the proposed SCM improves the adaptation accuracy in most cases and on average as well, especially for BiLSTM which is trained from scratch. The improvement originates from the injected domain-invariant human knowledge in the proposed SCM, which helps the model to be less sensitive to the domain. The performance improvement of pretrained model BERT is not that significant because the pretraning process has already given the generalization ability to it. ## 4.2 Ablation Study We conduct ablation study on SST2-S to study effects of different components including the grammar and two plausibility objectives. We report the accuracy and the unlabeled tree F1 of the generated semantic tree w.r.t. PTB trees generated by Berkeley parser for each model in Table 4. We find that the grammar doesn't work out alone when two plausibility objectives are absent, where the accuracy drops compared to the original encoder. We speculate this is due to lack of direct information of function labels, making it easier to mis-recognition on those labels. Such error would accumulated from bottom to up in the tree and pollute other sentences including the same constituent, causing the performance drop. The preterminal plausibility objective Lpos alleviates this issue effectively with an obvious performance improvement for both encoders. For the structure plausibility objective Lstr, though it makes the tree structure more syntactically meaningful with higher unlabeled tree F1, it doesn't necessarily guarantee the performance improvement. This suggests that the optimal tree structure might not exactly resemble PTB tree structure. On the contrary, the tree structure learned without Lstr, which has little similarity with PTB tree structure, is also suboptimal with mediocre accuracy. To study the optimal tree structure, we alter the balancing factor ωstr and obtain models with different unlabeled tree F1 w.r.t. PTB trees and accuracy. 
Then, we visualize relation between these two metrices in Figure 3. We can see that accuracy roughly shows a trend of first increasing and then decreasing when the tree gets more syntactical meaning- Table 5: Accuracy performances of different grammars ![7_image_0.png](7_image_0.png) on SST2-S. ful for both encoders (i.e., has higher unlabeled tree F1). This is contrary to that of Williams et al. (2018) which finds that the optimal tree structure of untagged tree methods RL-SPINN (Yogatama et al., 2017) and Gumbel-Tree (Choi et al., 2018) do not resemble PTB tree structure. This might because our method has a specific grammar with syntactical information restraining the tree structure, while untagged tree methods accommodate for any structure. | Method | Grammar | Acc | |------------|-----------|-------| | BiLSTM+SCM | glue | 87.42 | | SCG | 88.03 | | | BERT+SCM | glue | 92.20 | | SCG | 92.31 | | ## 4.3 Effects Of Scg To show the effectiveness of the proposed SCG, we compare it with the glue grammar (Taboada et al., 2011) whose binary rules are very free and in the form BC → A (A, B, C ∈ {*P, N, O*}). Such rules act like the glue to connect adjacent constituents with any polarities. The results are shown in Table 5, which our proposed SCG is more effective with better accuracy compared to the glue grammar. We think this is because glue grammar rules are too free to carry specific sentiment composition knowledge, which is is helpless for the task. ## 4.4 Qualitative Study | Method | Acc | Tree F1 | |------------------|-------|-----------| | CNF | - | 77.19 | | BiLSTM | 87.52 | - | | +G | 87.10 | 21.38 | | +G + Lpos | 87.59 | 21.04 | | +G + Lstr | 86.77 | 55.04 | | +G + Lpos + Lstr | 88.03 | 46.85 | | BERT | 92.25 | - | | +G | 91.32 | 09.56 | | +G + Lpos | 91.82 | 12.28 | | +G + Lstr | 91.93 | 51.05 | | +G + Lpos + Lstr | 92.31 | 50.94 | We qualitatively show a few examples to show our method can handle compound sentiment composi- ![8_image_0.png](8_image_0.png) tions in Figure 4. The first case is a sentence with two negative constituents joining by a coordinating conjunction, each of which has an irrealis blocking within. The second case is a sentence with negation under conflict resolution. For both cases, the prediction is not simple since the model is susceptible to the surface and literal meaning in the sentence, which might interfere the correct decision. Taking the sentiment composition explicitly, we can see that our method successfully judge the semantic role of different constituents, and finally compose plausible tree explanations. ## 5 Related Works Sentiment composition is one of the key to sentiment analysis, which considers the semantic of a constituent from both recognition and composition views (Polanyi and Zaenen, 2006; Moilanen and Pulman, 2007). That is, it decomposes the classification of a sentence into a hierarchical tree structure explicitly showing how the polarity of the sentence come from the composition of its subconstituents. Early works are mainly based on manual rules and semantic lexicon that is constructed either manually (Wilson et al., 2005; Kennedy and Inkpen, 2006) or automatically (Dong et al., 2015; Toledo-Ronen et al., 2018). Nowadays, represented via different forms of tree, sentiment composition is often learned explicitly or implicitly in the endto-end learning manner of neural network models. Common tree forms include untagged tree and sentiment tree, while the learning paradigm is also varied in literature. 
To be concrete, untagged tree can either be directly obtained from the external syntactic parser (Socher et al., 2012; Tai et al., 2015; Liu et al., 2017a,b; Kim et al., 2019), or serve as a latent variable learned implicitly (Yogatama et al., 2017; Maillard and Clark, 2018; Choi et al., 2018; Havrylov et al., 2019; Chowdhury and Caragea, 2021). Compared to the untagged one, sentiment tree offers more information about sentiment polarity of each constituent in the tree. As the most representative resource in this form, SST (Socher et al., 2013) formalizes sentiment composition as a parsing task, motivating lots of works to learn the tree supervisedly (Teng and Zhang, 2017; Zhang and Zhang, 2019; Zhang et al., 2019). Sentiment tree is also a popular explanation form for post-hoc interprebility since it can provide hierahical attribution scores (Chen et al., 2020; Zhang et al., 2020). While both existing forms are useful, they are suboptimal due to their in-ability to explicitly interpret sentiment composition, which our proposed semantic tree fills this gap. ## 6 Conclusions In this paper, we present semantic tree to explicitly interpret sentiment compositions in sentiment classification. we carefully design a grammar under each compositions from the linguistic inspiration, and learn to extract semantic tree explanations without full annotations. Quantitative and qualitative results demonstrate that our method is effective and can generate plausible tree explanations. ## 7 Limitations & Ethics Statement Our method is first limited by the proposed grammar that doesn't cover all the realistic cases. As shown in Table 1, there are still a few cases in the randomly sampled 100 examples that none of the defined rules can explain. Secondly, the time complexity of our method is the cube of the sentence length, limiting its direct applications on long documents. So we have to classify the document based on classification of individual sentences, which might be problematic since the sentiment of different sentences in the document may affect each other. All the experiments in this paper are conducted on public available datasets, which has no data privacy concerns. Meanwhile, this paper doesn't involve human annotations, so there are no related ethical concerns. ## Acknowledgements This work was supported by the National Key R&D Program of China (2022ZD0160503) and the National Natural Science Foundation of China (No.61976211, No.62276264), and the Strategic Priority Research Program of Chinese Academy of Sciences (No.XDA27020100). This research was also supported by Meituan. ## References Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA). James K Baker. 1979. Trainable grammars for speech recognition. The Journal of the Acoustical Society of America, 65(S1):S132–S132. Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963–2977, Florence, Italy. Association for Computational Linguistics. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. 
In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440–447, Prague, Czech Republic. Association for Computational Linguistics. Hanjie Chen, Guangtao Zheng, and Yangfeng Ji. 2020. Generating hierarchical explanations on text classification via feature interaction detection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5578–5593, Online. Association for Computational Linguistics. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5094–5101. AAAI Press. Noam Chomsky. 1956. Three models for the description of language. IRE Transactions on information theory, 2(3):113–124. Noam Chomsky. 1963. Formal properties of grammars. Handbook of Math. Psychology, 2:328–418. Jishnu Ray Chowdhury and Cornelia Caragea. 2021. Modeling hierarchical structures with continuous recursive neural networks. In International Conference on Machine Learning, pages 1975–1988. PMLR. Nhan Cach Dang, María N Moreno-García, and Fernando De la Prieta. 2020. Sentiment analysis based on deep learning: A comparative study. Electronics, 9(3):483. H Younger Daniel. 1967. Recognition and parsing of context-free languages in time n3. Information and control, 10(2):189–208. Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? interpretation with differentiable masking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3243–3255, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Li Dong, Furu Wei, Shujie Liu, Ming Zhou, and Ke Xu. 2015. A statistical parsing framework for sentiment classification. Computational Linguistics, 41(2):265–308. Greg Durrett and Dan Klein. 2015. Neural CRF parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 302–312, Beijing, China. Association for Computational Linguistics. Jason Eisner. 2016. Inside-outside and forwardbackward algorithms are just backprop (tutorial paper). In Proceedings of the Workshop on Structured Prediction for NLP, pages 1–17, Austin, TX. Association for Computational Linguistics. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL-08: HLT, pages 959–967, Columbus, Ohio. Association for Computational Linguistics. Serhii Havrylov, Germán Kruszewski, and Armand Joulin. 2019. Cooperative learning of disjoint syntax and semantics. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1118–1128, Minneapolis, Minnesota. Association for Computational Linguistics. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735– 1780. Tadao Kasami. 1965. An efficient recognition and syntax algorithm for context-free languages. Technical report, Air Force Cambridge Research Lab. Alistair Kennedy and Diana Inkpen. 2006. Sentiment classification of movie reviews using contextual valence shifters. Computational intelligence, 22(2):110–125. Siwon Kim, Jihun Yi, Eunji Kim, and Sungroh Yoon. 2020. Interpretation of NLP models through input marginalization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3154–3167, Online. Association for Computational Linguistics. Taeuk Kim, Jihun Choi, Daniel Edmiston, Sanghwan Bae, and Sang-goo Lee. 2019. Dynamic compositionality in recursive neural networks with structureaware tag representations. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6594–6601. AAAI Press. Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499–3505, Florence, Italy. Association for Computational Linguistics. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117, Austin, Texas. Association for Computational Linguistics. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. ArXiv preprint, abs/1612.08220. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017a. Adaptive semantic compositionality for sentence modelling. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4061–4067. ijcai.org. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017b. Dynamic compositional neural networks over tree structure. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4054–4060. ijcai.org. Ilya Loshchilov and Frank Hutter. 2017. SGDR: stochastic gradient descent with warm restarts. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Tim Loughran and Bill McDonald. 2011. When is a liability not a liability? textual analysis, dictionaries, and 10-ks. The Journal of finance, 66(1):35–65. Jean Maillard and Stephen Clark. 2018. Latent tree learning with differentiable parsers: Shift-reduce parsing and chart parsing. 
In Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP, pages 13–18, Melbourne, Australia. Association for Computational Linguistics. Karo Moilanen and Stephen Pulman. 2007. Sentiment composition. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115– 124, Ann Arbor, Michigan. Association for Computational Linguistics. Livia Polanyi and Annie Zaenen. 2006. Contextual valence shifters. In Computing attitude and affect in text: Theory and applications, pages 1–10. Springer. Ning Qian. 1999. On the momentum term in gradient descent learning algorithms. Neural networks, 12(1):145–151. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1–67. Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135–1144. ACM. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211, Jeju Island, Korea. Association for Computational Linguistics. Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 129–136. Omnipress. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Julia Strout, Ye Zhang, and Raymond Mooney. 2019. Do human rationales improve machine explanations? In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 56–62, Florence, Italy. Association for Computational Linguistics. Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2):267–307. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556–1566, Beijing, China. Association for Computational Linguistics. Ann Taylor, Mitchell Marcus, and Beatrice Santorini. 2003. The penn treebank: an overview. Treebanks, pages 5–22. Zhiyang Teng and Yue Zhang. 2017. Head-lexicalized bidirectional tree LSTMs. 
Transactions of the Association for Computational Linguistics, 5:163– 177. Orith Toledo-Ronen, Roy Bar-Haim, Alon Halfon, Charles Jochim, Amir Menczel, Ranit Aharonov, and Noam Slonim. 2018. Learning sentiment composition from sentiment lexicons. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2230–2241, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018. Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association for Computational Linguistics, 6:253–267. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 347–354, Vancouver, British Columbia, Canada. Association for Computational Linguistics. Ashima Yadav and Dinesh Kumar Vishwakarma. 2020. Sentiment analysis using deep learning architectures: a review. Artificial Intelligence Review, 53(6):4335– 4385. Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to compose words into sentences with reinforcement learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Die Zhang, Huilin Zhou, Xiaoyi Bao, Da Huo, Ruizhao Chen, Xu Cheng, Hao Zhang, Mengyue Wu, and Quanshi Zhang. 2020. Interpreting hierarchical linguistic interactions in dnns. Liwen Zhang, Kewei Tu, and Yue Zhang. 2019. Latent variable sentiment grammar. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4642–4651, Florence, Italy. Association for Computational Linguistics. Yuan Zhang and Yue Zhang. 2019. Tree communication models for sentiment analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3518–3527, Florence, Italy. Association for Computational Linguistics. Ruiqi Zhong, Steven Shao, and Kathleen McKeown. 2019. Fine-grained sentiment analysis with faithful attention. ArXiv preprint, abs/1908.06870. Xiao-Dan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 1604– 1612. JMLR.org. ## A Binary Rules Table 6 shows all the binary rules contained in the proposed SCG. ## B Functional Lexicon Table 7 lists functional lexicon in the manually constructed lexicon. 
| Composition | Rules | | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|-------------------------| | O P → P P O → P | | | | Polarity propagation | O N → N N O → N | O O → O P P → P N N → N | | Negation | D P → N | P D → N | | D N → P | N D → P | | | P + → P + N + → N + + P → P + + N → N + + + → P + P + + → N + N + P + → P + + N + → N + P − → P − N + → N − − P → P − − N → N − P − − → P − N − + → N − − P − → P − − N − → N − P + O → P + N + O → N + P − O → P − N − O → N − O P + → P + O N + → N + O P − → P − O P− → P − | N P + → P N − P + → P N − P → P P + N → P P + N − → P P N − → P N P − → N + P N − → N N + P → N P − N → N P − N + → N P N + → N | | | Conflict resolution Irrealis blocking | I P → O | P I → O | | I N → O | N I → O | | Table 6: Binary rules in the proposed SCG. | Label | Words | |--------------------|-----------------------------------| | Priority riser + | but, however, yet, whereas, still | | Priority reducer − | although, though, despite, regardless, nevertheless, nonetheless | | Irrealis blocker I | could, should, would, ought, supposed, if | | Negator D | no, not, n't, neither, nor, never, none, lack, without, cannot, aint, arent, barely, cant, couldnt, didnt, doesnt, dont, hardly, havent, few, isnt, merely, never, nothing, nobody, shouldnt, wasnt, werent, wont, wouldnt | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✗ A2. Did you discuss any potential risks of your work? We didn't see much risks of a sentiment classification work. ✓ A3. Do the abstract and introduction summarize the paper's main claims? In the Abstract section and section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 2 ✓ B1. Did you cite the creators of artifacts you used? 3 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Datasets we use are publicly available. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Datasets we use are publicly available for years, we don't see much concerns on this issue. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Datasets, along with their documentations are publicly available. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? 
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3 ## C ✓ **Did You Run Computational Experiments?** 3 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? There are too many parameters, which reporting them makes the paper cumbersome. Please check the config file in our public code repository for details. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✗ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Please check the config file in our public code repository for details. ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Experimental results are stable with different seeds, and the time complexity is relatively high. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 2.4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zhang-etal-2023-beyond
Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models
https://aclanthology.org/2023.findings-acl.472
Language models have been shown to exhibit positive scaling, where performance improves as models are scaled up in terms of size, compute, or data. In this work, we introduce NeQA, a dataset consisting of questions with negation in which language models do not exhibit straightforward positive scaling. We show that this task can exhibit inverse scaling, U-shaped scaling, or positive scaling, and the three scaling trends shift in this order as we use more powerful prompting methods or model families. We hypothesize that solving NeQA depends on two subtasks: question answering (task 1) and negation understanding (task 2). We find that task 1 has linear scaling, while task 2 has sigmoid-shaped scaling with an emergent transition point, and composing these two scaling trends yields the final scaling trend of NeQA. Our work reveals and provides a way to analyze the complex scaling trends of language models.
# Beyond Positive Scaling: How Negation Impacts Scaling Trends Of Language Models Yuhui Zhang∗ † Michihiro Yasunaga∗ Zhengping Zhou∗ **Jeff Z. HaoChen**∗ James Zou Percy Liang Serena Yeung Department of Computer Science Stanford University ## Abstract Language models have been shown to exhibit positive scaling, where performance improves as models are scaled up in terms of size, compute, or data. In this work, we introduce NeQA, a dataset consisting of questions with negation in which language models do not exhibit straightforward positive scaling. We show that this task can exhibit inverse scaling, U-shaped scaling, or positive scaling, and the three scaling trends shift in this order as we use more powerful prompting methods or model families. We hypothesize that solving NeQA depends on two subtasks: question answering (task 1) and negation understanding (task 2). We find that task 1 has linear scaling, while task 2 has sigmoid-shaped scaling with an emergent transition point, and composing these two scaling trends yields the final scaling trend of NeQA. Our work reveals and provides a way to analyze the complex scaling trends of language models. ## 1 Introduction Language models have been shown to exhibit *positive scaling*, where task performance improves as models are scaled up in terms of size, compute, or data, like the blue curve in Figure 1 (Kaplan et al., 2020; Brown et al., 2020; Rae et al., 2021; Chowdhery et al., 2022; Srivastava et al., 2022; Liang et al., 2022). However, there are exceptions. Recent works show that some tasks can exhibit inverse scaling (McKenzie et al., 2022), where the performance degrades as models are scaled up (green curve), or *U-shaped scaling* (Wei et al., 2022b), where the performance degrades first but then improves as models are scaled up (red curve). Analyzing tasks that exhibit different scaling trends, such as inverse and U-shaped scaling, is therefore useful for better understanding the behaviors of language models, identifying their limitations, and guiding future development. ∗Equal contributions. †Correspondence to: Yuhui Zhang <yuhuiz@stanford.edu>. ![0_image_0.png](0_image_0.png) In this work, we introduce NeQA, a new task of answering multiple-choice questions containing negation words, constructed by transforming questions from OBQA (Mihaylov et al., 2018) and NegatedLAMA (Kassner and Schütze, 2020). We conduct experiments on this task using 4 language model families and 3 prompting methods, and show that large language models do not follow straightforward positive scaling on this task. Specifically, as we use more powerful prompting methods or model families, NeQA exhibits a gradation from inverse scaling to U-shape to positive scaling. This result provides a unified view of when the three types of scaling trends (inverse, U-shaped, and positive scaling) occur for language models. Our result indicates that the development of large language models' capability to process negation may be a complex and nuanced problem. To further understand this nuanced scaling trend of the NeQA task, we decompose the task into two subtasks: question answering (task 1) and negation understanding (task 2). Our empirical results show that task 1 has linearly positive scaling, while task 2 has sigmoid-shaped scaling with an emergent transition point, where the transition point is influenced by the prompt method and model family. Combining these two scaling trends yields the final scaling trend observed in NeQA. 
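To make the composition intuition concrete, the following is a minimal sketch (not the authors' code) that simulates how a linearly improving question-answering subtask and a sigmoid-shaped negation-understanding subtask combine into inverse, U-shaped, or positive overall curves. The composition formula is the one stated in Appendix A; the abstract scale grid, the sigmoid parameterization, and the function name `composed_accuracy` are illustrative assumptions rather than fitted values.

```python
import numpy as np

def composed_accuracy(scale, transition, steepness=5.0):
    """Simulate NeQA accuracy by composing the two subtask scaling curves.

    scale: abstract model-scale values in [0, 1] (illustrative, not real model sizes).
    transition: where task 2 (negation understanding) emerges on that abstract scale.
    """
    # Task 1 (question answering): linear scaling from chance (0.5) to perfect (1.0).
    t1 = 0.5 + 0.5 * scale
    # Task 2 (negation understanding): sigmoid-shaped scaling from 0.5 to 1.0.
    t2 = 0.5 + 0.5 / (1.0 + np.exp(-steepness * (scale - transition)))
    # Score of negation understanding, as defined in Appendix A.
    s2 = (t2 - 0.5) / 0.5
    # Composed NeQA accuracy: correct if both subtasks succeed or both fail.
    return t1 * s2 + (1.0 - t1) * (1.0 - s2)

scales = np.linspace(0.0, 1.0, 11)
# A transition point beyond the available scales yields inverse scaling,
# one in the middle yields a U-shape, and an early one yields positive scaling.
for transition in (1.5, 0.5, -0.5):
    print(f"transition={transition:+.1f}: {np.round(composed_accuracy(scales, transition), 2)}")
```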
The task decomposition provides a new way to think of the scaling on a task in terms of a combination of its component skills. In summary, our contributions are (1) the NeQA dataset that contains diverse distributions of texts about negation; (2) an evaluation of different large language models on the NeQA dataset, which exhibits different scaling trends; (3) a task decomposition analysis explaining the above scaling trends.

![1_image_0.png](1_image_0.png)

## 2 Dataset: NeQA

We develop NeQA, a question answering dataset designed to evaluate the ability of models to process negation in natural language. Each example of the dataset consists of a negated question and two answer choices, one correct and one incorrect. An example of NeQA looks like: (question "Child does not want?", correct choice "marriage", incorrect choice "love"). To construct this, we leveraged NegatedLAMA (Kassner and Schütze, 2020) and OBQA (Mihaylov et al., 2018). The NegatedLAMA dataset includes negated questions from four subsets: ConceptNet, GoogleRE, SQuAD, and TREx. Each subset comprises multiple files that represent different question distributions, such as questions about different entity relations. Each question is associated with a negated question, an answer, and a misprimed question (i.e., a wrong answer followed by the question). For instance, when "Child wants?" is the original question, "Child does not want?" can be its associated negated question, "love" can be the answer, and "Marriage? Child wants?" can be its misprimed question. We turn it into a multiple choice question by setting the negated question as the question, and setting the wrong answer in the misprimed question in conjunction with the correct answer as the two choices. For instance, in the above example, we get "Q: Child does not want? A. love B. marriage" (Appendix Table 3). To ensure diversity and representativeness, we randomly selected at most 50 questions from each file. To be able to analyze the impact of different negation types, we also created additional data by applying diverse rules to transform questions in OBQA (Mihaylov et al., 2018) into negated ones. We defined six types of negation transformations: action verb negation (e.g., "cause" → "does not/doesn't cause"), linking verb negation (e.g., "is" → "is not/isn't"), modal verb negation (e.g., "can" → "cannot/can't"), conjunction negation (e.g., "because" → "not because"), negation prefix (e.g., "able" → "unable"), and negation prompt (e.g., add "choose the wrong answer"). For each type, we collected 50 questions by applying a rule-based transformation, sampling an incorrect answer as the correct answer, and treating the correct answer as the incorrect answer. For example, "Pushing on a pedal is an example of" is an original question in OBQA with the correct answer "force" and one of the incorrect answers "speed". We apply the rule-based transformation to change the verb "is" to "isn't" and get "Q: Pushing on a pedal isn't an example of? A. speed. B. force", where "A" is the answer (Appendix Table 3).

![2_image_0.png](2_image_0.png)

We employ post-processing techniques such as redistributing labels evenly between "A" and "B" and balancing the use of negation words such as "not" and "n't". The validity of each question is ensured through manual examination and editing.
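As a rough illustration of the rule-based construction just described, the sketch below negates an OBQA question with a linking-verb rule, swaps the correct and incorrect answers, and rebalances the label. The rule list and the helper name `negate_obqa_question` are our own simplification for exposition, not the released NeQA construction code.

```python
import random

# Illustrative subset of one of the six transformation types described above;
# the actual NeQA construction pipeline may cover more patterns and edge cases.
LINKING_VERB_RULES = [(" is ", " isn't "), (" are ", " aren't "), (" was ", " wasn't ")]

def negate_obqa_question(question, correct_answer, incorrect_answers, rng=random):
    """Turn an original OBQA question into a NeQA-style negated example.

    The negated question flips the label: a sampled originally-incorrect answer
    becomes the correct choice, and the original correct answer becomes incorrect.
    """
    for pattern, replacement in LINKING_VERB_RULES:
        if pattern in question:
            negated = question.replace(pattern, replacement, 1)
            new_correct = rng.choice(incorrect_answers)
            choices = [new_correct, correct_answer]
            rng.shuffle(choices)  # redistribute labels evenly between "A" and "B"
            label = "A" if choices[0] == new_correct else "B"
            return {"question": negated, "choices": choices, "answer": label}
    return None  # no rule matched; other negation types would be handled similarly

example = negate_obqa_question(
    "Pushing on a pedal is an example of",
    correct_answer="force",
    incorrect_answers=["speed", "heat", "light"],
)
print(example)
```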
Our dataset comprises a total of 1718 questions sourced from ConceptNet (150 questions), GoogleRE (374 questions), SQuAD (100 questions), TREx (594 questions), and OBQA (500 questions), providing a diverse range of negation types, text distributions, and prompts. We believe that this dataset serves as a valuable benchmark for assessing the ability of language models to process negation. Data distributions are shown in Figure 3. Out of the 1718 questions, we define a set of 944 questions from ConceptNet, TREx, and a subset of OBQA that exhibit clear positive scaling on the corresponding original (non-negated) questions. For our experiments (§3), we randomly select 100 questions from this positive set in order to make the scaling more obvious during our analysis. ## 3 Results 3.1 Evaluation Setup: Models And Prompts We evaluated four different language model families on NeQA: GPT-3 (Brown et al., 2020), GPT-3 Text Series (Ouyang et al., 2022), Cohere (co:here), and Jurassic (AI21Labs) (model details in §C.4). We employed three different prompting methods: zero-shot, zero-shot with hint (Kojima et al., 2022), and few-shot with chain-of-thought (CoT) (Wei et al., 2022c), as illustrated in Figure 3. For zero-shot and zero-shot with hint evaluation, we follow the evaluation protocol of the MMLU paper (Hendrycks et al., 2021). We generate a prompt composed of a question and multiple choice options, where the options are labeled "A" and "B". For example, a prompt may be "Question: Child does not want? A. love B. marriage Answer:". We then generate one token from the language model and rank the probability of the model selecting option "A" or "B". For few-shot with CoT, we follow the evaluation protocol of CoT paper (Wei et al., 2022c) by generating sentences until reaching the end and parsing the answer using regular expressions. As our metric, we report the accuracy of the model predictions, where the chance accuracy is 50% as NeQA is a balanced two-choice dataset. ## 3.2 Scaling Trends Our evaluation reveals that the scaling trends of language models on the NeQA task vary depending on the prompting method and model family used (Figure 2). We found that the scaling trends of all language model families can be altered by different prompts. For example, zero-shot prompting resulted in inverse scaling in 3 out of 4 model families, whereas few-shot CoT prompting consistently resulted in positive scaling. As the prompt becomes stronger (i.e., more information, like rationales and demonstrations, is provided for the language model), we observed a transition from inverse scaling, to U-shaped scaling, to positive scaling. For instance, GPT-3 exhibited inverse scaling, U-shaped scaling, and positive scaling, respectively, with these prompting methods. Additionally, we discovered that switching to a stronger model family can alter the scaling shape. For example, transitioning from GPT-3 to GPT-3 Text Series, which was further trained to align with human values on multiple tasks compared to GPT-3, resulted ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) in a shift from inverse scaling to U-shaped scaling when the same prompting (e.g., zero-shot) is used. In conclusion, stronger prompts or model families lead to a transition from inverse scaling, to U-shaped scaling, to positive scaling on the NeQA task. 
We may also make the following interpretation: the overarching scaling trend of language models for NeQA is U-shaped, and if the model is weak (i.e., weaker prompt or model family), the left part of "U"/inverse slope is observed; if the model is strong, the right part of "U"/positive slope is observed. ## 3.3 Task Decomposition Analysis We conducted further empirical analysis on the reasons why the scaling trends can be inverse, Ushaped, or positive and can transition with different prompts or model families. We decomposed the NeQA task into two subtasks: task 1 is to answer the original non-negated questions, and task 2 is to "understand negation". In Figure 4, we show the scaling of task 1 and task 2 performance with GPT-3 and GPT-3 Text Series families. The task 1 performance is measured by the accuracy of answering original non-negated questions, and the task 2 performance is measured by the accuracy of differentiating original questions from negated questions. The task examples are shown in Figure 4 right. Both tasks are evaluated in a zero-shot way. Our experiments showed that task 1 scales mostly linearly in a positive direction, whereas task 2 scales like a sigmoid shape with an emergent transition point, analogous to the Grokking curve (Power et al., 2022). Before this transition point, models do not "understand negation", and achieve low accuracy in differentiating original questions from negated questions, which results in outputting the same answer to both the original and negated questions. It is worth noting that the labels for the composed task NeQA are essentially the inverse of the non-negated QA labels for task 1. Therefore, the positive scaling in task 1 results in inverse scaling for the composed task NeQA, because the predictions remain unchanged while the ground-truth labels are inverted. After the transition point, models start to "understand negation" and predict opposite answers to the original questions, resulting in positive scaling. When the transition point never happens within the sizes available in the model family, the overall scaling looks inverse; when the transition point happens before the smallest model, the overall scaling looks positive. When the transition point is in the middle, the overall scaling looks U-shaped. We provide further explanations of the composed performance curve in §A and §B. Interestingly, we found that the transition point can be moved earlier with stronger prompting methods or model families. For example, both GPT- 3 and GPT-3 Text Series show that the transition point happens much earlier when using the stronger prompt compared to the weaker prompt (see Figure 4). Furthermore, GPT-3 Text Series has an earlier transition point than the GPT-3 models. This can explain why using stronger prompts or stronger model families results in a transition from inverse scaling, to U-shaped, to positive scaling. By decomposing a task and studying the scaling trends of the individual subtasks, our analysis offers a new way to understand the complexity of language model scaling trends. This analysis could be applied to various tasks beyond NeQA, especially tasks that consist of multiple subtasks, each of which may be of different levels of difficulty. This analysis can provide a deeper understanding of the strengths and weaknesses of different language models and offer useful insights into the development of better models and training/prompting methods. ## 4 Related Works Scaling trends. 
Recent years have seen significant scaling of language models, such as scaling from GPT-1 to GPT-3, which has led to tremendous improvements in their performance and capabilities in natural language processing (Radford et al., 2018, 2019; Brown et al., 2020). Researchers have begun to investigate the scaling trends of language models to capture the relationship between model performance and model scale, including the parameter count and amount of training data/compute used (Kaplan et al., 2020). While most scaling papers show positive scaling trends where larger models perform better on various tasks (Brown et al., 2020; Rae et al., 2021; Chowdhery et al., 2022; Srivastava et al., 2022; Liang et al., 2022), it is important to also investigate tasks that exhibit other trends such as inverse scaling, which can shed light to limitations in current language model development and guide future improvements. For instance, TruthfulQA (Lin et al., 2022) was one of the earliest tasks that exhibit inverse scaling, where they find larger language models are prone to hallucination and generate more untrue answers. A recent competition, the Inverse Scaling prize (McKenzie et al., 2022), called for tasks that cause inverse scaling. In the first round, four tasks, including NeQA, redefine math, quote repetition, and hindsight neglect, showed inverse scaling. Wei et al. (2022b) then found that some of these tasks show U-shaped scaling after further scaling up language models. In this work, we unify the above findings and provide a holistic picture of scaling trends, including the transition from inverse to U-shaped to positive scaling across model families and prompting methods, and empirical explanations behind these scaling trends. Negation understanding. Negation is a fundamental aspect of natural language understanding (Ackrill et al., 1975; Blanco and Moldovan, 2011). Existing works have found that NLP models can struggle in processing negation in text (JiménezZafra et al., 2020). For example, these works investigate models' abilities to process negation through natural language inference tasks (Cooper et al., 1996; Dagan et al., 2006; Hossain et al., 2020; Geiger et al., 2020), machine translation (Fancellu and Webber, 2015; Hossain et al., 2022), language model prompting (Kassner and Schütze, 2020; Ettinger, 2020; Jang et al., 2022), contrastive reading comprehension (Ravichander et al., 2022), and probing model activations (Burns et al., 2022). In response, existing works have also studied methods to improve the abilities of NLP models to process negation, such as leveraging datasets about negation (Kim et al., 2019; Jiang et al., 2021), auxiliary training objectives/tasks (Khandelwal and Sawant, 2020; Moore and Barnes, 2021; Hosseini et al., 2021; Truong et al., 2022), and neuro-symbolic reasoning modules (Yasunaga et al., 2021, 2022). While these existing works typically study a fixed size or type of models, our work provides the first studies into the effect of negation on the *scaling* trends of language models. We find that negation can exhibit nuanced scaling trends, e.g., U-shaped scaling with increased model size and improved model families and prompting methods. This finding offers a more comprehensive insight into how to improve the abilities of language models to understand negation, e.g., the model size, training algorithm, and prompting method all matter. 
## 5 Conclusion We introduced NeQA, a new question answering dataset that yields different scaling trends of language models than traditional positive scaling. We then proposed task decomposition analysis, a general idea to decompose the task to better understand the complex scaling trends and their transitions. We hope that these insights can facilitate the understanding and development of language models. ## Limitations This work introduced NeQA, a question answering dataset for evaluating the ability of large language models to process negation. While our NeQA attempted to cover diverse types of negation (e.g., different negation phrases and positions) and multiple data sources (e.g., OBQA, LAMA), it is possible that the dataset construction misses some types of negation or domains of text. Our future work will extend the dataset to cover more comprehensive types of negation and domains of text, beyond OBQA and LAMA. Additionally, NeQA is an English dataset, and it would be interesting to extend it to non-English languages and conduct a more comprehensive evaluation of language models, including multilingual ones. Another potential limitation is sensitivity in language model prompting. Language model performance is known to be influenced by the specific prompt used to query the model (e.g., a rephrased prompt may lead to different model outputs), and prompt engineering—finding the "right" promptmay be needed to obtain reasonable outputs from the language models (Jiang et al., 2020; Ruis et al., 2022; Wang et al., 2022). As our language model evaluation protocol uses prompting (§3), the evaluation results may inherit such prompt sensitivity. It would be an interesting future work to incorporate techniques to mitigate prompt sensitivity in language model evaluation (e.g., Burns et al. 2022). ## Ethics Statement Our work offers benchmarks and insights to help develop language models that understand negation. Developing language models that understand negation is crucial to the society in many ways. First, as language models are being used in various real-world applications, including fields like finance, healthcare, and law, it is important to ensure that they understand negation and make correct predictions. If they do not understand negation, they may output the opposite of what we actually want and may make harmful decisions for humans. Negation is also a fundamental aspect of natural language understanding, and a language model that does not understand negation correctly may not be able to truly process natural language. This can undermine trust and confidence in the outputs of the model, ultimately undermining its utility. Understanding negation correctly is therefore crucial for the development of reliable language models. We hope that our benchmark and evaluation results provide insights into the behavior of current language models and inspire the future development of language models that understand negation. ## Reproducibility Statement We provide our datasets and implementations at https://github.com/yuhui-zh15/NeQA. The implementations will enable researchers to reproduce datasets and results described here, as well as apply our negation transformations to other datasets and run their own analyses. ## Acknowledgments We greatly thank members of Stanford NLP, PLambda, and MARVL groups for providing valuable feedback. M.Y. is supported by Microsoft Research PhD Fellowship. J.Z. is supported by a Sloan Fellowship and a Chan-Zuckerberg Investigator Award. P.L. 
is supported by a PECASE Award. S.Y. is supported by a Chan-Zuckerberg Investigator Award. ## References John L Ackrill et al. 1975. *Categories and De interpretatione*. Clarendon Press. AI21Labs. Jurassic language model. Eduardo Blanco and Dan Moldovan. 2011. Semantic representation of negation using focus detection. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 581–589, Portland, Oregon, USA. Association for Computational Linguistics. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. *ArXiv preprint*, abs/2108.07258. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33:* Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. 2022. Discovering latent knowledge in language models without supervision. *ArXiv preprint*, abs/2212.03827. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *ArXiv preprint*, abs/2204.02311. co:here. Cohere language model. Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Johan Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al. 1996. Using the framework. Technical report, Technical Report LRE 62-051 D-16, The FraCaS Consortium. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In *Machine learning challenges workshop*, pages 177–190. Springer. Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. *Transactions of the Association for* Computational Linguistics, 8:34–48. Federico Fancellu and Bonnie Webber. 2015. Translating negation: A manual error analysis. In Proceedings of the Second Workshop on Extra-Propositional Aspects of Meaning in Computational Semantics (ExProM 2015), pages 2–11, Denver, Colorado. Association for Computational Linguistics. Atticus Geiger, Kyle Richardson, and Christopher Potts. 2020. Neural natural language inference models partially embed theories of lexical entailment and negation. In *Proceedings of the Third BlackboxNLP* Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 163–173, Online. Association for Computational Linguistics. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In *9th International Conference on* Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Md Mosharaf Hossain, Dhivya Chinnappa, and Eduardo Blanco. 2022. 
An analysis of negation in natural language understanding corpora. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 716–723, Dublin, Ireland. Association for Computational Linguistics. Md Mosharaf Hossain, Venelin Kovatchev, Pranoy Dutta, Tiffany Kao, Elizabeth Wei, and Eduardo Blanco. 2020. An analysis of natural language inference benchmarks through the lens of negation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9106–9118, Online. Association for Computational Linguistics. Arian Hosseini, Siva Reddy, Dzmitry Bahdanau, R Devon Hjelm, Alessandro Sordoni, and Aaron Courville. 2021. Understanding by understanding not: Modeling negation in language models. In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1301–1312, Online. Association for Computational Linguistics. Joel Jang, Seonghyeon Ye, and Minjoon Seo. 2022. Can large language models truly understand prompts? a case study with negated prompts. *ArXiv preprint*, abs/2209.12711. Liwei Jiang, Antoine Bosselut, Chandra Bhagavatula, and Yejin Choi. 2021. "I'm not mad": Commonsense implications of negation and contradiction. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 4380–4397, Online. Association for Computational Linguistics. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423–438. Salud María Jiménez-Zafra, Roser Morante, María Teresa Martín-Valdivia, and L. Alfonso Ureña-López. 2020. Corpora annotated with negation: An overview. *Computational Linguistics*, 46(1):1–52. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. *ArXiv* preprint, abs/2001.08361. Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics. Aditya Khandelwal and Suraj Sawant. 2020. NegBERT: A transfer learning approach for negation detection and scope resolution. In *Proceedings of the* Twelfth Language Resources and Evaluation Conference, pages 5739–5748, Marseille, France. European Language Resources Association. Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 235–249, Minneapolis, Minnesota. Association for Computational Linguistics. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. *ArXiv* preprint, abs/2205.11916. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. 
*ArXiv preprint*, abs/2211.09110. Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring how models mimic human falsehoods. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 3214–3252, Dublin, Ireland. Association for Computational Linguistics. Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. 2021. Just train twice: Improving group robustness without training group information. In *Proceedings of the 38th International* Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 6781–6792. PMLR. Ian McKenzie, Alexander Lyzhov, Alicia Parrish, Ameya Prabhu, Aaron Mueller, Najoung Kim, Sam Bowman, and Ethan Perez. 2022. Announcing the inverse scaling prize. *Lesswrong*. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 2381–2391, Brussels, Belgium. Association for Computational Linguistics. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? ArXiv preprint, abs/2202.12837. Andrew Moore and Jeremy Barnes. 2021. Multi-task learning of negation and speculation for targeted sentiment classification. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2838–2869, Online. Association for Computational Linguistics. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. *ArXiv preprint*, abs/2203.02155. Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. 2022. Grokking: Generalization beyond overfitting on small algorithmic datasets. *ArXiv preprint*, abs/2201.02177. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. *OpenAI Blog*. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI Blog. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. ArXiv preprint, abs/2112.11446. Abhilasha Ravichander, Matt Gardner, and Ana Marasovic. 2022. ´ Condaqa: A contrastive reading comprehension dataset for reasoning about negation. *ArXiv* preprint, abs/2211.00295. Laura Ruis, Akbir Khan, Stella Biderman, Sara Hooker, Tim Rocktäschel, and Edward Grefenstette. 2022. Large language models are not zero-shot communicators. *ArXiv preprint*, abs/2210.14986. Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. 2020. Distributionally robust neural networks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. 
Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Nimit Sharad Sohoni, Jared Dunnmon, Geoffrey Angus, Albert Gu, and Christopher Ré. 2020. No subclass left behind: Fine-grained robustness in coarsegrained classification problems. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems* 2020, NeurIPS 2020, December 6-12, 2020, virtual. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *ArXiv preprint*, abs/2206.04615. Thinh Truong, Timothy Baldwin, Trevor Cohn, and Karin Verspoor. 2022. Improving negation detection with negation-focused pre-training. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4188–4193, Seattle, United States. Association for Computational Linguistics. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. *ArXiv preprint*, abs/2203.11171. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022a. Emergent abilities of large language models. Transactions on Machine Learning Research. Survey Certification. Jason Wei, Yi Tay, and Quoc V Le. 2022b. Inverse scaling can become u-shaped. *ArXiv preprint*, abs/2211.02011. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022c. Chain of thought prompting elicits reasoning in large language models. *ArXiv preprint*, abs/2201.11903. Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2021. An explanation of in-context learning as implicit bayesian inference. arXiv preprint arXiv:2111.02080. Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D. Manning, Percy Liang, and Jure Leskovec. 2022. Deep bidirectional language-knowledge graph pretraining. In Neural Information Processing Systems (NeurIPS). Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. QA-GNN: Reasoning with language models and knowledge graphs for question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 535–546, Online. Association for Computational Linguistics. ## A Task Decomposition Simulation: ![9_Image_0.Png](9_Image_0.Png) Composing Subtask Scaling Trends Yields U-Shape Scaling In this section, we present a simple simulation to demonstrate how the U-shape scaling trends of a composed task can be obtained through the scaling trends of each decomposed task. Let's assume that the accuracy of Task 1 (Question Answering) is represented by t1(x) and has a linear shape with an initial performance of 0.5 (random performance) and a final performance of 1.0 (perfect performance). 
Similarly, the accuracy of Task 2 (Negation Understanding) is represented by t2(x) and has a sigmoid-like shape with an initial performance of 0.5 (random performance) and a final performance of 1.0 (perfect performance), where x represents the scale (a combination of model size, data size, training computation, and prompting method). We define the score of negation understanding as s2(x) = (t2(x) − 0.5)/0.5, which represents the probability that the model will treat a negated sentence differently from the original sentence. For the composed task, NeQA, it will have an accuracy of t(x) = t1(x)s2(x) + (1 − t1(x))(1 − s2(x)) given scale x. Figure 5 shows the plots of these three curves, t1(x), t2(x), and t(x). The simulated performance curve of NeQA, t(x), indeed exhibits a U-shape. ## Discussion Of Task Decomposition Validity And Generalizability to Other Tasks. We first clarify that task decomposition analysis is not intended to derive scaling laws (i.e., predict the exact performance of language model scaling). Instead, our analysis aims to explain scaling trends (inverse, U-shape, positive). For example, translation performance may not be simple addition of generation performance and word translation performance but should be positively correlated. Furthermore, while this exact decomposition structure might not hold in more complex tasks, our proposed decomposition analysis is a pioneering attempt to explain scaling trends on a task other than vanilla language modeling. Investigating the applicability of decomposition to other tasks is an essential future direction, and we hope our work will inspire others to push these boundaries. Lastly, we believe that our work's focus on negation is already a well-scoped and significant research contribution, as negation is one of the most common linguistic phenomena. To study negation, we collected the NeQA dataset, which exhibits inverse/U/positive scaling. To explain this, we propose this decomposition intuition, which works well because answering negated questions requires first answering the original questions and then flipping the answers. ## B Fine-Tuning Simulation: Training Data Attributes And Training Computes Also Impact Scaling Trends In addition to the prompting methods and model families that we studied in §3, we are also interested in studying other factors that may contribute to scaling trends, specifically those related to the training process. However, most large language models are not publicly available and training/reproducing them from scratch would require excessive computational resources. In light of this, here we conduct experiments using synthetic data and small-size language models to simulate and analyze the language model learning process. We adapt the SST-2 dataset (Socher et al., 2013) for our simulation. For each sentence s in SST-2, with probability 1−x, we modify it to "s. This does suggest it is good/bad (depending on the label)", and with probability x, we change it to "s. This does not suggest it is good/bad". Then, we finetune different sizes of GPT-2 (Radford et al., 2019) on this synthetic corpus with the standard causal language modeling objective. We vary the numbers of epochs t and negation ratio x to understand their effect to scaling trends. To evaluate the fine-tuned language models, we use the language model to complete "s. This does suggest it is _" for the original sentiment classification task (similar to task 1 in the main paper), and use the language model to complete "s. 
This does not suggest it is _" for the negated sentiment classification task (similar to the composed task NeQA in the main paper). We report accuracy on ![10_image_0.png](10_image_0.png) the original sentiment classification and negated sentiment classification. Our simulation demonstrates that the scaling trends on negated sentiment classification are influenced by the negation ratio x and training epoch t (Figure 6). With the same number of training epochs t = 1, increasing the negation ratio x from 0.01%, to 0.1% and then to 1% causes the scaling to shift from inverse scaling, to U-shape, then to positive scaling. Additionally, increasing the number of training epochs from 1 to 3 causes the scaling trend to shift from inverse scaling to U-shape when the negation ratio is x = 0.01%, and from U-shape to positive when the same negation ratio is x = 0.1%. This simulation highlights that factors in the training process, such as dataset attributes (e.g., negation ratio) and training compute, also have significant impacts on the scaling trends. Together with the *inference* factors, such as prompting methods and model families discussed in the main paper, we provide a comprehensive understanding of the complexity of scaling trends and how different factors can influence them. The transition of the scaling trends can also be explained by task decomposition, where Task 1 (original sentiment classification) is always positively scaled, while Task 2 (negation understanding) is also positive but is shaped like a sigmoid, with the transition point controlled by the number of negation examples seen by the language model. The number of negations seen can be modified by using a larger negation ratio or more training epochs. The composition of these subtask scaling trends yields the final scaling curves. The reason why Task 1 has a more linear shape, while Task 2 has a more sigmoid-like shape, can be understood with the intuition of deep learning processes. Empirical risk optimization (ERM) optimizes for average performance, and since negated sentences are significantly underrepresented in comparison to non-negated sentences in the training data, they are ignored at the beginning of training (Sagawa et al., 2020; Sohoni et al., 2020; Liu et al., 2021). As a result, the performance for negated sentences lags behind the average. However, as the majority of the training examples are learned, ERM finally starts to optimize for the underrepresented groups, leading to improved performance for negated sentences. This intuition adds new insights into the emergence of language models (Bommasani et al., 2021; Wei et al., 2022a), and we leave more rigorous analyses to future works. ## C Experimental Details C.1 Results The performance of various models on different tasks that generate Figure 2 and Figure 4 can be found in Table 1 and Table 2. ## C.2 Data In Table 3, we provide examples showing the data generation process of the NeQA dataset that was introduced in §2. 
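To make the Table 3 transformation concrete, below is a minimal Python sketch of the OBQA-style generation branch. It is an illustration rather than the released generation script: the regular-expression rules shown cover only a few of the negation patterns that appear in the dataset, and the function names are ours.

```python
import random
import re

# A few illustrative negation rules; the actual NeQA rule set is larger and
# was manually verified for grammaticality (see Appendix D.3).
NEGATION_RULES = [
    (r"\bis an\b", "isn't an"),
    (r"\bis a\b", "is not a"),
    (r"\bcan\b", "cannot"),
    (r"\bwants\b", "does not want"),
]

def negate(question):
    """Apply the first matching rule-based negation to a question."""
    for pattern, replacement in NEGATION_RULES:
        if re.search(pattern, question):
            return re.sub(pattern, replacement, question, count=1)
    return question  # unmatched items would be inspected or dropped

def make_neqa_example(question, choices, answer_idx, rng=random):
    """OBQA branch of Table 3: negate the question, keep the original gold
    answer plus one sampled incorrect choice, and flip the label."""
    wrong = rng.choice([c for i, c in enumerate(choices) if i != answer_idx])
    new_choices = [choices[answer_idx], wrong]
    rng.shuffle(new_choices)
    return {"question": negate(question),
            "choices": new_choices,
            "answer": new_choices.index(wrong)}

# e.g. "Pushing on a pedal is an example of?" with gold answer "force"
# becomes "Pushing on a pedal isn't an example of?" with answer "speed",
# matching the OBQA row of Table 3.
```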
In Table 4 and 5, we present a list of 100 data samples from NeQA that were utilized throughout | Model | Prompting | Shape | ada | babbage | curie | davinci | | | |-------------------|-------------|---------|-------|-----------|---------|-----------|------------|------------| | Zero-shot | Inverse | 0.54 | 0.54 | 0.36 | 0.33 | | | | | GPT-3 | Few-shot | Inverse | 0.51 | 0.55 | 0.51 | 0.22 | | | | Zero-shot w/ Hint | U-Shape | 0.55 | 0.47 | 0.35 | 0.51 | | | | | Few-shot w/ CoT | Positive | 0.48 | 0.48 | 0.53 | 0.65 | | | | | Model | Prompting | Shape | ada | babbage | curie | davinci | davinci-v2 | davinci-v3 | | Zero-shot | U-Shape | 0.61 | 0.53 | 0.48 | 0.31 | 0.56 | 0.71 | | | GPT-3 Text Series | Few-shot | U-Shape | 0.52 | 0.44 | 0.45 | 0.09 | 0.79 | 0.80 | | Zero-shot w/ Hint | U-Shape | 0.54 | 0.51 | 0.48 | 0.49 | 0.88 | 0.86 | | | Few-shot w/ CoT | Positive | 0.34 | 0.45 | 0.47 | 0.89 | 0.98 | 0.98 | | | Model | Prompting | Shape | small | medium | large | xlarge | | | | Zero-shot | Inverse | 0.44 | 0.44 | 0.38 | 0.38 | | | | | Cohere | Few-shot | Inverse | 0.51 | 0.52 | 0.08 | 0.08 | | | | Zero-shot w/ Hint | U-Shape | 0.40 | 0.39 | 0.44 | 0.43 | | | | | Few-shot w/ CoT | Positive | 0.47 | 0.47 | 0.75 | 0.75 | | | | | Model | Prompting | Shape | large | grande | jumbo | | | | | Zero-shot | Inverse | 0.59 | 0.49 | 0.49 | | | | | | Jurassic | Few-shot | U-Shape | 0.52 | 0.39 | 0.45 | | | | | Zero-shot w/ Hint | Inverse | 0.58 | 0.48 | 0.44 | | | | | | Few-shot w/ CoT | Positive | 0.49 | 0.53 | 0.51 | | | | | | Task | Model | Prompting | Shape | ada | babbage | curie | davinci | d-v2 | d-v3 | |-------------------|-------------------|-------------|----------|-------|-----------|---------|-----------|--------|--------| | Task 1 | GPT-3 | Zero-shot | Positive | 0.44 | 0.47 | 0.61 | 0.76 | - | - | | GPT-3 Text Series | Zero-shot | Positive | 0.41 | 0.47 | 0.51 | 0.88 | 0.94 | 0.95 | | | GPT-3 | Zero-shot | Sigmoid | 0.49 | 0.50 | 0.22 | 0.53 | - | - | | | GPT-3 | Zero-shot w/ Hint | Sigmoid | 0.50 | 0.50 | 0.46 | 0.83 | - | - | | | Task 2 | GPT-3 Text Series | Zero-shot | Sigmoid | 0.63 | 0.49 | 0.50 | 0.51 | 0.95 | 0.99 | | GPT-3 Text Series | Zero-shot w/ Hint | Sigmoid | 0.51 | 0.49 | 0.50 | 0.94 | 1.00 | 0.99 | | the paper to examine scaling behaviours and task decomposition. ## C.3 Prompts The specific prompts utilized for various prompting methods and tasks are outlined in Table 6. ## C.4 Models In Table 7, we present a list of all the models used in this work, including 4 model families and 17 models. Model details are from Liang et al. (2022). ## D Additional Analyses D.1 Few-Shot Prompting Few-shot in-context learning has been demonstrated to be an effective method for adapting pretrained language models to specific tasks. We experimented with few-shot prompting (not few-shot chain-of-thought prompting) but didn't include the results in the main paper because the scaling shapes were often the same as zero-shot prompting across 3 out of 4 studied model families (inverse for GPT3 and Cohere, U-shape for GPT-3 Text-Series; only Jurassic changes from inverse to U-shape). We provide the few-shot prompting results in Table 1. Several recent works can explain why few-shot prompting doesn't alter the scaling curve shape. For example, Min et al. (2022) and Xie et al. (2021) show that in-context learning can be viewed as a Bayesian inference process, with the model learning more about input-output format than inputoutput mapping. 
When providing demonstrations of negated question-answer pairs, the model fails to learn the mapping between them and predicts the same answer as without demonstrations. ## D.2 Prompt Variations Due to the sensitivity of language model performance to prompts (Jiang et al., 2020; Ruis et al., 2022; Wang et al., 2022) (also discussed in limitations), we experimented with various prompts and found: 1. Minor changes like word substitution or paraphrasing result in similar scaling shapes; 2. Major prompt changes can alter curve shape, e.g., adding 'For example, "isn't", "is not", "not because ", "do not" are strong signs of negation' to zero-shot w/ hint prompting changes GPT-3 from inverse to a weak Ushape. This can be seen as increasing CoT strength by providing more hints/rationales; 3. Varying CoT information levels affects the shape. Intermediate-level information in CoT prompts shows a scaling shape between U-shape (zero-shot w/ hint; weakest CoT version) and strong positive (few-shot CoT; strongest CoT version). ## D.3 Dataset Validity NeQA is curated by applying rule-based transformations on existing QA datasets. To ensure the dataset quality, we carefully design and verify the transformation rules through manual inspection of the transformed examples. We found that the transformation rules generally work well and only removed a few questions due to grammatical errors after adding negation. Furthermore, as part of the submission for the inverse scaling prize (McKenzie et al., 2022), the organizers have done crowdsourcing experiments to demonstrate the validity of our dataset. Specifically, they validated labels by crowdsourcing 50 random examples from NeQA, and found the average agreement between workers and gold labels is 100% with no confusing questions. ## D.4 Subset Selection The NeQA dataset is composed of five subsets: ConceptNet, GoogleRE, SQUAD, TREx, and OBQA. For the purpose of this analysis, we only include ConceptNet, TREx, and OBQA. Our goal is to examine the scaling trends, so we aim for steeper scaling. However, GPT-3 does not exhibit strong positive scaling and inverse scaling on the original and negated GoogleRE and SQUAD datasets (Figure 7), so these subsets were not included in the analysis. Furthermore, these scaling trends of NeQA subsets provide additional verification of our task decomposition analysis. When language models fail to understand negation (Task 2), a stronger positive scaling on the original dataset (Task 1) causes a stronger inverse scaling on the negated data (Composed Task). ## D.5 Negation Category In Figure 8 (left), we find that negating by adding "un-/in-" prefix to a word or negating modal verbs | Dataset | Original Data | Transformed Data | | |------------------------------------------------------|------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|---------------| | NegatedLAMA | Original Question: Child wants? | Question: Child does not want? | (take Negated | | Question) | | | | | (including ConceptNet, GoogleRE, | Original Answer: love | Choices: [love, marriage] (take Original Answer and misprime in Misprimed Question) | | | SQuAD, TREx) | Negated Question: Child does not want? | Answer: marriage (take misprime in Misprimed Question) | | | Misprimed Question: Marriage? Child wants? | | | | | OBQA | Original Question: Pushing on a pedal is an example of? 
| Question: Pushing on a pedal isn't an example of? (apply rule-based transformation on Original Question) | | | Original Choices: [patching, force, practice, speed] | Choices: [force, speed] (take Original Answer and sample one incorrect Original Choices) | | | | Original Answer: force | Answer: speed (take sampled incorrect Original Choices) | | | Table 3: Data generation process of NeQA dataset. NeQA is constructed by transforming two existing QA datasets: NegatedLAMA and OBQA. All the fields of the original and transformed questions are shown. (e.g., "can" to "cannot") does not show clear inverse scaling in zero-shot prompting. We suspect that the difference is because these negation categories replace a word instead of adding an additional word "not". We leave the further analysis to future work. ## D.6 Wrong Choice This experiment aimed to understand whether more confusing choices will change the scaling (Figure 8 (middle)). For example, given the question "Apple is not made by", the wrong choice can be "Microsoft" (high-ranked, more confusing), or "air" (low-ranked, less confusing), or a random word "China". We find that the wrong choice has little impact on the scaling trends. ## D.7 Mispriming Following Kassner and Schütze (2020), we put the wrong choice (i.e., the correct choice before negation) before the question (e.g., change "iPhone is not made by" to "Apple? iPhone is not made by"). Mispriming makes inverse scaling stronger on negated questions in zero-shot prompting setting (Figure 8 (right)). Interestingly, we also note a phase change happens in small-size models. While this is a very interesting finding, mispriming might not be frequent in real-world applications of language models, so we are not including this in the NeQA dataset. | Question | Choices | Answer | |----------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------|----------| | Eden Roc Renaissance Hotel Miami Beach is not a? | [hotel, download] | 1 | | The Betty Hutton Show was not originally aired on? | [youtube, cbs] | 0 | | Hot 8 Brass Band does not play ___ music? | [jazz, friedrich] | 1 | | Crying is not part of? | [life, love] | 1 | | Melbourne International Film Festival is not located in? | [april, melbourne] | 0 | | Mitsubishi G4M is not developed by? | [mitsubishi, vinci] | 1 | | A man grows flowers in the winter using his greenhouse. The flowers blossom there, despite the cold, | [they are regularly chilled, they are carefully protected] | 0 | | freezing weather and lack of sunshine, not because The main reason people want to use wind as an energy source is not because there is | [make cookies, an everlasting supply] | 0 | | A light isn't refracted when | [it shines at a mirror, it shines at a gem] | 0 | | John Selden did not die in? | [manila, london] | 0 | | adenine does not consist of? | [chocolate, nitrogen] | 0 | | Kolkata Book Fair is not located in? | [chinatown, kolkata] | 0 | | Fall is not? | [coming, fearing] | 1 | | Maurice Joly is not a ___ by profession? | [barrister, lawyer] | 0 | | Sojitra is not a? | [village, town] | 1 | | Avoiding the use of fossil fuel is not an example of | [Ecosystem Vulnerability, Conservationism] | 0 | | The native language of Jean-Paul Sartre is not? | [french, greek] | 1 | | Birds cannot? 
| [fly, ulster] | 1 | | Worms are excellent decomposers not because | [worms enjoy eating dirt and decay, they | 0 | | enjoy eating carcasses and expel waste back to earth] | | | | The capital of Azerbaijan is not? | [baku, kidnapped] | 1 | | An electric car uses less gasoline than a regular car not because | [gas is unable to be used, power is prioritized over gas] | 0 | | chicken nugget does not consist of? | [beef, chicken] | 0 | | The native language of Eugene Kaspersky is not? | [russian, korean] | 1 | | Digital Audio Tape is not owned by? | [sony, parsons] | 1 | | Lizards do not have a? | [tail, song] | 1 | | When the moon, sun, and earth line up, tides aren't at their | [Happy, Max] | 0 | | Jean Nicolas Pierre Hachette did not die in? | [navarro, paris] | 0 | | Rottendorf is not located in? | [germany, croatia] | 1 | | David Sainsbury, Baron Sainsbury of Turville never worked in? | [london, director] | 1 | | ground beef is not a subclass of? | [beef, pigs] | 1 | | President of Ireland is not a legal term in? | [irish, ireland] | 0 | | The official language of Ruovesi is not? | [wrestled, finnish] | 0 | | A creature can hide in plain sight from predators not because | [it misdirects, it burns] | 1 | | Information is not a? | [beauty, commodity] | 0 | | The native language of Jules Roy is not? | [french, polish] | 1 | | disruptive coloration is not a subclass of? | [insects, camouflage] | 0 | | Organisms cannot? | [teatro, die] | 0 | | Abdel Sattar Sabry does not play in ___ position? | [wing, midfielder] | 0 | | Some animals get caught easily by other animals not because they are slow while the predators are | [manipulative, speedy] | 0 | | Ano Liosia Olympic Hall is not located in? | [strikeouts, athens] | 0 | | Esko Aho does not work for? | [nintendo, nokia] | 0 | | Riga and ___ are not twin cities? | [tallinn, augsburg] | 1 | | OKK Beograd is not located in? | [homosexual, belgrade] | 0 | | Canjuers is not located in? | [bordeaux, france] | 0 | | Child does not want? | [lab, love] | 0 | | Tungsten is a metal which is found in large amounts in filaments, which are used in incandescent light | [the filament glows warmly, light bulbs get very warm] | 1 | | bulbs. These light bulbs work not because Apple Thunderbolt Display is not developed by? | [intel, apple] | 0 | | Funkadelic does not play ___ music? | [strips, funk] | 0 | | Morice River is not a? | [muscle, river] | 0 | | one inherited characteristic of fur isn't it's | [hue, weight] | 1 | | Table 4: (1/2) Data samples from NeQA we used for scaling trends and task decomposition analyses. Note: labels | | | Table 4: (1/2) Data samples from NeQA we used for scaling trends and task decomposition analyses. Note: labels are balanced. | Question | Choices | Answer | |----------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------|----------| | Salvadoran Football Federation is not a member of? | [fifa, uefa] | 1 | | Biblioteka Dlya Chteniya was not written in? | [russian, helpful] | 1 | | Pete Lesperance does not play? | [guitar, trombone] | 1 | | Ohio House of Representatives is not a legal term in? | [delaware, ohio] | 0 | | Jonathan Haidt does not work in the field of? | [music, psychology] | 0 | | Internet Archive was not written in? | [english, polish] | 1 | | Teres Ridge is not located in? | [antarctica, europe] | 1 | | Joseph Clay was not born in? 
| [ponce, philadelphia] | 0 | | One of the reasons some species go extinct is not because predators | [go dancing with them, murder too many of them] | 0 | | Cans cannot be? | [impressed, recycled] | 0 | | Lev Naumov does not play? | [piano, harp] | 1 | | Tom Cruise is not affiliated with the ___ religion? | [scientology, islam] | 1 | | Anti-Oedipus does not work in the field of? | [philosophy, algebra] | 1 | | Two wheeled conveyances are not held together by | [glue, helically ridged pins] | 0 | | Louisiana Voodoo was not founded in? | [nagoya, louisiana] | 0 | | Animator.ru was not created in? | [russia, argentina] | 1 | | Dan Sealey does not play? | [guitar, pipe organ] | 1 | | Football Association of Brunei Darussalam is not a member of? | [fifa, application] | 1 | | The leading cause of soil and rock erosion is not | [NaCl, H2O] | 0 | | Cyprus is not an? | [accessory, island] | 0 | | Cape Town and ___ are not twin cities? | [johannesburg, jerusalem] | 1 | | The Earth's closest heat source isn't | [our celestial fireball, gamma rays] | 1 | | Cars are big polluters not because they release toxins from the gas into the air making it | [space, unhealthy to breathe] | 0 | | Italy does not maintain diplomatic relations with? | [lebanon, insights] | 1 | | Heribert of Cologne has not the position of? | [mayor, archbishop] | 0 | | Parippally is not located in? | [propagation, india] | 0 | | Which of these isn't less likely to cause pollution | [chopper, mountain bike] | 0 | | National Film Board of Canada does not work in the field of? | [animation, art] | 1 | | fermented milk product does not consist of? | [syntax, milk] | 0 | | Quartz crystals aren't made up of | [hexagons, majic] | 1 | | nickel silver does not consist of? | [shrine, copper] | 0 | | Rats cannot? | [sadness, bite] | 0 | | A landslide is not likely to occur on a slope as | [downward movement is facilitated, there is more debris] | 1 | | Tire is not ___ wheels? | [rubber, culminated] | 1 | | A person wanting to find a live bear in a forest will have difficulty not because bears | [are friendly, avoid humanity] | 0 | | Corn and wheat aren't among the favorite foods of | [humans, fish] | 1 | | Percussion instruments is not made of? | [wood, temple] | 1 | | Chevrolet Trax is not produced by? | [specialist, chevrolet] | 0 | | The original language of The Bitch is not? | [english, breton] | 1 | | Bill Gates does not work for? | [microsoft, ibm] | 1 | | Rajesh Khanna not used to communicate in? | [hindi, yiddish] | 1 | | Edinburgh Airport is not named after? | [edinburgh, bourne] | 1 | | William of Hirsau has not the position of? | [abbot, republic] | 1 | | Austrian Football Association is not a member of? | [lecturer, fifa] | 0 | | Daniele Franceschini was not born in? | [telegraph, rome] | 0 | | The native language of Louis Barthou is not? | [french, mortar] | 1 | | A thing's position is not altered when | [thing feels moved emotionally, the thing adjusts its location] | 0 | | Name is not proper? | [noun, address] | 1 | | Which isn't recyclable | [Styrofoam food container, soda can] | 0 | | The Jazztet does not play ___ music? | [jazz, classical] | 1 | | Table 5: (2/2) Data samples from NeQA we used for scaling trends and task decomposition analyses. Note: labels | | | Table 5: (2/2) Data samples from NeQA we used for scaling trends and task decomposition analyses. Note: labels are balanced. | Zero-Shot | The following are multiple choice questions (with answers) about common sense. Question: Child does not want? A. love B. 
marriage Answer: | |---------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Zero-Shot w/ Hint | The following are multiple choice questions (with answers) about common sense. Note that if there is a negation in the question, we should choose the wrong answer to the original question. Question: Child does not want? A. love B. marriage Answer: | | Few-Shot w/ CoT | The following are multiple choice questions (with answers) about common sense. Question: If a cat has a body temp that is below average, it isn't in A. danger B. safe ranges Answer: Let's think step-by-step. First, let's answer non-negated question: "If a cat has a body temp that is below average, it is in?" For this non-negated question, we have A. Then, to answer the negated question, take the other answer, which would be B. So the answer is B. Question: As the barometer reading goes lower there is not a greater chance of A. sunshine B. getting wet Answer: Let's think step-by-step. First, let's answer non-negated question: "As the barometer reading goes lower there is a greater chance of?" For this non-negated question, we have B. Then, to answer the negated question, take the other answer, which would be A. So the answer is A. Question: Coral is a type of living organism which cannot be identified in A. saltwater locations that are open B. any where with liquid Answer: Let's think step-by-step. First, let's answer non-negated question: "Coral is a type of living organism which can be identified in?" For this non-negated question, we have A. Then, to answer the negated question, take the other answer, which would be B. So the answer is B. Question: Child does not want? A. love B. marriage Answer: | | Task 1 | The following are multiple choice questions (with answers) about common sense. Question: Child wants? A. love B. marriage Answer: | | Task 2 (Weaker Prompt) | Sentence 1: "Child wants love." Sentence 2: "Child does not want love." Question: The above two sentences are? A. the same B. different Answer: | | Task 2 (Stronger Prompt) | Negated sentences are different from original sentences. Sentence 1: "Child wants love." Sentence 2: "Child does not want love." 
Question: The above two sentences are? A. the same B. different Answer: | | Table 6: Specific prompts for various prompting methods and tasks. Note: for Task 2 prompts, we randomly swap | | Table 6: Specific prompts for various prompting methods and tasks. Note: for Task 2 prompts, we randomly swap labels "the same" and "different" to balance the distribution. | Family | Model | Details | |-------------------|----------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------| | ada | Original GPT-3 (350M parameters) autoregressive language model. | | | babbage | Original GPT-3 (1.3B parameters) autoregressive language model. | | | GPT-3 | curie | Original GPT-3 (6.7B parameters) autoregressive language model. | | davinci | Original GPT-3 (175B parameters) autoregressive language model. | | | ada | text-ada-001 model that involves supervised fine-tuning on human-written demonstrations. | | | babbage | text-babbage-001 model that involves supervised fine-tuning on human-written demonstrations. | | | curie | text-curie-001 model that involves supervised fine-tuning on human-written demonstrations. | | | davinci-v1 | text-davinci-001 model that involves supervised fine-tuning on human-written demonstrations. | | | davinci-v2 | text-davinci-002 model that involves supervised fine-tuning on human-written demonstrations. Derived from code-davinci-002. | | | davinci-v3 | text-davinci-003 model that involves reinforcement learning (PPO) with reward models. Derived from text-davinci-002. | | | GPT-3 Text Series | small | Cohere small v20220720 (410M parameters). | | medium | Cohere medium v20220720 (6.1B parameters). | | | Cohere | large | Cohere large v20220720 (13.1B parameters). | | xlarge | Cohere xlarge v20220609 (52.4B parameters). | | | large | Jurassic-1 Large (7.5B parameters). | | | Jurassic | grande | Jurassic-1 Grande (17B parameters) with a few tweaks to the training process. | | jumbo | Jurassic-1 Jumbo (178B parameters). | | Table 7: List of models used in this work, including 4 model families and 17 models. Note that the publicly ![17_image_0.png](17_image_0.png) available GPT-3 Text Series model APIs used in this paper differ from those described in the original InstructGPT paper (Ouyang et al., 2022), and OpenAI does not provide information on the training procedure and appropriate model sizes (Liang et al., 2022). ![17_image_1.png](17_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Page 6. ✓ A2. Did you discuss any potential risks of your work? Page 6. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Page 1. ✓ A4. Have you used AI writing assistants when working on this paper? Used ChatGPT and Grammarly to check and improve grammar. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2. ✓ B1. Did you cite the creators of artifacts you used? Section 2. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Our data will be made publicly available and can be used for research purposes. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 2. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Our dataset does not contain names or sensitive information. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 2. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 2. ## C ✓ **Did You Run Computational Experiments?** Section 3. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 3. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 2. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? We annotated the data by ourselves. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? We annotated the data by ourselves. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? We annotated the data by ourselves. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Our data does not have ethics concern. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? We annotated the data by ourselves.
khalifa-etal-2023-contrastive
Contrastive Training Improves Zero-Shot Classification of Semi-structured Documents
https://aclanthology.org/2023.findings-acl.473
We investigate semi-structured document classification in a zero-shot setting. Classification of semi-structured documents is more challenging than that of standard unstructured documents, as positional, layout, and style information play a vital role in interpreting such documents. The standard classification setting where categories are fixed during both training and testing falls short in dynamic environments where new classification categories could potentially emerge. We focus exclusively on the zero-shot learning setting where inference is done on new unseen classes. To address this task, we propose a matching-based approach that relies on a pairwise contrastive objective for both pretraining and fine-tuning. Our results show a significant boost in Macro F1 from the proposed pretraining step and comparable performance of the contrastive fine-tuning to a standard prediction objective in both supervised and unsupervised zero-shot settings.
# Contrastive Training Improves Zero-Shot Classification Of Semi-Structured Documents Muhammad Khalifa1∗, Yogarshi Vyas2†**, Shuai Wang**2, Graham Horwood2**, Sunil Mallya, Miguel Ballesteros**2 1University of Michigan, 2AWS AI Labs khalifam@umich.edu, {yogarshi,wshui,ghorwood,ballemig}@amazon.com, mallya16@gmail.com ## Abstract We investigate semi-structured document classification in a zero-shot setting. Classification of semi-structured documents is more challenging than that of standard unstructured documents, as positional, layout, and style information play a vital role in interpreting such documents. The standard classification setting where categories are fixed during both training and testing falls short in dynamic environments where new document categories could potentially emerge. We focus exclusively on the zero-shot setting where inference is done on new unseen classes. To address this task, we propose a matching-based approach that relies on a pairwise contrastive objective for both pretraining and fine-tuning. Our results show a significant boost in Macro F1 from the proposed pretraining step in both supervised and unsupervised zero-shot settings. ## 1 Introduction Textual information assumes many forms ranging from *unstructured* (e.g., text messages) to *semistructured* (e.g., forms, invoices, letters), all the way to fully structured (e.g., databases or spreadsheets). Our focus in this work is classification of semi-structured documents. A semi-structured document consists of information that is organized using a regular visual layout, and includes tables, forms, multi-columns, (nested) bulleted lists, and that is either understandable only in the context of its visual layout or that requires substantial more work to understand without the visual layout. Automatic processing of semi-structured documents comes with a unique set of challenges including a non-linear text flow (Wang et al., 2021), layout inconsistencies, and low-accuracy optical character recognition. Prior work has shown that integrating the two-dimensional layout information ∗Work done while at AWS AI Labs †Corresponding author of such documents is critical in models for analyzing such documents (Xu et al., 2020, 2021; Huang et al., 2022; Appalaraju et al., 2021). Due to these challenges, methods for unstructured document classification, such as static word vectors (Socher et al., 2013) and standard pretrained language models (Devlin et al., 2019; Reimers and Gurevych, 2019; Liu et al., 2019) perform poorly with semi-structured inputs as they model text in a one-dimensional space and ignore information about document layout and style (Xu et al., 2020). Past work on semi-structured document classification (Harley et al., 2015; Iwana et al., 2016; Tensmeyer and Martinez, 2017; Xu et al., 2020, 2021) has focused exclusively on the *full-shot* setting, where the target classes are fixed and identical across training and inference, neglecting the zero-shot setting (Xian et al., 2018), which requires generalization to unseen classes during inference. Our work addresses zero-shot classification of semi-structured documents in English using the matching framework, which has been used for many tasks on unstructured text (Dauphin et al., 2014; Nam et al., 2016; Pappas and Henderson, 2019; Vyas and Ballesteros, 2021; Ma et al., 2022). Under this framework, a matching (similarity) metric between documents and their assigned classes is maximized in a joint embedding space. 
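As a minimal sketch of inference under this matching framework (not the paper's implementation): assume a document encoder and a label encoder that already map their inputs into the shared embedding space; the encoder interfaces and names below are placeholders.

```python
import torch

def zero_shot_classify(document, class_names, doc_encoder, label_encoder):
    """Pick the class whose name matches the document best in the joint space.
    doc_encoder / label_encoder are assumed to return pooled embeddings of
    equal dimensionality."""
    with torch.no_grad():
        d = doc_encoder(document)                                       # (hidden,)
        c = torch.stack([label_encoder(name) for name in class_names])  # (C, hidden)
        scores = c @ d                                                  # dot-product matching scores
    return class_names[int(scores.argmax())]
```

Because the label side is just encoded text, swapping in a previously unseen label set only changes `class_names`, which is what makes the setup zero-shot.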
We extend this matching framework with two enhancements. First, we use a pairwise contrastive objective (Rethmeier and Augenstein, 2020; Radford et al., 2021; Gunel et al., 2021) that increases the similarity between documents and their ground-truth labels, and decreases it for incorrect pairs of documents and labels. We augment the textual representations of documents with layout features representing the positions of tokens on the page to capture the twodimensional nature of the documents. Second, we propose an unsupervised contrastive pretraining procedure to warm up the representations of documents and classes. In summary, (i) we study zero-shot classification of semi-structured documents, which, to the best of our knowledge, has not been explored before. **(ii)** we use a pairwise contrastive objective to both pretrain and fine-tune a matching model for the task. This technique uses a layout-aware document encoder and a regular text encoder to maximize the similarity between documents and their ground-truth labels. **(iii)** Using this contrastive objective, we propose an unsupervised pretraining step with pseudo-labels (Rethmeier and Augenstein, 2020) to initialize document and label encoders. The proposed pretraining step improves F1 scores by 9 and 19 points in supervised and unsupervised zero-shot settings respectively, compared to a setup without this pretraining. ## 2 Approach This section describes our proposed architecture (§ 2.1), pretrained model (§ 2.2), as well as the contrastive objective used for pretraining (§ 2.3) and fine-tuning (§ 2.4). ## 2.1 Model Our goal is to learn a matching function between documents and labels such that similarity between a document and its gold label is maximized compared to other labels, which can be seen as an instance of metric learning (Xing et al., 2002; Kulis et al., 2012; Sohn, 2016). This requires encoding documents and class names1into a joint documentlabel space (Ba et al., 2015; Zhou et al., 2019; Chen et al., 2020; Hou et al., 2020). In this work, documents and class names are of different naturedocuments are semi-structured (§ 1), while class names are one or two-word fragments of text. We use two encoders to account for this difference: a document encoder Φdoc suitable for semistructured documents, and a label (class) encoder Φ*label* suitable for the natural language representations of the class labels. Φ*label* is simply a vanilla pretrained BERTBASE model (Devlin et al., 2019). Φdoc, as in prior work (Xu et al., 2020; Lockard et al., 2020), is a pretrained language model that encodes the text and the layout of the document using coordinates of each token. The next section explains this model, LayoutBERT, in detail. We choose this model for its simplicity, but our proposed approach can be combined with more sophisticated ![1_image_0.png](1_image_0.png) document encoders that incorporate layout and visual information in different ways (Huang et al., 2022; Xu et al., 2021; Appalaraju et al., 2021). ## 2.2 Layout**Bert** LayoutBERT is a 6-layer Transformer based on BERTBASE (Devlin et al., 2019) and is pretrained using masked language modeling on a large collection of semi-structured documents (§ 3). Unlike prior work, LayoutBERT has a simpler architecture that decreases model footprint while maintaining accuracy. 
Specifically, there are three main architectural differences between LayoutBERT and LayoutLM, which is the most comparable architecture in the literature (Xu et al., 2020): (a) LayoutLM uses 12 transformer layers, while LayoutBERT uses only 6 layers; (b) LayoutLM uses four positions per token, namely the upper-left and bottom-right coordinates, while LayoutBERT uses only two positions, viz. the centroid of the token bounding box; (c) unlike LayoutLM, LayoutBERT does not use an image encoder to obtain CNN-based visual features.2 2The results in Xu et al. (2020) show that image features are not always useful. To keep things simple, we do not include the CNN component in our model. ## 2.3 Unsupervised Contrastive Pretraining Φ*label* and Φdoc are models that have been pretrained independently. To encourage these models to produce similar representations for documents and their labels, we continue pretraining Φ*label* and Φdoc via an unsupervised procedure based on a pairwise contrastive objective. The unsupervised objective can learn from large amounts of unlabeled semi-structured documents. This also allows us to directly use the pretrained encoders in an unsupervised zero-shot setting (§ 3.3.1). Since we do not assume access to ground-truth labels for this step, our pretraining procedure relies solely on self-supervision via *pseudo-labels* (Rethmeier and Augenstein, 2020). These pseudo-labels are generated by sampling a contiguous block of tokens from the document with a length drawn from a shifted geometric distribution. A pseudo-label extracted from a document is treated as the positive label for that document and is encoded using Φ*label*. We now describe our contrastive objective, which is based on the multi-class n-pair loss (Sohn, 2016; Radford et al., 2021). Let B be a training batch that consists of training documents D and their pseudo-labels L, such that $D = (d_1, d_2, ..., d_{|B|})$ and $L = (l_1, l_2, ..., l_{|B|})$. Let Φdoc and Φ*label* be the document and label encoders, respectively. We start by encoding each document and pseudo-label in the batch and then computing a matching matrix $M^B \in \mathbb{R}^{|B| \times |B|}$ of pairwise dot products between every document–label pair, such that $M^B_{ij} = \Phi_{label}(l_i)^{\top} \cdot \Phi_{doc}(d_j)$. Our objective is to increase the value of the diagonal elements $M^B_{ij}$, where $i = j$, as compared to all other elements. More precisely, the loss function for a batch is a symmetric loss, $\mathcal{L}^B$, that can be expressed with the equation: $$\mathcal{L}^{B}=\frac{1}{2}\left[\mathcal{L}_{row}^{B}+\mathcal{L}_{col}^{B}\right].\tag{1}$$ Here, $\mathcal{L}^B_{row}$ and $\mathcal{L}^B_{col}$ are the per-batch row-wise and column-wise losses, respectively, with $$\mathcal{L}_{row}^{B}=\sum_{i=1}^{|B|}\left[-\log\left(\exp(M_{ii}^{B})\right)+\log\left(\sum_{j=1}^{|B|}\exp(M_{ij}^{B})\right)\right].\tag{2}$$ The first term in Eq. 2 maximizes the diagonal elements, while the second term minimizes the off-diagonal elements. The column-wise loss is the same with i and j swapped. We directly optimize the raw dot products rather than cosine similarity, as we observed dot products to perform much better, which also agrees with Karpukhin et al. (2020). ## 2.4 Contrastive Fine-Tuning For the supervised zero-shot setting (§ 3.3.2), we fine-tune the model using the same objective as in the pretraining step (Equation 1), except that the labels $L = (l_1, l_2, ..., l_{|B|})$ for a batch B are ground-truth labels and not pseudo-labels.
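A compact PyTorch sketch of this objective is given below. The loss follows Equations 1–2 up to a constant factor (cross-entropy averages over the batch instead of summing), and the pseudo-label sampler follows the shifted-geometric block sampling described above; everything else — function names, tensor shapes, and the use of NumPy for sampling — is an assumption for illustration, not the authors' released code.

```python
import numpy as np
import torch
import torch.nn.functional as F

def sample_pseudo_label(tokens, p=1 / 20, rng=np.random):
    """Draw a block length l ~ Geometric(p) (support starts at 1) and return a
    contiguous span of tokens to serve as the document's pseudo-label."""
    l = min(int(rng.geometric(p)), len(tokens))
    start = rng.randint(0, len(tokens) - l + 1)
    return tokens[start:start + l]

def symmetric_contrastive_loss(doc_emb, label_emb):
    """Eq. 1-2 as an in-batch multi-class n-pair loss over raw dot products.
    doc_emb, label_emb: (B, hidden) outputs of the two encoders for one batch."""
    logits = label_emb @ doc_emb.T                  # M^B, with M_ij = label_i . doc_j
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_row = F.cross_entropy(logits, targets)     # pulls up the diagonal, row-wise
    loss_col = F.cross_entropy(logits.T, targets)   # the same, column-wise
    return 0.5 * (loss_row + loss_col)
```

Fine-tuning for the supervised zero-shot setting reuses the same loss, with encoded ground-truth class names taking the place of the pseudo-label embeddings.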
## 3 Experiments And Results 3.1 Data We evaluate our approach on the RVL-CDIP dataset (Harley et al., 2015), which consists of 400K documents balanced across 16 classes such as letter, advertisement, scientific report, form, etc. Since zero-shot performance can vary depending on which classes are used for train and test, we follow previous work (Ye et al., 2020) and create four zero-shot splits of the data with non-overlapping test classes. Thus, each split has 8 training classes (200K documents), 4 validation classes (100K documents), and 4 test classes (100K documents).3 Our document encoder is pretrained on documents from CommonCrawl (see Appendix B for more details).4 While this pretraining corpus is different from the one used for LayoutLM, our objective is not to compare directly with this model but to explore zero-shot classification. Our contrastive pretraining corpus consists of 800K documents sampled from this pretraining corpus. We first sample l ∼ *Geometric*( 1 20 ), and then sample a block of l tokens from each document to obtain a pseudo-label for that document. We run contrastive pretraining for 50K steps with batch size of 256. ## 3.2 Experimental Setup LayoutBERT is a 6-layer model initialized using BERTBASE weights and further pretrained using the MLM loss with layout information for 50K steps with a batch size of 2048 and a peak learning rate of 10−4. Unlike LayoutLM, where the extra position embeddings are initialized from scratch, we initialize them from BERT positional embeddings, which we found to speed up convergence. We used dynamic subtoken masking (Liu et al., 2019) with p*mask* = 0.15 and p*replace* = 0.80. The representation of the [CLS] token is used as the encoding of input documents and an affine layer with a dimension of 768 is applied to the output of both encoders. We fine-tune the matching model on the data from the train classes for 30 epochs with a batch size of 40 and a learning rate of 3 × 10−5. The model with the best macro F1 on the validation set is used for evaluation on the held out test set. ## 3.3 Results We experiment with two settings - unsupervised zero-shot, and supervised zero-shot. In the former, 3The exact classes used for each split are in Appendix A. 4https://commoncrawl.org/ Method I II III IV Valid Test Valid Test Valid Test Valid Test **Avg.** BERT (doc and label) 12.05 10.64 13.77 14.08 10.89 13.28 13.94 12.25 12.61 LayoutBERT (doc), BERT (label) 12.05 **30.64** 16.77 22.04 **31.11** 17.32 21.75 12.04 20.47 CPT, LayoutBERT (doc), BERT (label) **50.5** 21.25 **24.60 61.36** 21.65 24.58 61.50 51.57 **39.63** Table 1: Unsupervised zero-shot performance (Macro F1) on 4 splits of RVL-CDIP. CPT: Contrastive Pretraining. Method I II III IV Valid Test Valid Test Valid Test Valid Test **Avg.** Cross-entropy FT 34.76 25.33 **35.64** 23.29 11.67 28.84 29.68 36.75 28.76 Contrastive FT 37.35 25.76 32.55 26.05 18.14 27.63 29.86 32.74 28.25 CPT + Standard FT 48.24 **26.97** 30.45 37.81 **27.20** 28.11 48.82 **46.09** 36.71 CPT + Contrastive FT **49.68** 25.82 30.31 **44.44** 20.80 **30.43 51.26** 45.07 **37.23** no fine-tuning is involved and all models are directly used for inference. In the latter, all models are fine-tuned on data from classes different than those present in the test set. Thus, the latter is strictly more challenging. ## 3.3.1 Unsupervised Zero-Shot We start with the more challenging unsupervised setup and compare three models (Table 1). 
The first model uses a vanilla pretrained BERTBASE as both the document and label encoders. The second model replaces the BERTBASE document encoder with the LayoutBERT model. For these two models, we remove the affine layer after both encoders (§ 3.2) since, in the absence of pretraining/fine-tuning, they will not be trained. The third model uses the same components as the second model but is pretrained using the unsupervised contrastive loss (§ 2.3). The results yield three key observations. First, the vanilla BERT model performs the worst, with an F1 score of 13. This is unsurprising, as BERT does not capture any layout information. Second, the value of layout information can be verified by replacing the BERTBASE document encoder with LayoutBERT. This improves the average F1 by ~8 points. Finally, contrastive pretraining (CPT) is critical to producing a better initialization for the encoders, and it improves the average performance of the previous model by ~19 F1 points. ## 3.3.2 Supervised Zero-Shot Next, we turn to the supervised zero-shot setup, where models are fine-tuned on data from classes different than those in the test set. We only experiment with the LayoutBERT (doc), BERT (label) setup since it performed the best in unsupervised settings. Table 2 shows the Macro F1 with our in-batch contrastive training objective as well as a standard cross-entropy loss (Dauphin et al., 2014; Ye et al., 2020). We also show the fine-tuning performance with contrastive pretraining (§ 2.3). We observe that the in-batch contrastive objective yields comparable F1 to the cross-entropy loss on average (with and without pretraining). However, the in-batch loss also has higher variance across different runs compared to the cross-entropy loss,5 possibly due to the stochastic nature of in-batch contrastive training. 5Tables 4 and 5 in Appendix C show means and standard deviations with three random seeds. Experiments with more random seeds did not yield any meaningful differences. Crucially, though, we observe a strong F1 boost in almost all cases with contrastive pretraining, and in some cases as much as ~21 F1 points. This reemphasizes the importance of pretraining in producing similar representations for related documents and labels. Finally, comparing Tables 1 and 2 shows that zero-shot performance is better in the unsupervised case than in the supervised case. This is likely due to the fact that, in the latter, the model is fine-tuned towards a specific set of document classes (i.e., those present in the training/validation splits), which hinders generalization to unseen inference classes. More sophisticated approaches (Finn et al., 2017; Nichol et al., 2018) can potentially improve the supervised setup, but we leave this to future work. ## 4 Conclusion This work explores zero-shot classification of semi-structured documents. We proposed two contrastive techniques for pretraining and fine-tuning
This work does not experiment with the variety of encoding strategies in the literature that combines textual, visual, and layout information (Appalaraju et al., 2021; Xu et al., 2021; Huang et al., 2022). It is likely that richer document representations derived from these diverse encoders will further push the limits of zero-shot classification when combined with our proposed unsupervised contrastive pretraining procedure. Second, results in this paper are on a single dataset, i.e. the RVL-CDIP dataset. While we mitigate this to a large extent by creating four nonoverlapping test splits (see §3.1 and Appendix A), results on more datasets might yield more useful insights. In practice, the lack of datasets for this task (of semi-structured document classification) is what makes this exploration difficult and might require creation of new resources ## Acknowledgements We thank our colleagues at AWS AI Labs and the ACL reviewers who have helped improve the paper through their feedback. ## References Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R. Manmatha. 2021. Docformer: End-to-end transformer for document understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 993– 1003. Jimmy Ba, Kevin Swersky, Sanja Fidler, and Ruslan Salakhutdinov. 2015. Predicting deep zero-shot convolutional neural networks using textual descriptions. Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, and Benjamin Van Durme. 2020. Reading the manual: Event extraction as definition comprehension. Yann N. Dauphin, Gökhan Tür, Dilek Hakkani-Tür, and Larry P. Heck. 2014. Zero-shot learning and clustering for semantic utterance classification. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pages 1126–1135. PMLR. Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2021. Supervised contrastive learning for pre-trained language model fine-tuning. In *International Conference on Learning Representations*. Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In 2015 13th International Conference on Document Analysis and Recognition (ICDAR), pages 991–995. IEEE. Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, and Ting Liu. 2020. Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1381– 1393, Online. Association for Computational Linguistics. Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. Layoutlmv3: Pre-training for document ai with unified text and image masking. 
Brian Kenji Iwana, Syed Tahseen Raza Rizvi, Sheraz Ahmed, Andreas Dengel, and Seiichi Uchida. 2016. Judging a book by its cover. arXiv preprint arXiv:1610.09204. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6769–6781. Association for Computational Linguistics. Brian Kulis et al. 2012. Metric learning: A survey. *Foundations and trends in machine learning*, 5(4):287–364. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Colin Lockard, Prashant Shiralkar, Xin Luna Dong, and Hannaneh Hajishirzi. 2020. ZeroShotCeres: Zeroshot relation extraction from semi-structured webpages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8105–8117, Online. Association for Computational Linguistics. Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity descriptions. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3449–3460, Florence, Italy. Association for Computational Linguistics. Jie Ma, Miguel Ballesteros, Srikanth Doss, Rishita Anubhai, Sunil Mallya, Yaser Al-Onaizan, and Dan Roth. 2022. Label semantics for few shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1956– 1971, Dublin, Ireland. Association for Computational Linguistics. Jinseok Nam, Eneldo Loza Mencía, and Johannes Fürnkranz. 2016. All-in text: Learning document, label, and word representations jointly. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 30. Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. Nikolaos Pappas and James Henderson. 2019. Gile: A generalized input-label embedding for text classification. *Transactions of the Association for Computational Linguistics*, 7:139–155. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. Nils Rethmeier and Isabelle Augenstein. 2020. Dataefficient pretraining via contrastive self-supervision. arXiv preprint arXiv:2010.01061. Richard Socher, Milind Ganjoo, Christopher D. Manning, and Andrew Y. Ng. 2013. Zero-shot learning through cross-modal transfer. In *Advances in Neural Information Processing Systems 26: 27th Annual* Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 935–943. Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In *Proceedings of the 30th International Conference on Neural* Information Processing Systems, pages 1857–1865. Chris Tensmeyer and Tony Martinez. 2017. 
Analysis of convolutional neural networks for document image classification. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), volume 1, pages 388–393. IEEE. Yogarshi Vyas and Miguel Ballesteros. 2021. Linking entities to unseen knowledge bases with arbitrary schemas. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 834–844, Online. Association for Computational Linguistics. Zilong Wang, Yiheng Xu, Lei Cui, Jingbo Shang, and Furu Wei. 2021. LayoutReader: Pre-training of text and layout for reading order detection. In *Proceedings of the 2021 Conference on Empirical Methods* in Natural Language Processing, pages 4735–4744, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Yongqin Xian, Christoph H Lampert, Bernt Schiele, and Zeynep Akata. 2018. Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly. IEEE transactions on pattern analysis and machine intelligence, 41(9):2251–2265. Eric Xing, Michael Jordan, Stuart J Russell, and Andrew Ng. 2002. Distance metric learning with application to clustering with side-information. *Advances in* neural information processing systems, 15:521–528. Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. 2021. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2579–2591, Online. Association for Computational Linguistics. Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. Layoutlm: Pre-training of text and layout for document image understanding. In *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data* Mining, pages 1192–1200. Zhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, SuHang Zheng, Feng Wang, Jun Zhang, and Huajun Chen. 2020. Zero-shot text classification via reinforced self-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3014–3024. Ben Zhou, Daniel Khashabi, Chen-Tse Tsai, and Dan Roth. 2019. Zero-shot open entity typing as typecompatible grounding. ## A Data Splits As stated in section 3, we split the RVL-CDIP dataset into four splits with non-overlapping test classes. Table 3 shows the classes used in each split. ## B **Pre-Training Data From Common Crawl** We build our pre-training corpus by first extracting all documents from CommonCrawl with a '.pdf' extension. We then remove duplicate documents based on the MD5 hash using fdupes.6. The resulting documents are then passed through PDF-PLUMBER7to extract both the text as well as the co-ordinates of the tokens in the documents, and any documents that cannot be processed byPDF-PLUMBER are discarded. We analyzed a sample of the crawled documents and found a large amount of structured information in the documents, so we use all documents at this stage without additional filtering. This leaves us with 2.3 million documents with approximately 850 million tokens. ## C Supervised Zero-Shot Results Tables 4 and 5 shows the full results of the supervised zero-shot finetuning with macro F1 means and standard deviations across three different runs. 
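(Returning briefly to the pre-processing pipeline of Appendix B: the text-plus-coordinate extraction step can be sketched with pdfplumber as below. Only `pdfplumber.open` and `extract_words` are real library calls; the function name and the centroid computation are our illustration of the per-token positions LayoutBERT consumes, not the authors' script.)

```python
import pdfplumber

def extract_tokens_with_centroids(pdf_path):
    """Return (token, (x, y)) pairs for every word in a PDF; unparseable
    documents are discarded upstream, as described in Appendix B."""
    records = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            for w in page.extract_words():       # dicts with 'text', 'x0', 'x1', 'top', 'bottom'
                cx = (w["x0"] + w["x1"]) / 2.0   # LayoutBERT keeps only the bounding-box
                cy = (w["top"] + w["bottom"]) / 2.0  # centroid, i.e. two positions per token
                records.append((w["text"], (cx, cy)))
    return records
```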
While in-batch contrastive fine-tuning outperforms the standard loss in many cases, we can see that, in general, the contrastive loss exhibits higher F1 variance. For example, in Table 4, the standard deviation when evaluating on the test set of split II is 10.28, which is very high.

| Split | Train Classes | Val Classes | Test Classes |
|-------|---------------|-------------|--------------|
| I | letter, form, email, handwritten, advertisement, scientific report, scientific publication, specification | file folder, news article, budget, invoice | presentation, questionnaire, resume, memo |
| II | file folder, news article, budget, invoice, presentation, questionnaire, resume, memo | letter, form, email, handwritten | advertisement, scientific report, scientific publication, specification |
| III | advertisement, scientific report, scientific publication, specification, file folder, news article, budget, invoice | presentation, questionnaire, resume, memo | letter, form, email, handwritten |
| IV | presentation, questionnaire, resume, memo, letter, form, email, handwritten | advertisement, scientific report, scientific publication, specification | file folder, news article, budget, invoice |

Table 3: Classes used in each split of the RVL-CDIP dataset.

| | Split I Valid | Split I Test | Split II Valid | Split II Test |
|----------------------|---------------|---------------|----------------|----------------|
| Standard FT | 34.76 ± 6.75 | 25.33 ± 2.40 | 35.64 ± 2.25 | 23.29 ± 2.92 |
| Contrastive FT | 37.35 ± 2.34 | 25.76 ± 1.70 | 32.55 ± 1.03 | 26.05 ± 2.78 |
| CPT + Standard FT | 48.24 ± 3.08 | 26.97 ± 3.10 | 30.45 ± 1.05 | 37.81 ± 5.36 |
| CPT + Contrastive FT | 49.68 ± 0.95 | 25.82 ± 1.96 | 30.31 ± 0.99 | 44.44 ± 10.28 |

Table 4: Supervised zero-shot performance (Macro F1) on splits I and II of the RVL-CDIP dataset. We show the mean and standard deviations across 3 runs with different seeds.

| | Split III Valid | Split III Test | Split IV Valid | Split IV Test |
|----------------------|-----------------|-----------------|-----------------|----------------|
| Standard FT | 11.67 ± 0.98 | 28.84 ± 1.84 | 29.68 ± 7.03 | 36.75 ± 3.32 |
| Contrastive FT | 18.14 ± 1.37 | 27.63 ± 3.91 | 29.86 ± 4.55 | 32.74 ± 2.33 |
| CPT + Standard FT | **27.20** ± 4.70 | 28.11 ± 1.55 | 48.82 ± 1.88 | **46.09** ± 2.10 |
| CPT + Contrastive FT | 20.80 ± 0.40 | **30.43** ± 0.71 | **51.26** ± 2.19 | 45.07 ± 5.27 |

Table 5: Supervised zero-shot performance (Macro F1) on splits III and IV of the RVL-CDIP dataset. We show the mean and standard deviations across 3 runs with different seeds.

## ACL 2023 Responsible NLP Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 5 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3.1 ✓ B1. Did you cite the creators of artifacts you used? 3.1 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 3.1 ✗ B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Data used as in prior work ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Data used as in prior work ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 3.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3.2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 3.3 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. did not use existing packages D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
li-etal-2023-extracting
Extracting Shopping Interest-Related Product Types from the Web
https://aclanthology.org/2023.findings-acl.474
Recommending a diversity of product types (PTs) is important for a good shopping experience when customers are looking for products around their high-level shopping interests (SIs) such as hiking. However, the SI-PT connection is typically absent in e-commerce product catalogs and expensive to construct manually due to the volume of potential SIs, which prevents us from establishing a recommender with easily accessible knowledge systems. To establish such connections, we propose to extract PTs from the Web pages containing hand-crafted PT recommendations for SIs. The extraction task is formulated as binary HTML node classification given the general observation that an HTML node in our target Web pages can present one and only one PT phrase. Accordingly, we introduce TrENC, which stands for Tree-Transformer Encoders for Node Classification. It improves the inter-node dependency modeling with modified attention mechanisms that preserve the long-term sibling and ancestor-descendant relations. TrENC also injects SI into node features for better semantic representation. Trained on pages regarding limited SIs, TrEnc is ready to be applied to other unobserved interests. Experiments on our manually constructed dataset, WebPT, show that TrENC outperforms the best baseline model by 2.37 F1 points in the zero-shot setup. The performance indicates the feasibility of constructing SI-PT relations and using them to power downstream applications such as search and recommendation.
# Extracting Shopping Interest-Related Product Types From The Web Yinghao Li1∗, Colin Lockard2, Prashant Shiralkar2**, Chao Zhang**1 1Georgia Institute of Technology, Atlanta, USA 2Amazon, Seattle, USA 1{yinghaoli,chaozhang}@gatech.edu 2{clockard,shiralp}@amazon.com ## Abstract Recommending a diversity of product types (PTs) is important for a good shopping experience when customers are looking for products around their high-level shopping interests (SIs) such as hiking. However, the SI-PT connection is typically absent in e-commerce product catalogs and expensive to construct manually due to the volume of potential SIs, which prevents us from establishing a recommender with easily accessible knowledge systems. To establish such connections, we propose to extract PTs from the Web pages containing handcrafted PT recommendations for SIs. The extraction task is formulated as binary HTML node classification given the general observation that an HTML node in our target Web pages can present one and only one PT phrase. Accordingly, we introduce TRENC, which stands for Tree-Transformer Encoders for Node Classification. It improves the internode dependency modeling with modified attention mechanisms that preserve the longterm sibling and ancestor-descendant relations. TRENC also injects SI into node features for better semantic representation. Trained on pages regarding limited SIs, TRENC is ready to be applied to other unobserved interests. Experiments on our manually constructed dataset, WEBPT, show that TRENC outperforms the best baseline model by 2.37 F1 points in the zero-shot setup. The performance indicates the feasibility of constructing SI-PT relations and using them to power downstream applications such as search and recommendation. ## 1 Introduction Customers of e-commerce websites fall in various stages of the purchase funnel1in their journey to purchase specific products. While lower-funnel customers target specific products or product categories, a customer in the middle to upper funnel only has vague shopping interests (SIs) and ![0_image_0.png](0_image_0.png) requires additional guidance to determine the right products to purchase. Existing e-commerce websites are limited today in their ability to assist them in this kind of interest-oriented shopping. For example, a customer searching for *COVID-19 crisis* gets top results showing product types (PTs) such as books and test kits, while missing other essential categories such as the face mask, thermometer, or medicine. Moreover, the search result is a random assortment of products, without a clear organization that helps upper-funnel customers discover products within relevant categories. The main problem is the concept of "shopping interest" is generally absent in e-commerce catalogs, which makes it difficult to directly establish the SI-PT connections and give corresponding recommendations. To circumvent such system limitations, customers today are accustomed to researching their products on hand-curated "hub Web pages"2, each related to an SI and presenting PT ∗All work performed while interning at Amazon. 1https://en.wikipedia.org/wiki/Purchase_funnel suggestions as organized lists, before returning to e-commerce websites. This stretches the total time spent on a purchase. We aim to find SI-related PTs directly on the e-commerce website, reducing customer effort for all their interest-oriented needs. Figure 1 shows the desired search experience. 
The first step to this end is collecting hub pages, which is realized by querying Google Search with automatically selected prompts (appendix A). The rest of the paper focuses on PT extraction from the HTML pages, which presents several challenges. First, hub websites are heterogeneous in their format and terminology, with PTs often interspersed among long descriptive paragraphs, making it challenging for any solution designed for one or a few websites to work well for others. Second, our page collection approach assumes that all PTs presented on a page are related to the same SI, which may not hold true in practice, requiring us to filter out irrelevant PTs. Finally, our goal to find PTs for a wide range of SIs motivates us to consider a zeroshot learning setup (Xian et al., 2019) *w.r.t.* SIs, to generalize to interests not seen during training. Representing an HTML document by a Document Object Model (DOM) tree whose nodes are HTML tags with text sequences, we formulate PT extraction as a node classification task that entails checking whether its text sequence represents a PT phrase. It is based on the empirical discovery that in our collected hub pages, a PT phrase generally occupies a single DOM node within a coherent group of enumerated HTML elements such as section titles or bullet points, where knowing one PT phrase suggests the potential presence of other PT phrases in the neighboring elements (Figure 3a). Node classification emphasizes learning inter-node structural dependencies rather than intra-node token interactions, which results in better generalization to a wide variety of HTML structures. Due to the absence of a dedicated DOM tree encoding method, we propose TRENC (Tree-Transformer Encoders for Node Classification) to fill in the blanks. Adapted from the Transformer (Vaswani et al., 2017), TRENC incorporates ancestor-descendant and sibling node relations using modified self-attention mechanisms and positional embeddings that are suited to the unique DOM node arrangement of the rendered hub pages. The ancestor-descendant relation provides relative structural information between nodes in the DOM node hierarchy, whereas the sibling relation tracks the semantical connection among sibling nodes. The modified attention mechanisms reconstruct the tree architecture from the linearized input nodes and facilitate long-term dependency modeling. To capture the relevance between an SI and a node, we leverage a gating network to dynamically integrate SI semantics with a node's textual semantics, which generalizes TRENC to unseen SIs. Evaluated on our dataset WEBPT with 453 Web pages covering 95 interests, TRENC achieves 2.37 absolute F1 performance gain over the strongest baseline method. Our contributions include - a novel and practical research topic of product type extraction from the Web pages associated with a given shopping interest; - TRENC, a Transformer encoder-based model with structural attention mechanisms for recovering the DOM tree architecture from the node sequence to promote classification; - a dataset WEBPT, and comprehensive evaluations of graph encoding techniques to verify the effectiveness of our model design. The dataset is made publicly accessible at https://github.com/Yinghao-Li/WebIE to promote future research. ## 2 Related Works Web Information Extraction Information extraction from the semi-structured Web data is a long-studied topic (Chang et al., 2006; Banko et al., 2007; Sleiman and Corchuelo, 2013). 
The works most relevant to ours are those on product attribute extraction (Zheng et al., 2018; Xu et al., 2019; Lockard et al., 2020; Zhou et al., 2021; Wang et al., 2022; Deng et al., 2022). For example, Zheng et al. (2018) train a BiLSTM-CRF network (Huang et al., 2015) for each attribute to locate its corresponding values on text sequences. Xu et al. (2019) scale it up by injecting the attribute name into the network as an attention objective. Wang et al. (2022) encode the DOM tree with graph attention network (Velickovi ˇ c et al. ´ , 2018) to incorporate the dependencies between nodes. However, attribute extraction is different from our PT extraction task at two major points. First, attributes are typically extracted from product detail pages, each of which mentions multiple attributes; and the attribute name-value pairs cluster around titles, bullet points and descriptions. In contrast, a hub page generally focuses on a single SI, with PTs scattered throughout the page. Unlike attribute extraction approaches that limit the searching scope to certain regions, the characteristics of hub pages require us to consider a page holistically instead of a small part. Second, attribute extraction is performed as token-level entity recognition in previous works, while PT extraction requires a node-level classification, which prevents approaches for the former from being directly applied to the latter. To our best knowledge, no applicable DOM node classification or similar dataset exists in openly available benchmarks such as OGB (Hu et al., 2020). Graph Transformers Recently, graph neural networks (GNNs) such as the graph convolutional network (GCN, Kipf and Welling, 2017) and graph attention network (GAT, Velickovi ˇ c et al. ´ , 2018; Brody et al., 2022) have dominated the graph encoding research. But some works try to model graphs using Transformers (Dwivedi and Bresson, 2020; Maziarka et al., 2020; Ying et al., 2021; Park et al., 2022; Wu et al., 2022), to which our work is more related. For example, Maziarka et al. (2020) add inter-atomic distances into the self-attention heads to parameterize the molecular graph structure. Also targeting molecules, Graphormer (Ying et al., 2021) takes a step further and introduces centrality encoding, edge encoding and spacial encoding to evaluate the atom importance and capture the edge and graph structure. Park et al. (2022) and Wu et al. (2022) extend Transformers to knowledge graphs with partial message-passing tricks. Although applicable, the hierarchical and acyclic nature of DOM trees is different from the graphs for which the approaches were designed. Directly applying them to DOM trees leads to sub-optimal performance, as shown in § 5. fi ## 3 Problem Setup We possess the DOM tree of a Web page associated with a given shopping interest C. The DOM tree can be represented by a set of nodes V = {V1, V2, . . . , V|V|} as well as a set of edges E = {E1, E2, . . . , E|E|} that connect the parent and children nodes. |V| and |E| are the sizes of node and edge sets respectively. We aim to design a binary node classifier f : V∪E∪{C} 7→ {0, 1}|V| to judge whether the text sequence in each node is a phrase representing a product type. The nodes with positive labels are referred to as "PT nodes" and the labels are denoted by ym = 1, m ∈ 1 : |V|. We focus our discussion on one DOM tree and use m ∈ 1 : |V| as its node index. ![2_image_0.png](2_image_0.png) ## 4 Method We propose TRENC to model the DOM tree of hub Web pages for PT extraction. 
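Before the architecture details, a minimal sketch of how the DOM tree and classification target of § 3 could be represented in code; the class and function names below are illustrative assumptions, not part of the authors' released implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class DomNode:
    tag: str                       # HTML tag t_m, e.g. "p" or "li"
    text: str                      # text sequence S_m attached to the node
    children: List["DomNode"] = field(default_factory=list)
    label: int = 0                 # y_m = 1 if the text is a PT phrase

def linearize(root: DomNode) -> Tuple[List[DomNode], List[Tuple[int, int]]]:
    """Depth-first traversal producing the node set V (in global positional
    order i^G) and the parent-child edge set E."""
    nodes: List[DomNode] = []
    edges: List[Tuple[int, int]] = []

    def visit(node: DomNode, parent_idx: Optional[int]) -> None:
        idx = len(nodes)
        nodes.append(node)
        if parent_idx is not None:
            edges.append((parent_idx, idx))
        for child in node.children:
            visit(child, idx)

    visit(root, None)
    return nodes, edges
```

In this view, a binary node classifier is simply any function mapping the linearized nodes, edges, and the SI to one 0/1 label per node.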
Figure 2 shows the model architecture. We treat the problem as a DOM node classification task that entails detecting whether its textual sequence defines a PT phrase. We first create a node representation that integrates three basic signals of a node that may be indicative of a PT (§ 4.1). We then adapt the Transformer architecture by adding two attention mechanisms, namely path attention and sibling attention, that allow capturing of inter-node dependencies presented by their HTML structure (§ 4.2.1). We also include three kinds of positional encodings that assist the attention layers with the node's unique positional information within the DOM tree (§ 4.2.2). Finally, we integrate the outputs from the path and sibling attention layers, which are used in a classification layer to predict node labels (§ 4.2.3). The implementation details are in appendix B.1. ## 4.1 Node Features Besides the SI C associated with the tree, we consider two features for each node Vm: 1) its HTML tag tm ∈ T where T is a finite tag set; and 2) the text sequence Sm = {wm,1, wm,2, . . . , wm,|Sm|}, where |Sm| is the length and w is the token. HTML Tag HTML tags are a finite vocabulary of keywords that define how browsers display their content. Specifically, they convey the semantic nature of their enclosed content. For example, <p> denotes a paragraph, while <ul> represents a list. Based on the observation that some tags tend to contain PT phrases more than others, we capture the tag information as a distinct structural feature and encode tm with a vector tm ∈ R dmodel using an embedding layer. Here, dmodel is the model dimensionality as in Transformers. Text Sequence Text sequences convey the semantic character of an HTML document. In addition to directly indicating a PT phrase, they can also serve as useful contextual information about the neighboring nodes' propensity to contain a PT phrase. For example, a node stating "Essentials for camping" is a clear indicator that what follows is likely a set of camping-related PT nodes. We leverage the power of pre-trained language models (PLMs) such as BERT (Devlin et al., 2019) to properly encode their semantics. For a given sequence, BERT generates an embedding wm,i ∈ R dBERT for each token wm,i, i ∈ 1 : |Sm|, besides two special tokens wm,0 and wm,|Sm|+1 representing the start and end of the sequence. We derive the sequence embedding sm ∈ R dmodel by taking an average of all the token embeddings and passing it through a feed-forward network (FFN) layer: $$\mathbf{s}_{m}=\mathbf{W}^{\mathrm{seq}}(\mathrm{GELU}(\frac{1}{|\mathbf{S}_{m}|+2}\sum_{i=0}^{|\mathbf{S}_{m}|+1}\mathbf{w}_{m,i})),\tag{1}$$ where $W^{\rm seq}\in\mathbb{R}^{d_{\rm model}\times d_{\rm BERT}}$ are parameters. Shopping Interest Although we assume that a DOM tree is associated with only one SI C, in rare cases this assumption does not hold. We are thereby motivated to capture the relevance between a node and the interest. Accordingly, we incorporate C with an embedding vector c ∈ R dmodel in a similar manner as that for the node text sequence (1), and let the model learn the relevance between C and related PTs to rule out any false positive cases. Feature Integration We integrate node features into the node embedding em ∈ R dmodel in two steps to honor the distinctiveness between the structural feature tm and the semantic features sm and c.
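Before the integration steps below, a minimal sketch of the per-node inputs just defined — the tag lookup and the sequence embedding of Eq. (1) — assuming a frozen Hugging Face BERT-base encoder; the dimensions follow the paper, but the variable names (and the tag vocabulary size) are illustrative.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

d_bert, d_model = 768, 128
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()   # kept frozen

w_seq = nn.Linear(d_bert, d_model)            # W^seq in Eq. (1)
tag_embedding = nn.Embedding(128, d_model)    # t_m lookup (tag vocab size is an assumption)

@torch.no_grad()
def bert_token_embeddings(text: str) -> torch.Tensor:
    """Token embeddings w_{m,i}, including the start/end special tokens."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    return bert(**enc).last_hidden_state.squeeze(0)   # (|S_m| + 2, d_bert)

def sequence_embedding(text: str) -> torch.Tensor:
    """s_m = W^seq(GELU(average of token embeddings)), as in Eq. (1)."""
    tokens = bert_token_embeddings(text)
    return w_seq(torch.nn.functional.gelu(tokens.mean(dim=0)))  # (d_model,)
```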
First, we merge the semantic features. Since different nodes have differing levels of correlation with the interest, we use gating vectors (Hochreiter and Schmidhuber, 1997) to automatically control how much of the interest embedding c should be integrated into the sequence embedding sm. We calculate the weights g as: $$g(\mathbf{x}_{1},\mathbf{x}_{2})=\sigma(\mathbf{W}_{1}\mathbf{x}_{1}+\mathbf{W}_{2}\mathbf{x}_{2}+\mathbf{b}),\tag{2}$$ where x1 and x2 are feature vectors; W1 and W2 are trainable square matrices; b is the bias, and σ is the sigmoid function. With (2), the updated sequence embedding vector becomes $${\mathbf{s}}_{m}^{\prime}={\mathbf{g}}({\mathbf{c}},{\mathbf{s}}_{m})\odot{\mathbf{c}}+{\mathbf{s}}_{m},$$ where ⊙ is the element-wise product. Then, we integrate the semantic and structural embeddings using concatenation followed by an FFN layer to maintain the embedding dimensionality. The integrated node embedding em is $$e_{m}=W^{\mathrm{emb}}[{\boldsymbol{s_{m}^{\prime}}}^{\mathsf{T}};{\boldsymbol{t_{m}^{\mathsf{T}}}}]^{\mathsf{T}},$$ where [·; ·] represents vector concatenation and Wemb ∈ R dmodel×2dmodel is an FFN layer. ## 4.2 TRENC Architecture Compared with conventional GNNs that generally aggregate only 1-hop neighboring messages in each layer, Transformers are better at tracking long-term dependencies. However, applying the Transformer encoder to DOM trees as is can lead us astray because it is not designed to naturally accommodate the hierarchical structure of a tree. To address this limitation, we adapt the Transformer architecture by adding *structural attention* mechanisms with node positional encodings to better encode the unique information within DOM trees while keeping the existing abilities of the Transformer architecture. ## 4.2.1 Structural Attentions The DOM tree structure presents two kinds of relations that convey how nodes are related. The ancestor-descendant relation, represented by the edges E, conveys the granular nature of a node (high or low) within the DOM hierarchy. The sibling relation between nodes conveys how they semantically represent a coherent group, as shown in Figure 3a. We incorporate these relationships via structural attention mechanisms, namely path attention and sibling attention. Correspondingly, we represent these two views of the DOM tree by two types of node sets: *path node sets* and *sibling node sets*. A path set N P ⊂ V is the ordered collection of all nodes in an HTML path, from the root node to an arbitrary node, as illustrated in Figure 3b. A sibling set N S ⊂ V consists of the immediate children of a non-leaf node. Thereupon, we develop *path* and *sibling attention* mechanisms, as described below, to explore the potential of modeling tree structures with Transformers. Path Attention The path attention mechanism captures the granularity of a node Vm within the DOM tree, which carries useful information about the node's tendency to present a PT phrase. It limits the attention target of a DOM node to its ancestors or descendants only, echoing the edges E that define the DOM tree structure. Path node sets help define an attention mask toward this purpose by leaving out all "off-path" elements during the self-attention message-passing operation.
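Before the attention formulas, a minimal sketch of the feature integration above — the gate of Eq. (2), the update of s_m, and the concatenation that yields e_m; the module and variable names are ours rather than the released code.

```python
import torch
import torch.nn as nn

class GatedNodeEmbedding(nn.Module):
    """Builds e_m from the SI embedding c, sequence embedding s_m and tag embedding t_m."""

    def __init__(self, d_model: int = 128):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_model)              # W1 (its bias plays the role of b)
        self.w2 = nn.Linear(d_model, d_model, bias=False)  # W2
        self.w_emb = nn.Linear(2 * d_model, d_model)       # W^emb

    def gate(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # g(x1, x2) = sigmoid(W1 x1 + W2 x2 + b), Eq. (2)
        return torch.sigmoid(self.w1(x1) + self.w2(x2))

    def forward(self, c: torch.Tensor, s: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        s_prime = self.gate(c, s) * c + s                   # s'_m = g(c, s_m) ⊙ c + s_m
        return self.w_emb(torch.cat([s_prime, t], dim=-1))  # e_m = W^emb [s'_m ; t_m]
```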
Suppose the input is HP ∈ R|V|×dmodel; in each attention head, the path attention scores a Pm ∈ (0, 1)1×|V| of Vm attending to all DOM nodes are $$\mathbf{a}_{m}^{\mathrm{P}}=\mathrm{SoftMax}(\frac{\mathbf{H}_{m}^{\mathrm{P}}\mathbf{W}^{\mathrm{Q}}(\mathbf{H}^{\mathrm{P}}\mathbf{W}^{\mathrm{K}})^{\mathrm{T}}}{\sqrt{d_{k}}}+\mathbf{M}_{m}^{\mathrm{P}}).\tag{3}$$ Here W ∈ R dmodel×dk are the FFN layers that map the latent features to the reduced dk-dimensional single-head attention space, as in (Vaswani et al., 2017). MP ∈ {0, −∞}|V|×|V| is the path attention mask as shown in Figure 3b. ∀u, v ∈ 1 : |V|, $$M_{u,v}^{\mathrm{P}}=\begin{cases}0,&\exists{\mathcal{N}}^{\mathrm{P}}\;s.t.\;V_{u}\in{\mathcal{N}}^{\mathrm{P}},\,V_{v}\in{\mathcal{N}}^{\mathrm{P}};\\ -\infty,&\mathrm{otherwise}.\end{cases}\tag{4}$$ a Pm has non-zero values at positions corresponding to Vm's ancestors or descendants. The single-head attention output of Vm becomes $$\mathrm{Attn}_{m}^{\mathrm{P}}=\mathbf{a}_{m}^{\mathrm{P}}\mathbf{H}^{\mathrm{P}}\mathbf{W}^{\mathrm{V}}.\tag{5}$$ The rest of the architecture, such as the layer norm and the residual connection, is the same as in the Transformer and thus is omitted. Sibling Attention Although sibling relations are not described by the edges E, encoding them can provide a useful contextual signal based on the observation that sibling PT phrases often form a group. Accordingly, analogous to path attention, we develop sibling attention by imposing an attention mask MS, which forces a node to focus only on its siblings via self-attention. The sibling node set N S helps define the mask. Its calculation is identical to (3)–(5), except that the variables are superscripted by sibling "· S" instead of path "· P". ## 4.2.2 Node Positional Encodings Different from graphs, a DOM tree is acyclic and heterogeneous; the order of nodes influences their relations and how the elements are rendered. As Transformers do not encode such node order, positional embeddings are critical to capture such positioning (Yun et al., 2020). We consider three types of absolute indices: global, *level* and *sibling* positional indices, as shown in Figure 3a. The global positional index i Gm represents the position of each node in the tree in the depth-first order. It helps TRENC understand how the nodes are organized in the rendered HTML pages. The level index i Lm and sibling index i Sm, on the other hand, are developed to assist the path and sibling attentions. i Lm describes the level or depth of a node, to help distinguish a parent from its children during the path attention, while i Sm captures the relative order among siblings within the sibling attention. We encode positional indices by first applying sinusoid functions (Vaswani et al., 2017) to convert them to vectors i Gm, i Lm, i Sm ∈ [0, 1]dmodel, followed by applying an affine transformation that maps each of them into distinct latent spaces: $$\hat{\mathbf{i}}_{m}^{\rm G}=\mathbf{W}^{\rm G}\mathbf{i}_{m}^{\rm G};\quad\hat{\mathbf{i}}_{m}^{\rm L}=\mathbf{W}^{\rm L}\mathbf{i}_{m}^{\rm L};\quad\hat{\mathbf{i}}_{m}^{\rm S}=\mathbf{W}^{\rm S}\mathbf{i}_{m}^{\rm S},$$ where $\mathbf{W}\in\mathbb{R}^{d_{\rm model}\times d_{\rm model}}$ are FFN parameters. In each layer, the path and sibling signals are modeled by two parallel branches, which are identical except for the positional embeddings and attention mechanisms (Figure 2).
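As a minimal sketch, the path mask of Eq. (4) and its sibling analogue can be materialized from the parent-child edges roughly as follows; the indices follow the linearization sketched earlier, and keeping self-attention on every row is our assumption to avoid rows that are entirely −∞, which would break the softmax.

```python
import torch
from typing import Dict, List, Tuple

def build_structural_masks(num_nodes: int, edges: List[Tuple[int, int]]):
    """Returns (path_mask, sibling_mask), each of shape (|V|, |V|) with entries
    0 where attention is allowed and -inf otherwise, as in Eq. (4)."""
    parent: Dict[int, int] = {child: par for par, child in edges}
    children: Dict[int, List[int]] = {}
    for par, child in edges:
        children.setdefault(par, []).append(child)

    neg_inf = float("-inf")
    path_mask = torch.full((num_nodes, num_nodes), neg_inf)
    sibling_mask = torch.full((num_nodes, num_nodes), neg_inf)

    for m in range(num_nodes):
        path_mask[m, m] = 0.0          # every node may attend to itself
        sibling_mask[m, m] = 0.0
        anc = m
        while anc in parent:           # walk up to the root: ancestor-descendant pairs
            anc = parent[anc]
            path_mask[m, anc] = 0.0
            path_mask[anc, m] = 0.0
    for sibs in children.values():     # immediate children of the same parent
        for u in sibs:
            for v in sibs:
                sibling_mask[u, v] = 0.0
    return path_mask, sibling_mask
```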
Denoting the input feature of layer l by H(l) ∈ R|V|×dmodel, we have3 $$\mathbf{H}_{m}^{\mathrm{P}}=\mathbf{H}_{m}^{(l)}+{\hat{\mathbf{i}}}_{m}^{\mathrm{L}};\quad\mathbf{H}_{m}^{\mathrm{S}}=\mathbf{H}_{m}^{(l)}+{\hat{\mathbf{i}}}_{m}^{\mathrm{S}},\tag{6}$$ which are passed into the attention sublayers (3)–(5) for message passing.4 The branch outputs Ĥ P and Ĥ S are aggregated by a gating layer that generates the layer output Ĥ (l): $$\hat{H}_{m}^{(l)}=g(\hat{H}_{m}^{\mathrm{P}},\hat{H}_{m}^{\mathrm{S}})\odot\hat{H}_{m}^{\mathrm{P}}+(1-g(\hat{H}_{m}^{\mathrm{P}},\hat{H}_{m}^{\mathrm{S}}))\odot\hat{H}_{m}^{\mathrm{S}}.\tag{7}$$ The input of the first layer is the summation of the node embedding and global positional embedding, H(1)m = em + îGm, while the last output Ĥ (N) is fed into a classification layer to predict node labels, assuming the model has N layers in total.

3 Other positional encoding approaches such as (Chen et al., 2021) show similar performance. 4 We omit the layer indicator ·(l) when possible for simplicity.

## 4.3 Training And Inference We use binary cross-entropy as our training objective. Suppose the predicted *logit* is yˆ; then the loss at the level of a DOM tree is calculated as $$\ell=-\sum_{m=1}^{|\mathcal{V}|}\Big(y_{m}\log\sigma(\hat{y}_{m})+(1-y_{m})\log\big(1-\sigma(\hat{y}_{m})\big)\Big).$$ During inference, we use 0.5 as a hard classification threshold for the predicted probability σ(ˆy). ## 5 Evaluation In this section, we first describe a new dataset of interests and their associated webpages, specifically created to benchmark methods for the PT extraction problem. We then evaluate TRENC on the same, pitting it against a range of applicable baselines. Finally, we look at the effectiveness of various model components via ablation studies. ## 5.1 Experiments Dataset We constructed a dataset containing 95 shopping interests and queried Google for hub pages using automatically selected prompts such as "[hiking] equipment list", where "hiking" is the SI. For each SI, we downloaded the top 100 returned pages and labeled them with PT nodes using a semi-automatic process. First, we applied simple heuristic rules to create noisy PT labels, based on structure and tag matching. Thereafter, for each SI, we presented roughly 5 webpages having a noisy label to a human annotator to further refine the labels. Even so, the dataset is not entirely noise-free given the subjective nature of the labeling process, with many ambiguous cases, such as deciding whether software such as "VSCode" makes a valid product type. The pages without any positive human label were discarded. This process ultimately resulted in a collection of 453 HTML webpages having 94,167 nodes, among which 12,548 nodes are positive. Further details are described in appendix A. Setup We focus on a zero-shot setup *w.r.t.* SIs since our goal is to evaluate various methods on SIs not seen during training. Therefore, we split the collection of webpages *stratified by their associated SIs* (recall that a webpage is assumed to be associated with only one SI) into training (75%), validation (10%) and test (15%) partitions, ensuring that no SI is shared across partitions. As our dataset is small, we randomly split the collection 5 times and generated 5 distinct datasets, each with the three partitions. This approach aims to mitigate the impact of random factors while measuring real model performance.
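A minimal sketch of this SI-level zero-shot splitting follows; it partitions interests rather than pages (which only approximates the 75/10/15 page ratios, since each SI covers a handful of pages) and is our paraphrase of the procedure rather than the released script.

```python
import random
from collections import defaultdict
from typing import Dict, List, Tuple

def split_by_interest(pages: List[Tuple[str, str]], seed: int = 0,
                      ratios: Tuple[float, float, float] = (0.75, 0.10, 0.15)) -> Dict[str, List[str]]:
    """`pages` is a list of (page_id, shopping_interest); pages sharing an SI
    always land in the same partition, so test SIs are never seen in training."""
    by_si = defaultdict(list)
    for page_id, si in pages:
        by_si[si].append(page_id)

    interests = sorted(by_si)
    random.Random(seed).shuffle(interests)
    n_train = int(ratios[0] * len(interests))
    n_valid = int(ratios[1] * len(interests))
    si_splits = {
        "train": interests[:n_train],
        "valid": interests[n_train:n_train + n_valid],
        "test": interests[n_train + n_valid:],
    }
    return {name: [p for si in sis for p in by_si[si]] for name, sis in si_splits.items()}
```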
We identify the datasets as WEBPT-n, where n ∈ 1 : 5 is the split index. Baselines We consider the following simple to complex methods. 1) **Heuristic rules** are heuristic functions we manually designed to locate PT nodes from the DOM trees, which were also used to generate the initial, noisy node labels. 2) **Text similarity** decides whether a node is positive based on the cosine similarity between text and SI embeddings. 3) **Fine-tuned BERT** (BERT-FT) fine-tunes a BERT-base model to independently classify each tree node based on its text. 4) **Multilayer perceptron** (MLP) also classifies each node independently, but with fixed BERT text embeddings followed by a set of FFN layers. 5) **Graph neural networks** (GNNs) propagate node semantics throughout the graph by aggregating neighboring node embeddings. The GNN family has many variants; we focus on GCN and GAT. 6) **Graphormer** (Ying et al., 2021), designed for molecular graphs, adds special encodings and attention masks to the Transformer model. Please see appendix B.2 for implementation details. Metrics We evaluate each model with the F1 scores corresponding to each split WEBPT-n and the macro-averaged F1 score $\bar{F}_1 = \frac{1}{5}\sum_{n=1}^{5} F_1^{n}$ with the corresponding macro precision and recall. All trainable methods are equipped with early stopping techniques based on validation F1 scores. To further reduce the influence of random factors without increasing training pressure, we store 5 snapshots of the models that perform the best on the validation dataset during training. During the test, we predict 5 sets of labels from the model snapshots and use the majority-voted labels as the final model predictions. It can be regarded as a simplified model ensemble method often used to improve model robustness (Dong et al., 2020). ## 5.2 Main Results Table 1 shows the results of our comparative evaluation.

| Group | Model | WEBPT-1 | WEBPT-2 | WEBPT-3 | WEBPT-4 | WEBPT-5 | F¯1 (precision / recall) |
|-------|-------|---------|---------|---------|---------|---------|--------------------------|
| Heuristic methods | Similarity | 40.12 | 39.14 | 35.84 | 36.55 | 33.80 | 37.09 (28.52 / 52.44) |
| | Rules | 56.53 | 62.44 | 56.90 | 59.68 | 58.28 | 58.77 (44.20 / 88.02) |
| Supervised methods | MLP | 66.65 | 66.28 | 66.31 | 74.71 | 61.90 | 67.17 (72.11 / 63.38) |
| | BERT-FT | 72.50 | 71.63 | 73.03 | 77.87 | 65.69 | 72.14 (68.32 / 76.65) |
| | Graphormer | 71.09 | 81.76 | 75.73 | 66.81 | 69.67 | 73.01 (76.61 / 70.89) |
| | GAT | 71.31 | 85.45 | 74.83 | 78.40 | 67.84 | 75.57 (77.07 / 74.28) |
| | GCN | 76.13 | 84.07 | 79.16 | 81.50 | 71.92 | 78.56 (84.44 / 73.57) |
| | TRENC | 79.65 | 88.26 | 78.99 | 82.40 | 75.35 | 80.93 (84.06 / 77.81) |

Table 1: Test F1 scores of the compared methods on the five WEBPT splits, together with the macro-averaged F¯1 and the corresponding precision and recall.

As seen, TRENC outperforms all methods, exceeding the strongest baseline, GCN, by a margin of 2.37 absolute F1 on average. Considering the small size of our datasets, it is not surprising that the test F1 scores show relatively large variation across different data splits, as the correlation of the data distributions of the training and test sets is susceptible to random factors. Nonetheless, TRENC achieves the best performance on 4 out of 5 splits and exceeds the baselines by a good margin on average, which strengthens confidence in the evaluation. Surprisingly, Graphormer underperforms GNN models and barely outperforms BERT-FT, a model that treats nodes independently without considering the tree structures.
It indicates that models designed for other graphs such as molecular graphs are not directly applicable to our case. Instead of helping, the features Graphormer emphasizes prevent the model from learning a reasonable representation of the DOM tree. Table 1 also shows that the cosine similarity between SI and PT embeddings does not perform well. This is not unexpected, as SIs and PTs are not usually semantically similar, making it a sub-optimal choice to directly compare their embeddings. We also compare TRENC with GCN at varying levels of DOM tree complexity. Figure 4a shows tree-level F1 scores of each DOM tree against its depth, which is the average depth of its nodes $\frac{1}{|\mathcal{V}|}\sum_{m=1}^{|\mathcal{V}|} i_{m}^{\mathrm{L}}$ and roughly echoes the tree complexity. Figure 4b divides the depth equally into 5 levels and presents the average F1 for each level. As seen, TRENC has better overall performance than GCN at all depths. In addition, the gap between TRENC and GCN increases when the tree is deeper, which indicates that TRENC can better encode complex trees due to the global message-passing ability of the self-attention mechanism. ## 5.3 Ablation Studies We ablate input features and model components from TRENC to understand their effectiveness. Table 2 shows the ablation results.

| Category | Model | Average F1 | Gap |
|----------|-------|------------|-----|
| | TRENC | 80.93 | - |
| Input features | w/o SI | 79.83 | 1.10 ↓ |
| | w/o tag | 78.82 | 2.11 ↓ |
| | w/o text | 57.98 | 22.95 ↓ |
| Model components | w/o gating | 80.39 | 0.54 ↓ |
| | Transformer | 78.44 | 2.49 ↓ |
| | w/o pos emb | 79.60 | 1.33 ↓ |
| | w/o pth attn | 78.98 | 1.95 ↓ |
| | w/o sbl attn | 78.04 | 2.89 ↓ |
| Sequence encoding | BERT-large | 79.36 | 1.57 ↓ |
| | RoBERTa | 78.54 | 2.39 ↓ |
| | Sentence-BERT | 74.73 | 6.20 ↓ |

Table 2: Ablation results: average F1 over the five splits and the gap to the full TRENC model.

| | SI | Node text sequence |
|---|----|--------------------|
| FP | at-home-spa | Esthetics or Skin Care |
| FP | hiking | Merrell Overlook Tall 2 WP Boot |
| FP | running | Credit card |
| FN | fishing | Rods for River Fishing |
| FN | canoeing | Water bottle - 1 litre is good |

Table 3: Examples of common mistakes made by TRENC. FP/FN indicates false positives/negatives.

Input Features As seen, although removing any input feature (§ 4.1) impairs the model performance, text sequence is the most critical feature for TRENC. We further notice that without text sequence, TRENC performs quite close to the heuristic rules that utilize very limited lexical features (Table 1). This may indicate that TRENC exhausts the structural information available in a DOM tree. Although not as significant as text sequences, incorporating SIs and tags does enhance the model performance. Injecting SIs turns the model's attention to their correlation with PTs. But such improvement is limited as the correlation is not strong, as discussed in § 5.2. Model Components We investigate the functionalities of model components by removing them separately. The Transformer model discards edges E and treats the tree as a linearized sequence of nodes arranged by their global positional indices i G.
Although it learns certain structural dependencies, as indicated by its advantage over MLP (Table 1), missing explicit edge knowledge still affects the model's judgment. The results also show that path attention, sibling attention and positional encodings all contribute to better tree encoding. The row "w/o pos emb" removes the level and sibling encodings i L, i S but keeps the global encoding i G. Without i L and i S, the model cannot properly identify the hierarchy and sibling order between nodes and therefore performs worse. Compared to path attention, sibling attention demonstrates a higher importance in context understanding, even though removing path attention means a node no longer has access to any other nodes from the tree. Sequence Encoding In our implementation, we use the uncased BERT-base model with d BERT = 768 as our encoder for sequence embeddings e and concept embeddings c. The embeddings are fixed during the training process. We also test other pre-trained language models, including BERT-large, RoBERTa (Liu et al., 2019) and Sentence-BERT (Reimers and Gurevych, 2019), which is designed for comparing sequence similarities and claims better sentence-embedding performance than BERT. However, Table 2 shows that none outperforms BERT-base (Devlin et al., 2019). The reason might be the incompatibility of their training corpora and objectives with our task. The results indicate that choosing an encoding model is vital for good performance. ## 5.4 Case Studies On Classification Mistakes Table 3 shows a few false positive (FP) and false negative (FN) examples to illustrate certain text sequence patterns where TRENC fails. As seen from the FP cases, TRENC either struggles to determine whether a broad PT category is valid (1st row), has difficulty discerning a PT from a specific product (2nd row), or makes mistakes when unavoidable non-purchasable items are mentioned on the page along with other valid PTs (3rd row). From the FN cases, we conjecture that long descriptions may overwhelm the textual semantics and distort the node embedding, thereby preventing TRENC from predicting correctly (4th & 5th rows). The reason might be that TRENC has a stronger dependency on the node semantics than on the structure, which is also indicated by the ablation results, and properly balancing the conditional terms may mitigate this issue. ## 6 Conclusion In this paper, we consider a new problem of extracting product types from the Web pages that are relevant to broad shopping interests such as camping. We model the problem as a node classification task and propose TRENC, a Transformer encoder-based model that leverages unique characteristics of DOM trees to perform product type extraction. In addition to the node-level signals including HTML tags, text sequences and shopping interest semantics, TRENC designs path and sibling attention mechanisms based on the DOM tree's ancestor-descendant and sibling relations. Together with the tree-based positional embeddings, the structural attention mechanisms promote understanding of the tree architecture and make the classification more effective. Zero-shot experiments on a new dataset, WEBPT, containing 95 shopping interests and 453 pages show that TRENC outperforms the baseline graph encoding models. This work pushes the frontier of research toward a more organized and intuitive result recommendation for middle-funnel customers.
## Limitations Apart from the issues mentioned in § 5.4, another limitation of TRENC is that it does not integrate any pre-training process such as BERT, which is effective in increasing the language understanding ability and adopted by previous works focusing on token-level classification tasks (Wang et al., 2022; Deng et al., 2022). Two factors lead to this decision. First, we use DOM nodes instead of tokens as the classification object and focus on relations between nodes rather than tokens. As the node text sequence is a composition of an arbitrary number of tokens, adopting the conventional masked language modeling (MLM) training objective (Devlin et al., 2019) seems impractical since there is no direct mapping from an embedding vector, one-hot encoded or not, to a sentence. The second reason is simply that we do not possess the corpus or computation resources for model pre-training. In fact, we expect a properly designed pre-training scheme to bring better node semantics representation and SI-PT relation modeling. It is an interesting topic and deserves further study. ## Acknowledgments This work was supported in part by Amazon.com Services LLC, NSF IIS-2008334, IIS-2106961, and ## Career Iis-2144338. We would like to thank Xian Li, Binxuan Huang, Chenwei Zhang, Yan Liang, and Jingbo Shang for their insightful advice on this work. ## References Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In *Proceedings of the 20th International Joint Conference* on Artifical Intelligence, IJCAI'07, pages 2670– 2676, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Shaked Brody, Uri Alon, and Eran Yahav. 2022. How attentive are graph attention networks? In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Chia-Hui Chang, M. Kayed, M.R. Girgis, and K.F. Shaalan. 2006. A survey of web information extraction systems. IEEE Transactions on Knowledge and Data Engineering, 18(10):1411–1428. Pu-Chin Chen, Henry Tsai, Srinadh Bhojanapalli, Hyung Won Chung, Yin-Wen Chang, and ChunSung Ferng. 2021. A simple and effective positional encoding for transformers. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 2974–2988, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Xiang Deng, Prashant Shiralkar, Colin Lockard, Binxuan Huang, and Huan Sun. 2022. DOM-LM: learning generalizable representations for HTML documents. *CoRR*, abs/2201.10608. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Xibin Dong, Zhiwen Yu, Wenming Cao, Yifan Shi, and Qianli Ma. 2020. A survey on ensemble learning. Frontiers Comput. Sci., 14(2):241–258. Vijay Prakash Dwivedi and Xavier Bresson. 2020. A generalization of transformer networks to graphs. CoRR, abs/2012.09699. Matthias Fey and Jan E. Lenssen. 2019. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds. Sepp Hochreiter and Jürgen Schmidhuber. 1997. 
Long Short-Term Memory. *Neural Computation*, 9(8):1735–1780. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020. Open graph benchmark: Datasets for machine learning on graphs. In *Advances in Neural Information Processing Systems* 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 612, 2020, virtual. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Colin Lockard, Prashant Shiralkar, Xin Luna Dong, and Hannaneh Hajishirzi. 2020. ZeroShotCeres: Zero-shot relation extraction from semi-structured webpages. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8105–8117, Online. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Lukasz Maziarka, Tomasz Danel, Slawomir Mucha, Krzysztof Rataj, Jacek Tabor, and Stanislaw Jastrzebski. 2020. Molecule attention transformer. CoRR, abs/2002.08264. Jinyoung Park, Seongjun Yun, Hyeon-Jin Park, Jaewoo Kang, Jisu Jeong, Kyung-Min Kim, Jung-Woo Ha, and Hyunwoo J. Kim. 2022. Deformable graph transformer. *CoRR*, abs/2206.14337. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems 32: Annual Conference* on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024–8035. Nils Reimers and Iryna Gurevych. 2019. Sentencebert: Sentence embeddings using siamese bertnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980–3990. Association for Computational Linguistics. Hassan A. Sleiman and Rafael Corchuelo. 2013. A survey on region extractors from web documents. *IEEE* Transactions on Knowledge and Data Engineering, 25(9):1960–1981. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 49, 2017, Long Beach, CA, USA, pages 5998–6008. Petar Velickovi ˇ c, Guillem Cucurull, Arantxa Casanova, ´ Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In *International* Conference on Learning Representations. 
Qifan Wang, Yi Fang, Anirudh Ravula, Fuli Feng, Xiaojun Quan, and Dongfang Liu. 2022. Webformer: The web-page transformer for structure information extraction. In *Proceedings of the ACM Web Conference 2022*, WWW '22, pages 3124–3133, New York, NY, USA. Association for Computing Machinery. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Qitian Wu, Wentao Zhao, Zenan Li, David Wipf, and Junchi Yan. 2022. Nodeformer: A scalable graph structure learning transformer for node classification. In *Advances in Neural Information Processing Systems*. Yongqin Xian, Christoph H. Lampert, Bernt Schiele, and Zeynep Akata. 2019. Zero-shot learning - A comprehensive evaluation of the good, the bad and the ugly. *IEEE Trans. Pattern Anal. Mach. Intell.*, 41(9):2251–2265. Huimin Xu, Wenting Wang, Xin Mao, Xinyu Jiang, and Man Lan. 2019. Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5214–5223, Florence, Italy. Association for Computational Linguistics. Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and TieYan Liu. 2021. Do transformers really perform badly for graph representation? In *Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems* 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 28877–28888. Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank J. Reddi, and Sanjiv Kumar. 2020. Are transformers universal approximators of sequence-to-sequence functions? In *8th International Conference on Learning Representations,* ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, KDD '18, pages 1049–1058, New York, NY, USA. Association for Computing Machinery. Yichao Zhou, Ying Sheng, Nguyen Vo, Nick Edmonds, and Sandeep Tata. 2021. Simplified dom trees for transferable attribute extraction from the web. ![11_image_0.png](11_image_0.png) ## A Dataset Details A.1 Dataset Construction We build WEBPT to realize a quantitative analysis of different PT extraction methods. WEBPT is a collection of hub pages relevant to a set of pre-defined SIs. Its construction process mainly consists of 5 steps: 1) defining SIs; 2) crawling hub pages; 3) processing HTML documents; 4) labeling documents; and 5) splitting data points. Defining Shopping Interests As the first step, we establish a set of SIs through brainstorming. Particularly, we focus on popular activities, sports, hobbies and special events. Please check Table 6 and 7 for a complete list of SIs. Crawling Hub Pages The hub pages are the webpages, each providing PTs related to a specific SI. 
Due to the variety of SIs, it is infeasible to focus on one or several websites for hub page collection. For example, a website specializing in sports is unlikely to provide information on "sewing", and vice versa. In addition, gathering information from different websites may reduce the bias likely present in any single website, in the spirit of the law of large numbers. Considering this situation, we take advantage of Google Search with a simple query selector to locate the hub pages. Each SI C is combined with the suffixes "equipment list", "supply list", "tool list" and "checklist" before being fed into the search engine for querying. The system selects the combination with the largest number of results, whose top-100 query results are saved for later usage. We keep only the HTML pages and discard other documents such as PDFs or CSVs, so the actual number of saved documents may vary. Processing HTML Documents This step aims to simplify the DOM tree structure to facilitate PT extraction. The raw DOM tree is cluttered with decorative and supporting scripts irrelevant to the content, which easily submerges the useful information we want to extract and inflates the false-positive rate. We prune the trees by removing all headers, footers, and leaf nodes with empty text sequences. Then, to reduce the tree depth, we delete the nodes with only one child and directly connect their children, together with the subsequent subtrees, to their parents. The process is illustrated in Figure 5. Experiments show that this HTML processing strategy successfully simplifies the DOM structure without sacrificing any targeted content. Labeling Documents and Splitting Data Points These two steps are sufficiently discussed in § 5.1 and will not be repeated. The only supplement is that the heuristic method used for initializing the noisy labels and compared in Table 1 is empirically developed. We omit its discussion since it is complex and not the focus of this paper. Data Processing for Transformers One limitation of Transformer models such as BERT and TRENC is that they need to constrain the length of the input sequence |V|, since the complexity of the self-attention mechanism is O(|V|^2) and easily explodes when |V| is too large. Considering this drawback, for the node Transformers including Graphormer and TRENC, we set 512 as the maximum size of a DOM tree and split those that exceed this size. In addition, we guarantee that each split tree has 64 nodes at minimum. Figure 6 shows an example of the separation process. ## A.2 Dataset Statistics We present the dataset statistics in Table 4. DOM trees are larger than molecular graphs but significantly smaller than knowledge graphs.
| Attribute | Value |
|---------------------------|---------|
| # Shopping Interests | 95 |
| # DOM Trees | 453 |
| # Total Nodes | 94,167 |
| # Leaf Nodes | 70,161 |
| # Positive PT Nodes | 12,548 |
| Average # Nodes per Tree | 207.87 |
| Maximum # Nodes in a Tree | 2,748 |
| Minimum # Nodes in a Tree | 19 |
| Median # Nodes in a Tree | 156 |
| Average Tree Depth | 7.06 |
| Maximum Tree Depth | 18 |
| Minimum Tree Depth | 3 |
| Median Tree Depth | 7 |
| Average # Trees per SI | 4.77 |
| Average # Nodes per SI | 991.23 |
| Maximum # Nodes for an SI | 3,050 |
| Minimum # Nodes for an SI | 363 |
| Median # Nodes for an SI | 935 |

Table 4: Statistics of the WEBPT dataset.

## A.3 Labeling Quality The dataset is labeled by one individual as the task is straightforward. To investigate the labeling quality, we randomly select 25 DOM trees, removing their original labels and presenting them to 2 individuals for re-labeling. Table 5 presents the statistics and results. It shows that our labeling quality is decent despite some inevitable disagreements on ambiguous cases, as exemplified in Figure 7.

| Attribute | Value |
|---------------------|---------|
| # DOM Trees | 25 |
| # Total Nodes | 5,938 |
| # Positive PT Nodes | 683 |
| # Disagreements | 87 |
| Fleiss' κ | 98.53 |

Table 5: Annotation quality investigation.

Figure 7: Two examples of ambiguous labeling cases (e.g., a parent node "Shirt" with descendant nodes "Men's shirt" and "Women's shirt", or the PT phrase "shirt" repeated across nested nodes).

## A.4 Data Usage All Web pages used by WEBPT are included in the Common Crawl repository (https://commoncrawl.org/). They are intended to provide information on a topic or interest, so consistent with that idea, we labeled the product types on each page. The labels do not contain any personally identifiable information. We are making the annotated dataset available to encourage further research on the product type extraction problem. The WEBPT dataset is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. ## B Implementation Details B.1 TRENC Hyper-Parameters We set the model dimensionality dmodel = 128 and the number of TRENC layers N = 12. Each attention branch has 4 attention heads, and the single-head attention dimensionality is dk = 32. The feed-forward layer above the attention layer (Figure 2) first maps the features from dmodel to a 512-dimensional latent space and then maps them back. The classification layer consists of 2 FFN sublayers that first downscale the TRENC layer output to 16 dimensions and then to the 1-dimensional output logit yˆ. We use the same activation functions and dropout strategy as described in (Vaswani et al., 2017). Our experiments show that the performance remains similar when we use 6 or 8 as the number of heads or use model dimensionality dmodel = 512. We train the model using 10−4 as the peak learning rate of the AdamW optimizer (Loshchilov and Hutter, 2019) with a linear scheduler and a 0.1 warmup ratio. The batch size is 8 and the random seed is 42. We do not perform multiple runs for each model on each dataset, as our dataset and evaluation strategies (§ 5.1) can minimize the impact of random factors. Using another random seed (0) only changes the F¯1 scores of TRENC and GCN by 0.03 and 0.05, respectively.
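For convenience, the hyper-parameters reported above (plus the tree-size limits from appendix A.1) can be collected into a single configuration sketch; the dictionary and its field names are ours, not part of the released code.

```python
# Values as reported in Appendix B.1 and A.1; field names are illustrative.
TRENC_CONFIG = {
    "d_model": 128,                 # model dimensionality
    "num_layers": 12,               # number of TRENC layers N
    "num_heads_per_branch": 4,      # attention heads in each branch
    "d_k": 32,                      # single-head attention dimensionality
    "ffn_hidden_size": 512,         # feed-forward layer above the attention
    "classifier_hidden_size": 16,   # first FFN sublayer of the classification head
    "peak_learning_rate": 1e-4,     # AdamW with a linear schedule
    "warmup_ratio": 0.1,
    "batch_size": 8,
    "random_seed": 42,
    "max_nodes_per_tree": 512,      # larger DOM trees are split
    "min_nodes_per_split_tree": 64,
}
```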
The model is implemented with the "Transformers" library (Wolf et al., 2020) in PyTorch (Paszke et al., 2019). The hyper-parameters not mentioned above keep their default values.

6https://commoncrawl.org/

## B.2 Baseline Methods

Text Similarity We adopt the same approach as described in § 4.1 with the uncased BERT-base model to generate the text sequence embedding em of each node Vm and the concept embedding c. Then, we compute their cosine similarity through

$$\mathrm{sim}_{m}=\frac{e_{m}^{\mathsf{T}}c}{\|e_{m}\|\|c\|}\in(0,1).$$

We decide the classification threshold by exhaustively searching values in (0, 1) with a step of 0.01 and selecting the one that gives the largest F1 score (see the sketch at the end of this subsection). Note that this threshold-searching method is only applied to the text similarity baseline. The others take a constant threshold of 0.5, as described in § 4.3.

BERT-FT BERT-FT classifies each node Vm independently by fine-tuning the uncased BERT-base model on a sequence classification task. The model input is the combination of the sequence Sm and the concept C, *i.e.*, "[CLS] Sm [SEP] C [SEP]". It does not consider the tag tm. We append a one-layer FFN to the embedding corresponding to the [CLS] token to map it to a 1-dimensional logit. The training objective is minimizing the binary cross-entropy.

MLP MLP can be considered as a TRENC model without TRENC layers. In other words, it directly feeds the node embeddings e (§ 4.1) into the classification layer (Figure 2) without considering any inter-dependencies between nodes. For a fair comparison, we increase its classification layer depth until the validation F1 stops improving.

GNNs Similar to MLP, the GNN baselines substitute the TRENC layers in the TRENC model with GCN and GAT layers, respectively. The GNN layers are implemented with the "PyTorch Geometric" library (Fey and Lenssen, 2019). The number of GNN layers is tuned according to the validation performance.

Graphormer We take the original implementation of Ying et al. (2021) and keep all model components. The differences are that we initialize the node features with the node embeddings e instead of atom categories, and we train the model with node classification instead of graph classification. We keep its scheme for encoding edges but introduce only one edge category representing the ancestor-descendant relationship.
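As referenced above, here is a small sketch of the text-similarity baseline's decision rule and its threshold search, assuming the node embeddings and the concept embedding have already been produced by the uncased BERT-base encoder (not shown). The function names and the use of scikit-learn for F1 are our own choices, not the authors' code.

```python
import numpy as np
from sklearn.metrics import f1_score


def cosine_similarity(node_embeddings: np.ndarray, concept: np.ndarray) -> np.ndarray:
    """sim_m = e_m^T c / (||e_m|| ||c||) for every node m."""
    num = node_embeddings @ concept
    den = np.linalg.norm(node_embeddings, axis=1) * np.linalg.norm(concept)
    return num / den


def search_threshold(similarities: np.ndarray, labels: np.ndarray):
    """Exhaustively try thresholds in (0, 1) with a 0.01 step and keep the one
    that maximizes F1 (only used for this baseline; the others use 0.5)."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.arange(0.01, 1.0, 0.01):
        preds = (similarities >= t).astype(int)
        f1 = f1_score(labels, preds, zero_division=0)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```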
SI WEBPT-n ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) ![13_image_2.png](13_image_2.png) ![13_image_3.png](13_image_3.png) ![13_image_4.png](13_image_4.png) 1 2 3 4 5 3d-printing tr tr tr tr tr airsoft-paintball tr tt tr tt tr archery tr tr tt vl tr astronomy tr tr tr tr tr at-home-fitness tr vl tr tt tt at-home-spa tt tr tr vl tr badminton tr vl tr tr vl baking tr tr vl tr tr bartending vl tr tr tr tr baseball tr tr tr tr tr basketball tr tr vl tr tt billiards-pool tt vl vl tt tr bird-watching tr tt tr tt tr boating tr tr tt tr tr bowling tr tt tt tr vl boxing tr tr tr tr tr calligraphy tr tr vl tr vl camping tr tr tr tr tr candle-making tr tr tr tt vl canoeing tr tt tr tr tr cheerleading tr tr tr tr tr cleaning tr tr tr tr tr climbing tt tr tr tr vl coffee tr tr tr tr tr comics-manga tr tr tr tr tr content-creation tt vl tt tr vl cricket tr tr vl tt tr crossfit vl tr tr tr tr cycling tr tr tt tr tr digital-art tr tr tr vl tr diy-home-improvement tr tr tr tr tr dj tr tr tr tr tr drag-queen tr tr tr tr tt drawing-and-sketching tr tr tt tr tr fencing tr tr tt tr tr field-hockey tr vl vl tr tr fishing tt vl tr tr tr floral-arranging tr tr tr tr tr football vl tr tr tr tr gaming tt tr tr tr tr gardening tr tt vl vl vl golfing tr tr tr tr tr gymnastics vl tr tr tr tr hair-care tr tr vl vl tt hiking tt tr tr tr tr hockey tr vl tr tr tr home-entertainment tt vl tr tr tt home-schooling tr tr tr tr tr horse-riding tr tr tr tr tt ![13_image_5.png](13_image_5.png) indoor-plants tr tr tt vl tr interior-design tr tr tr vl tr kayaking tr tr tr tr vl knitting tr tt tr tr tr lacrosse tr tt tt tr tr leathercraft vl tt tr tr tr makeup tr tr tr tr tt model-trains tr tt tr tr tr music-production vl tr tr tr tr nails tr tr tt tr tr painting tr tr tr tr tt paper-crafting tr tr tr tr tr parenting tr tr tr tr tr party-planning tr tr tr tr tt pet tt tr tr tt tr pilates tr tr tr tr tr pottery tt tr tr tr tr rugby tr tr tr tt tr running tr tt tr tr tr sailing tr tr tr tr tr scrapbooking tr tr tr tr tr scuba-diving tr tr tt tr tt sewing tr tr tr tr tr skating tr tr tr tr tr skiiing tt tr tr tr tr skin-care vl tr vl tr tr smart-home tr tr tr tr tr snowboarding vl tr tr tr tr soap-making vl tr tr tr tt soccer tr tr tr tt vl softball tr tr tr tr tr storage-and-organization tr tt tr tr tt student-dorm tr tt tr tr tr surfing tr tr tr vl tr swimming tr tr tr tt tr table-tennis tr tr tt tr tt teaching tt vl tr tt tr tennis tt tr tr tr tr travel tr tr tt tr tr volleyball tr tr tr tr tr weaving-and-spinning tt vl vl vl tr wedding tr tt tr tt tr wine tr tr tr vl vl work-from-home vl tt tr tt tt wrestling tr tr tr tr tr yoga tr tr tt tt tr Table 7: SIs and splits (cont.). ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 ✗ A2. Did you discuss any potential risks of your work? Our work is totally built upon publicly accessible materials. To the best of our knowledge, It will not trigger any ethical or safety concerns. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section "Abstract" and section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** This topic is mainly discussed in appendix A and appendix B. ✓ B1. Did you cite the creators of artifacts you used? Appendix B ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix A.4 ✓ B3. 
Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix A.4 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Appendix A.4 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendix A ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5.2 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 5 and Appendix A ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? The annotation guideline is straightforward and made orally. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No additional compensation was provided. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
ai-fang-2023-multilingual
Multilingual Pre-training with Self-supervision from Global Co-occurrence Information
https://aclanthology.org/2023.findings-acl.475
Global co-occurrence information is the primary source of structural information on multilingual corpora, and we find that analogical/parallel compound words across languages have similar co-occurrence counts/frequencies (normalized) giving weak but stable self-supervision for cross-lingual transfer. Following the observation, we aim at associating contextualized representations with relevant (contextualized) representations across languages with the help of co-occurrence counts. The result is MLM-GC (MLM with Global Co-occurrence) pre-training that the model learns local bidirectional information from MLM and global co-occurrence information from a log-bilinear regression. Experiments show that MLM-GC pre-training substantially outperforms MLM pre-training for 4 downstream cross-lingual tasks and 1 additional monolingual task, showing the advantages of forming isomorphic spaces across languages.
# Multilingual Pre-Training With Self-Supervision From Global Co-Occurrence Xi Ai College of Computer Science Chongqing University barid.x.ai@gmail.com ## Abstract Global co-occurrence information is the primary source of structural information on multilingual corpora, and we find that analogical/parallel compound words across languages have similar co-occurrence counts/frequencies (normalized) giving weak but stable selfsupervision for cross-lingual transfer. Following the observation, we aim at associating contextualized representations with relevant (contextualized) representations across languages with the help of co-occurrence counts. The result is MLM-GC (MLM with Global Cooccurrence) pre-training that the model learns local bidirectional information from MLM and global co-occurrence information from a logbilinear regression. Experiments show that MLM-GC pre-training substantially outperforms MLM pre-training for 4 downstream cross-lingual tasks and 1 additional monolingual task, showing the advantages of forming isomorphic spaces across languages. ## 1 Introduction Empirical studies (Lample et al., 2018a; Conneau et al., 2020a,c) show multilinguality and crosslinguality emerge from MLM pre-training on multilingual corpora without any supervision. The model is trained/pre-trained as a generator that yields masked token probabilities over the vocabulary. To improve cross-lingual transfer, we present MLM-GC (MLM with Global Co-occurrence) with the combined objective of the generator and a global log-bilinear regression for multilingual pretraining. Our starting point is from two observations on multilingual MLM pre-training. Language's structural information is every property of an individual language that is invariant to the script of the language. Conneau et al. (2020c); Karthikeyan et al. (2020); Sinha et al. (2021); Pires et al. (2019) show that structural similarities across languages can contribute to cross-lingual transfer. Bin Fang College of Computer Science Chongqing University fb@cqu.edu.cn Co-occurrence information or n-gram is the primary source of structural information available to all methods. Some methods like span-based masking (Devlin et al., 2019; Joshi et al., 2020; Levine et al., 2021) now exist to leverage this information for new masking schemes in *monolingual* MLM pre-training, aiming at improving context understanding. However, in *multilingual* MLM pre-training, the question still remains as to how meaning is generated from these statistics on multilingual corpora, how the structural similarities could be learned from that meaning across languages, and how cross-lingual transfer might be improved from that meaning. Furthermore, GloVe (Pennington et al., 2014) prove that leveraging global co-occurrence information can search for relevant information on *monolingual* embedding space. Inspired by GloVe, we assume that global co-occurrence information can also be used to search for relevant information across languages on *multilingual* corpora. This assumption underlies Zipf's law (Ha et al., 2002; Søgaard, 2020) that analogical words and compound words across languages might have similar frequencies/counts on the multilingual corpora. Our empirical studies further justify our assumption that analogical/parallel compound words across languages have similar co-occurrence counts (normalized). Meanwhile, in multilingual MLM pre-training, one of the ultimate goals is to form contextualized representations. 
Then, global co-occurrence information might be used to regularize representation learning in multilingual MLM pre-training, which allows for better contextualized representations in cross-lingual transfer. In this work, we present MLM-GC to utilize global co-occurrence information. MLM-GC builds on MLM with an extra objective of global log-bilinear regression that minimizes the error between dot products of neighboring contextualized representations and the matrix of global cooccurrence counts. Since MLM only needs to predict masked tokens, we only consider the contextualized representations of the masked tokens and their neighbors, factorizing relevant global cooccurrence counts from the matrix. The model is pre-trained to learn bidirectional information from MLM and the global co-occurrence information from the global log-bilinear regression. On multilingual corpora, MLM-GC pre-training can improve cross-lingual transfer because analogical/parallel compound words across languages might have similar co-occurrence counts allowing for cross-lingual transfer, which is justified in our empirical studies on translation pairs. We have three contributions. 1) We present MLM-GC pre-training for multilingual tasks. The model is additionally supervised by co-occurrence counts on multilingual corpora. 2) MLM-GC pre-training outperforms MLM pre-training on 4 multilingual/cross-lingual tasks. The objective of MLM-GC can be adapted to encoder-decoderbased MLM models, e.g., MASS (Song et al., 2019) and encoder-based MLM models, e.g., XLM (Lample and Conneau, 2019). MLM-GC pre-training can also work on monolingual corpora for language understanding tasks. 3) MLM-GC pre-training can help the model to form isomorphic embedding spaces across languages, which is potentially useful for cross-lingual and multilingual tasks. Our empirical study shows that analogical compound words across languages have similar co-occurrence counts (normalized) contributing to structural similarities across languages for cross-lingual transfer. ## 2 Related Work And Comparison Structural Similarity and Zipf's Law Zipf's law (Zipf, 1949, 2013; Søgaard, 2020) indicates that words or phrases appear with different frequencies, and one may suggest analogical words or phrases appear with relatively similar frequencies in other languages. In multilingual MLM pre-training, Conneau et al. (2020c); Karthikeyan et al. (2020); Pires et al. (2019); Karthikeyan et al. (2020); Sinha et al. (2021) shed light on studying structural information and find that structural similarities across languages are essential for multilinguality, where in this paper, structural similarities mean similar counts as Zipf's law indicated. We follow this line, consider structural similarities from co-occurrence counts, and provide an empirical study to observe how the model learns structural similarities from global co-occurrence counts on multilingual corpora. Meanwhile, GloVe (Pennington et al., 2014) report that co-occurrence counts can provide regularities for embeddings to understand word analogies for monolingual tasks. We extend the scope of GloVe to contextualized representations and multilingual tasks, helping the model form isomorphic spaces across languages in multilingual MLM pre-training. N-gram, Co-occurrence, and Regularity in MLM pre-training Studying co-occurrence or n-gram is not a novel idea in MLM pre-training. 
Whole Word Masking (Devlin et al., 2019), SpanBERT (Joshi et al., 2020), and PMI-Masking (Levine et al., 2021) suggest masking *n-gram* spans across several sub-tokens to improve context understanding in monolingual tasks, because otherwise the model may only learn from easier multi-tokens instead of usefully hard context, where easier multi-tokens are a subset of the context and result in sub-optimization. In contrast, we show that co-occurrence counts can refine contextualized representations for improving context understanding and allow for better cross-lingual transfer, suggesting a new objective for MLM pre-training instead of a new masking scheme to capture global co-occurrence information in multilingual pre-training. On the other hand, for cross-lingual transfer, the contextualized representations could be further regularized and refined by aligning cherry-picked pairs after MLM pre-training on multilingual corpora (Ren et al., 2019; Chaudhary et al., 2020; Wang et al., 2020; Cao et al., 2020; Aldarmaki and Diab, 2019; Artetxe et al., 2020; Ai and Fang, 2021). Compared to that, MLM-GC pre-training does not require dictionaries, translation tables, or statistical machine translation models.

## 3 Approach

## 3.1 Global Regression Modeling In Monolingual Embedding Space

GloVe (Pennington et al., 2014) presents a log-bilinear regression model:

$$\mathcal{L}=\sum_{i,j=1}^{V}f(X_{w_iw_j})\left(E_{w_i}^{T}E_{w_j}-\log X_{w_iw_j}\right)^2,\tag{1}$$

where

$$f(x)=\begin{cases}(x/x_{max})^{\alpha},&x<x_{max}\\ 1,&\text{otherwise}\end{cases},$$

$V$ is the vocabulary, $E_w$ is the embedding of token $w$, $X$ stands for the matrix of token-token co-occurrence counts, entries $X_{w_iw_j}$ tabulate the number of times token $w_j$ occurs in the context of token $w_i$, and $x_{max}$ is empirically set to 100. The model is able to distinguish relevant embeddings from irrelevant embeddings and to discriminate between two relevant embeddings.

## 3.2 Global Co-Occurrence Modeling For Contextualized Representations

In MLM pre-training, when $w_t$ at the position $t$ is replaced by the artificial masking token $[\mathcal{M}]_t$, the final hidden state or the contextualized representation $H_{[\mathcal{M}]_t}$ of position $t$ is factorized from the final sequence representation of the input sentence to predict $w_t$ (with a *softmax* operation). We further factorize a neighboring contextualized representation $H_{w_k}$ (of $w_t$) at position $k$ for the neighboring token $w_k$. Note that $w_k$ could be masked (if span-based masking strategies are applied, e.g., MASS (Song et al., 2019)) or unmasked (e.g., XLM (Lample and Conneau, 2019)), and we test both scenarios in our experiments. Then, similar to global regression modeling in monolingual embedding space, we consider a regression model1:

$$\mathcal{L}_{[\mathcal{M}]_{t}w_{k}}=f(X_{w_{t}w_{k}})\left(H^{T}_{[\mathcal{M}]_{t}}H_{w_{k}}-\log X_{w_{t}w_{k}}\right)^{2}.\tag{2}$$

For all the neighboring tokens $w_{t\pm n}$ of the input sentence at positions $[t-n,...,t)\cup(t,...,t+n]$, i.e., excluding position $t$, we have the model $\mathcal{L}_{GC}$. Then, we employ the new global log-bilinear regression model in MLM pre-training. Formally, given the factorized $H_{w_{t\pm n}}$ and $H_{[\mathcal{M}]_t}$ from $H$ and $X_{w_tw_{t\pm n}}$ from $X$, we have the model:

$$\mathcal{L}_{GC}=\frac{1}{2n}\sum_{n}f(X_{w_{t}w_{t\pm n}})\left(\frac{H_{[\mathcal{M}]_{t}}^{T}H_{w_{t\pm n}}}{\sqrt{d}}-\log X_{w_{t}w_{t\pm n}}\right)^{2},\tag{3}$$

where $d$ is the model dimension.
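To make Eq. 3 concrete, the following is a minimal PyTorch sketch of the global log-bilinear regression term, assuming the masked position's hidden state, its 2n neighbors' hidden states, and the corresponding (normalized, positive) co-occurrence counts have already been gathered for a batch. Function and tensor names are ours, and the exponent α is not given in this excerpt, so GloVe's common choice of 3/4 is assumed.

```python
import torch


def gc_weight(x: torch.Tensor, x_max: float = 100.0, alpha: float = 0.75) -> torch.Tensor:
    # f(X) from Eq. 1: (x / x_max)^alpha below x_max, and 1 otherwise.
    return torch.clamp((x / x_max) ** alpha, max=1.0)


def gc_loss(h_masked: torch.Tensor,     # (batch, d): hidden states at masked positions
            h_neighbors: torch.Tensor,  # (batch, 2n, d): hidden states of the 2n neighbors
            cooc_counts: torch.Tensor   # (batch, 2n): counts X_{w_t w_{t±n}}, assumed > 0
            ) -> torch.Tensor:
    d = h_masked.size(-1)
    # Scaled dot products H_[M]^T H_w / sqrt(d), one per neighbor.
    dots = torch.einsum("bd,bkd->bk", h_masked, h_neighbors) / d ** 0.5
    sq_err = (dots - torch.log(cooc_counts)) ** 2
    # Averaging over the neighbor dimension realizes the 1/(2n) weight of Eq. 3;
    # the batch dimension is averaged as well.
    return (gc_weight(cooc_counts) * sq_err).mean()


# Combined objective of Eq. 4 (lambda = 1 in the experiments reported below):
# loss = mlm_loss + 1.0 * gc_loss(h_masked, h_neighbors, cooc_counts)
```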
Compared to Eq.1, we add scaling √d and weight 1 2n to make training stable, where √d is inspired by scaled dotproduct attention (Vaswani et al., 2017) to prevent the dot products get large. They serve as principled hyper-parameters. To obtain the matrix of token-token cooccurrence counts on multilingual corpora for multilingual tasks, we follow GloVe's suggestion that a distance weight scheme is employed. Specifically, in a context window of size 2n + 1, we calculate the token-token co-occurrence counts for positions [t − *n, ..., t, ..., k* + n] with the rule [c*lang*/(n + 1), ..., clang/2, 0, clang/2, ..., c*lang*/(n + 1)] over the shared vocabulary, which means we do not calculate the unigram counts or self-co-occurrence Xwtwtfor the centric token wt at position t. Meanwhile, we are aware that the probability is not normalized and equivalent to token-token co-occurrence counts on the multilingual corpora. However, not all the languages have the same amount of samples in the corpora (e.g., low-resource vs. high-resource). Considering this, we use the language-wise constant c*lang* = CEn/C*lang*, where CEn is the total number of tokens in English corpora, and C*lang* is the total number of tokens in the language *lang*, i.e., cooccurrence counts are normalized by c*lang*. ## 3.3 Multilingual Mlm-Gc Pre-Training In multilingual pre-training, we have a combined objective of MLM and global co-occurrence modeling2, attempting to train the model to understand the masked tokens from bidirectional information and linguistic structures surrounding the masked tokens from global co-occurrence counts, and the result is our MLM-GC pre-training: $$\mathcal{L}_{M L M-G C}=\mathcal{L}_{M L M}+\lambda\mathcal{L}_{G C}.$$ $$(4)$$ In the early experiment, we experiment with λ ∈ {0.1, 0.5, 1, 2}. We find λ = 1 is a general choice for experiments. On the other hand, we find warm_up (Vaswani et al., 2017) of lr, √d, and 1 2n (Eq. 3) are significant. The model might collapse to LGC without *warm*_up, √d, or 1 2n because LGC converges too fast and is unstable. In this situation, the model ignores the objective of MLM. Then, the model can only learn co-occurrence information and does not learn the language knowledge. The result is presented in Table 1. ## Improved Contextualized Representation LGC considers the correspondence in the context [t − n, ..., t) ∪ (*t, ..., t* + n] with an explicit objective. In this way, the model is encouraged to learn from usefully hard context instead of easier multitokens under the supervision from co-occurrence information, where easier multi-tokens are in a subset of the context and result in sub-optimization 2We discuss the scope and limitation in §Limitation. 1We provide an alternative in Appendix D ![3_image_0.png](3_image_0.png) (Levine et al., 2021), as discussed in §Related Work. Meanwhile, co-occurrence counts help the model disambiguate word representations (Ai and Fang, 2022) in language modeling by distinguishing relevant information from irrelevant information and discriminating between the two relevant information in the language. Improved Cross-lingual Transfer With the objective of LGC, we aim at associating HT [M]t Hwt±n with HT [M]t˜ Hw˜t˜±nof different languages if Xwtwt±n = Xw˜t˜w˜t˜±n , where compound words wtwt±n and w˜t˜w˜t˜±n are analogical in different languages. 
In this way, it underlies the basic assumption that analogical compound words across languages have similar co-occurrence counts (normalized by c*lang*), i.e., wtwt±n and w˜t˜w˜t˜±n are analogical compound words =⇒ Xwtwt±n = Xw˜t˜w˜t˜±n . Although Zipf's law supports this assumption (Ha et al., 2002; Søgaard, 2020) in linguistics, we are still interested in the questions: how it reflects on the multilingual corpora we use and whether analogical pair of wtwt±n and w˜t˜w˜t˜±n =⇒ Xwtwt±n = Xw˜t˜w˜t˜±n . To answer these questions, we extract all the pairs of parallel compound words in En and De from the open-source translation tables (OPUS, Wikipedia v1.0)⋄, e.g., "ist die" (De) and "is the" (En), and compute co-occurrence counts on {*De, En*} Wikipedia dumps (the same dataset we use in our experiment). For any pair, we compute the absolute difference |log(De) − log(En)|, the sum log(De) + log(En) (sorted into bins), and the ratio |log(De)−log(En)|/(log(De)+log(En)) for statistics in Figure 1. The figure tells us that the absolute difference avg and the ratio avg for all the pairs are relatively small and have narrow confidence (95%) intervals. Although the absolute difference avg is proportional to the sum, the ratio avg has no proportional relationship with the sum and is small throughout all the bins. Note that some pairs have low translation scores resulting in large absolute differences. The absolute difference avg is not 0, i.e., an exact match for any pair. However, it still confirms that analogical compound words across languages have similar (but not identical) co-occurrence counts, which might give weak (not 0) but stable (relatively small with high confidence) self-supervision for cross-lingual transfer. Meanwhile, the model is encouraged to distinguish relevant information from irrelevant information and to discriminate between the two relevant information across languages from co-occurrence counts and refine contextualized representations accordingly, which is beneficial for cross-lingual transfer. For example, in our experiment (n = 2), given the translation pair "ist die" (De) and "is the" (En), the relevant pair "ist die" and "is a" (En), the irrelevant pair "ist die" and "locally known" (En), we find |log(ist die) − log(*is the*)| = 0.67 < |log(ist die) − log(*is a*)| = 1.73 < |log(ist die) − log(*locally known*)| = 5.45 < |log(ist die) − log(*En avg*)| = 5.58, where log(*En avg*) is the avg co-occurrence counts in En. Efficiency 1) Computing the co-occurrence matrix is laborious on large corpora. However, it requires a single pass through the entire corpora to collect the statistics, which is a one-time up-front cost and is easy to obtain new information from new corpora for updating. 2) For memories, the co-occurrence matrix is huge, e.g., ≈ 11 G for a 60k BPE vocabulary with float 32. However, it is somewhat trivial because the memory is allocated to CPUs, not GPUs. This can be automatically finished by DL platforms like TensorFlow. Also, the matrix can be formatted to float 16 or even float 8 by pre-*logging* the co-occurrence counts, which will significantly reduce the memory. 3) Meanwhile, we save the token-token cooccurrence matrix as dictionaries {(wi, wj ): tokentoken co-occurrence counts} so that querying the co-occurrence counts for Xwiwj is O(1). Tokenization Sub-token-level vocabularies may impact the co-occurrence counts. In extreme cases, several connective tokens of co-occurrence may only come from one word. 
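As a concrete illustration of the counting scheme from Section 3.2 and the dictionary storage noted under Efficiency, the sketch below builds distance-weighted, language-normalized co-occurrence counts over (sub-)tokenized sentences. It is our own reading of the description rather than the released code; the function signature and data layout are assumptions.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple


def build_cooccurrence(corpus: Iterable[Tuple[str, List[str]]],
                       total_tokens: Dict[str, float],
                       n: int = 2) -> Dict[Tuple[str, str], float]:
    """corpus yields (language, token_list); total_tokens maps language -> C_lang.

    A token at distance `dist` from the center receives weight c_lang / (dist + 1),
    matching the rule [c_lang/(n+1), ..., c_lang/2, 0, c_lang/2, ..., c_lang/(n+1)];
    the center itself (distance 0) is not counted.
    """
    counts: Dict[Tuple[str, str], float] = defaultdict(float)
    for lang, tokens in corpus:
        c_lang = total_tokens["En"] / total_tokens[lang]  # language-wise normalization
        for t, w_t in enumerate(tokens):
            for dist in range(1, n + 1):
                for k in (t - dist, t + dist):
                    if 0 <= k < len(tokens):
                        counts[(w_t, tokens[k])] += c_lang / (dist + 1)
    # Plain dictionary gives O(1) lookup of X_{w_i w_j}, as noted in the Efficiency paragraph.
    return dict(counts)
```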
However, As discussed in §Improved Contextualized Representation, even in this scenario, the model can be improved from the co-occurrence counts in MLM-GC pre-training. See experiments in Appendix B. ![4_image_0.png](4_image_0.png) ## 4 Experiment All the links of datasets, libraries, scripts, and tools marked with ⋄ are listed in Appendix E. A preview version of the code is submitted, and we will open the source code on GitHub. ## 4.1 Mlm Instance, Configuration, Data Preprocessing And Pre-Training We use XLM (Lample and Conneau, 2019) and MASS (Song et al., 2019) as the MLM instances, where XLM is a token-based encoder model, and MASS is a span-based encoder-decoder model (see Appendix §A.1 for more details). The Transformer configuration is identical to XLM and MASS, where word embeddings, hidden states, and filter sizes are 1024, 1024, and 4096 respectively (**default**). To be fair, we reimplement all the baseline models with our configurations, using official XLM⋄, Tensor2Tensor⋄, and HuggingFace⋄ as references. We compare the results of our reimplementation with the reported results on the same test set to ensure the difference less than 2% in overall performance (Appendix C). For the context window size 2n + 1 of the co-occurrence counts and Eq.4, **we set** n = 2 **for all the experiments**, which is decided by our *dev experiment*. Data preprocessing is identical to XLM and Mass. Specifically, we employ fastBPE⋄ to learn BPE (Sennrich et al., 2016b) with a sampling criterion from Lample and Conneau (2019) for all the experiments. To tokenize {*Zh, T h, Ne*}, we use Stanford Word Segmenter⋄, PyThaiNLP⋄, and Indic-NLP Library⋄, respectively. For the others, we use the Moses tokenizer⋄ with default rules. Our code is implemented on Tensorflow 2.6 (Abadi et al., 2016). We use Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9,β2 = 0.999, ϵ = 1e − 8, *warm*_up = 10000 (Vaswani et al., 2017) and lr = 1e − 4. We set dropout regularization with a drop rate *rate* = 0.1. The mini-batch size is set to 8192 tokens. ## 4.2 Multilingual Task Readers can refer to Appendix §A.2 or references for more introductions to these tasks. Cross-lingual Embedding We attempt MUSE⋄ (Lample et al., 2018a) tasks that measure similarities between two paired words to generally evaluate the degree of the isomorphism of languages' embedding spaces. As discussed in Lample and Conneau (2019); Wang et al. (2020) and our preliminary experiment, the performance of the isomorphism is potentially proportional to the performance of cross-lingual transfer. We treat this experiment as our *dev experiment* to search for n. UNMT UNMT (unsupervised neural machine translation) (Lample and Conneau, 2019; Lample et al., 2018b; Song et al., 2019; Liu et al., 2020) tackles bilingual translation (Bahdanau et al., 2015; Vaswani et al., 2017) on non-parallel bilingual corpora without any cross-lingual signal. ![5_image_0.png](5_image_0.png) Cross-lingual Classification We test XNLI⋄ (Conneau et al., 2020b) on 15 languages (including English) under the cross-lingual transfer setting. The model is pre-trained on multilingual corpora and fine-tuned on the English dataset, aiming at zero-shot classification for other languages. Cross-lingual Question Answering MLQA⋄ (Lewis et al., 2020b) on 7 languages (including English) requires identifying the answer to a question as a span in the corresponding paragraph. 
We pre-train the model on multilingual corpora and fine-tune it on the English dataset, and then we attempt zero-shot prediction for other languages. ## 4.3 Secondary Monolingual Task Recall that, as presented in Eq.2, H[M]t is the contextualized representation or the final hidden state. Therefore, MLM-GC pre-training is general and can work for other MLM instances such as BERT (Devlin et al., 2019), mBART (Liu et al., 2020), SpanBERT (Joshi et al., 2020), BART (Lewis et al., 2020a), and ALBERT (Lan et al., 2020). Meanwhile, MLM-GC pre-training is substantially better than MLM pre-training beyond multilingual tasks. We provide further experiments on monolingual tasks including SQuAD v1&v2 (Rajpurkar et al., 2016) in Appendix §B, using ALBERT as the MLM instance. ## 5 Result 5.1 Cross-Lingual Embedding And Understanding Co-Occurrence Setup We configure an identical MLM instance to XLM with a 12-layer Transformer encoder. However, instead of 80K BPE and 15 languages in the original work, we learn 60K BPE and pretrain the model on Wikipedia dumps⋄ of the 2 languages. After 400K pre-training steps, we extract the embeddings required by the test set from the embedding space of the model. For words split into 2+ sub-tokens, we average all the sub-token embeddings. See details in Appendix A.2.1. As mentioned early, this is our *dev experiment*. Performance We follow the instruction to compute the cosine similarity for the MUSE task, reporting the result in Table 2 for En ↔ De test sets. MLM-GC pre-training outperforms the baseline model with different n. A large n does not consistently improve performance. We suspect that a large n may impact the capacity of the contextualized representation, which makes the model hard to be trained. Furthermore, n = 2 shows the best performance, and we may explain that in our comparison of co-occurrence counts (Figure 1), n = 2 has slightly smaller absolute difference avg and the ratio avg and narrower confidence (95%) intervals in large-count bins (*> log*(1e7)) contributing to over 45% co-occurrence counts on the multilingual corpora. Since we do not inject any cross-lingual supervision into the embedding space, this test can quantitatively report how MLM-GC refines the language spaces from co-occurrence counts for the isomorphic space and multilinguality. ## Visualization And Multilingual Word Analogy We visualize all the words from the MUSE test sets. Since the task is originally designed for word translation including nouns, verbs, and other meaningful words, analogical words should be clustered and aligned in isomorphic spaces. As reported in Google's NMT (Johnson et al., 2017), the t-SNE can visualize isomorphic spaces across languages. Then, we employ the t-SNE visualization 3to observe the isomorphic space. Figure 2 shows that MLM-GC pre-training help the model learn to form a better isomorphic space than MLM pre-training. Another insight is from the classic analogy test: "English: *King - Man + Woman = Queen* and German: *König-Mann+Frau = Königin*", and we show results in Table 3. MLM-GC pre-training consistently improves the performance on monolingual tests (only English or German) and multilingual tests (mixing English with German). Then, we can further observe the effectiveness of our method in improving the quality of isomorphic spaces across languages. 
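As an illustration of the analogy probe reported in Table 3 and the sub-token averaging described in the setup above, the following sketch scores expressions such as cos(King − Man + Woman, Queen) against an embedding matrix. The tokenizer interface and function names are assumptions, and the extraction pipeline is simplified relative to the actual evaluation.

```python
import numpy as np


def word_embedding(word: str, tokenizer, embedding_matrix: np.ndarray) -> np.ndarray:
    """Average the embeddings of all sub-tokens a word is split into."""
    ids = tokenizer(word)  # assumed to return a list of sub-token ids
    return embedding_matrix[ids].mean(axis=0)


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


def analogy_score(a: str, b: str, c: str, target: str, tokenizer, emb: np.ndarray) -> float:
    """cos(a - b + c, target), e.g. cos(King - Man + Woman, Queen)."""
    x = (word_embedding(a, tokenizer, emb)
         - word_embedding(b, tokenizer, emb)
         + word_embedding(c, tokenizer, emb))
    return cosine(x, word_embedding(target, tokenizer, emb))
```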
## 5.2 Unmt Setup&Training We consider two similar language pairs {De, Ro} ↔ En from WMT⋄ (Bojar et al., 2018) and a dissimilar language pair 3We reduce the dimension of embeddings to 3 by using PCA and then configure the t-SNE visualization. ![6_image_1.png](6_image_1.png) En ↔ Ne (Nepali) from FLoRes⋄ (Guzmán et al., 2019). Transformer, configurations, corpora, and BLEU scripts are identical to XLM and MASS. We pre-train the model around 400K iterations on only monolingual corpora of the two languages. After MLM-GC pre-training, we follow XLM and MASS to train the model for translation from pre-trained weights. In the training phase, we use Adam optimizer with parameters β1 = 0.9, β2 = 0.997 and ϵ = 10−9, and a dynamic learning rate with warm_up = 8000 (learning_*rate* ∈ (0, 7e−4]) is employed. We set dropout with *rate* = 0.1 and label smoothing with *gamma* = 0.1. After around | X | cos (X , Queen) | cos(X , Königin) | | | |-------------------------|-------------------|--------------------|----------|------| | XLM | XLM+OURS | XLM | XLM+OURS | | | mono: King-Man+Woman | 0.44 | 0.46 | 0.35 | 0.39 | | mono: König-Mann+Frau | 0.33 | 0.42 | 0.45 | 0.52 | | multi: King-Man+Frau | 0.34 | 0.41 | 0.33 | 0.37 | | multi: King-Mann+Woman | 0.45 | 0.48 | 0.33 | 0.38 | | multi: King-Mann+Frau | 0.42 | 0.49 | 0.35 | 0.40 | | multi: König-Man+Woman | 0.35 | 0.39 | 0.44 | 0.49 | | multi: König-Man+Frau | 0.25 | 0.34 | 0.40 | 0.46 | | multi: König-Mann+Woman | 0.38 | 0.42 | 0.43 | 0.49 | ![6_image_0.png](6_image_0.png) Performance In Table 4, we report *multiBLEU.perl*⋄ to compare with XLM and MASS and sacreBleu⋄ to compare with mBART (Liu et al., 2020) so that the evaluation is based on the same BLEU script. MLG-GC pre-training consistently improves the performance of baseline models on all the similar language pairs by 3% ∼ 7% and on the dissimilar pair by 2.5 ∼ 5.0 BLEU. The performance on the dissimilar pair is competitive to SOTA: mBART and is better than mBART on similar language pairs. However, mBART uses CC25 (Wenzek et al., 2020) for pre-training and obtains benefits from more languages (25 languages) and samples. The global co-occurrence information across languages is general and abstract for isomorphic spaces, which allows for cross-lingual representations. It eventually helps the model understand translation knowledge. Meanwhile, we observe substantial gains on MASS + OURS (and ALBERT (Lan et al., 2020) in Appendix B), where MASS (ALBERT) is based on span masking. As discussed in §Related Work and Introduction, spanbased masking (also including Whole Word Masking (Devlin et al., 2019) and PMI-Masking (Levine et al., 2021) ) implicitly leverages co-occurrence information for improving context understanding. In addition to the empirical study in Figure 1, the gain further confirms that global co-occurrence information significantly injects some signals for crosslingual transfer beyond improving context understanding. ## 5.3 Cross-Lingual Classification Setup&Fine-tuning The model configuration, preprocessing, and corpora are identical to XLM4. For the classification objective, we deploy a linear classification layer on top of the encoder. Af- 4In the literature, this setup also refers to XLM-15. 
Model en fr es de el bg ru tr ar vi th zh hi sw ur Avg baseline 73.7 67.7 68.7 67.7 68.9 67.9 65.4 64.2 64.8 66.4 64.1 65.8 64.1 55.7 58.4 65.6 mBERT 82.1 73.8 74.3 71.1 66.4 68.9 69.0 61.6 64.9 69.5 55.8 69.3 60.0 50.4 58.0 66.3 12-layer Transformer encoder, 80K BPE, and 15 XNLI languages from Wikipedia dumps downloaded by WikiExtractor⋄. XLM 83.2 76.5 76.3 74.2 73.1 74.0 73.1 67.8 68.5 71.2 69.2 71.9 65.7 64.6 63.4 71.5 XLM + PMI-Masking ⋆ 84.1 78.4 77.8 76.6 75.1 75.5 74.9 69.7 70.8 73.0 70.7 73.4 68.1 66.1 65.3 73.3 XLM + OURS 84.9 78.6 78.7 77.5 76.2 77.1 74.8 71.5 72.6 75.7 72.6 76.2 68.2 67.5 66.5 74.6 + Parallel Sentences from OPUS⋄ XLM + TLM 85.0 78.7 78.9 77.8 76.6 77.4 75.3 72.5 73.1 76.1 73.2 76.5 69.6 68.4 67.3 75.1 XLM + TLM + OURS 85.0 79.5 79.4 78.5 77.3 78.0 76.2 73.1 74.0 76.8 74.0 77.1 70.5 70.0 68.5 75.9 Table 5: Results of cross-lingual classification on XNLI. ⋆ is reimplemented. Model en es de ar hi vi zh Avg ![7_image_1.png](7_image_1.png) mBERT-102 77.7 / 65.2 64.3 / 46.6 57.9 / 44.3 45.7 / 29.8 43.8 / 29.7 57.1 / 38.6 57.5 / 37.3 57.7 / 41.6 12-layer Transformer encoder, 80K BPE, and and 15 XNLI languages from Wikipedia dumps downloaded by WikiExtractor⋄. XLM 74.9 / 62.4 68.0 / 49.8 62.2 / 47.6 54.8 / 36.3 48.8 / 27.3 61.4 / 41.8 61.1 / 39.6 61.6 / 43.5 XLM + PMI-Masking ⋆ 76.0 / 63.9 69.2 / 50.2 64.1 / 48.0 55.8 / 38.0 49.8 / 28.5 62.9 / 42.2 63.3 / 40.5 63.1 / 44.4 XLM + OURS 77.7 / 65.9 71.5 / 51.1 65. 7 / 48.9 57.4 / 40.0 51.5 / 30.3 64.5 / 43.2 64.7 / 41.9 64.7 / 45.9 ter pre-training, we deploy the randomly initialized linear classifier and fine-tune the encoder and the linear classifier on the En NLI dataset with mini-batch size 16. We use Adam optimizer with lr = 5 × 10−4and linear decay of lr. After finetuning, we make zero-shot prediction for the other 14 languages. See details in Appendix A.2.3. Performance We report the result in Table 5. Our method consistently improves baseline models by 3.5% (Avg). As discussed in previous models (Conneau et al., 2020b; Karthikeyan et al., 2020; Wu and Dredze, 2019; Pires et al., 2019; Dufter and Schütze, 2020), multilinguality is essential for this task. Then, we confirm the effectiveness of MLM-GC pre-training. Furthermore, our method outperforms XLM + PMI-Masking (span-based). Similar to the comparison in UNMT, MLM-GC pre-training uses co-occurrence information for better context understanding and cross-lingual transfer, whereas XLM + PMI-Masking leverages cooccurrence information for context understanding but performs worse for cross-lingual transfer because of the lack of a mechanism to help crosslingual transfer. We also include XLM + TLM (Lample and Conneau, 2019) for comparison. In this experiment, XLM + TLM using parallel sentences in pre-training slightly outperforms MLMGC, which indicates the knowledge gap between co-occurrence information and parallel sentences for cross-lingual supervision. Besides, when applying MLM-GC pre-training for XLM + TLM, we still observe gains. We attribute the additional gains to the contextualized representations that are further refined by co-occurrence information to represent similar abstractions for cross-lingual transfer. Intuitively, the co-occurrence information gives ![7_image_0.png](7_image_0.png) extra cross-lingual supervision beyond a limited amount of parallel sentences. ## 5.4 Cross-Lingual Question Answering Setup&Fine-tuning The setup is similar to §Cross-lingual Classification. 
We follow the instruction of SQuAD from BERT, fine-tuning the model with a span extraction loss on the English dataset. We use Adam optimizer with lr = 5 × 10−5and linear decay of lr. As suggested, we finetune the model on SQuAD v1.1 (Rajpurkar et al., 2016) and then make zero-shot prediction for the 7 languages of MLQA. See details in Appendix A.2.4. Performance In Table 6, MLM-GC pre-training substantially improves the performance (Avg) in both F1 and EM metrics by 4.8 % and 5.0 % respectively. Meanwhile, MLM-GC pre-training yields more improvements for low-resource languages. We attribute all the improvements to the global cooccurrence objective the model learns in MLM-GC pre-training. Intuitively, spans (groups of words) of answers across languages are most likely to consist of nouns and terms and can be easily represented, clustered, and aligned in the improved isomorphic space because they are analogical and might have similar co-occurrence counts as discussed in the empirical study (Figure 1). ## 6 Conclusion In this work, we leverage the global co-occurrence information from multilingual corpora. The result is MLM-GC pre-training with a combined objective of MLM and global co-occurrence modeling. ![8_image_0.png](8_image_0.png) Our experiments show that MLM-GC pre-training can substantially improve the performance of naive MLM pre-training for 4 multilingual tasks, and additional experiments show that it can work for monolingual tasks. The isomorphic space across languages benefits from co-occurrence information, which allows for cross-lingual transfer. Meanwhile, the model is encouraged to distinguish relevant information from irrelevant information and to discriminate between the two relevant information across languages from co-occurrence counts (normalized) and refine contextualized representations accordingly. We believe that leveraging cooccurrence information for cross-lingual transfer is an interesting avenue in multilingual pre-training. ## 7 Limitation Theoretically, our method might benefit from comparable corpora across languages, where words and compound words might have similar distribution because Zipf's law might be satisfied only for similar domains. For instance, as presented in Figure 3, word distributions of De and En on Wikipedia are similar after applying BPE. In our experiments, we only confirm the effectiveness of our methods on Wikipedia corpora in different languages, which are comparable across languages. This might limit the scope of our method. However, multilingual models are commonly pre-trained on comparable corpora, e.g., Wikipedia and CC. Another limitation is about the combined objective in Eq. 4. In our experiments, we try to eliminate the MLM objective, only considering global regression modeling LGC. The result is not promising, and it seems that LGC can not work well without the help of the MLM objective. However, our experiment is very simple. This might be further confirmed or designed in future work. ## References Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. Tensorflow: A system for large-scale machine learning. In *12th USENIX Symposium on Operating Systems* Design and Implementation (OSDI 16), pages 265– 283. Xi Ai and Bin Fang. 2021. 
Empirical regularization for synthetic sentence pairs in unsupervised neural machine translation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 12471–12479. Xi Ai and Bin Fang. 2022. Leveraging relaxed equilibrium by lazy transition for sequence modeling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2904–2924, Dublin, Ireland. Association for Computational Linguistics. Hanan Aldarmaki and Mona Diab. 2019. Context-aware cross-lingual mapping. In *Proceedings of the 2019* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3906–3911, Minneapolis, Minnesota. Association for Computational Linguistics. Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics. Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In *3rd International* Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings. Ond rej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In *Proceedings of the First Conference* on Machine Translation, pages 131–198, Berlin, Germany. Association for Computational Linguistics. Ond rej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (wmt18). In Proceedings of the Third Conference on Machine Translation, pages 272–307, Belgium, Brussels. Association for Computational Linguistics. Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual Alignment of Contextual Word Representations. In *8th International Conference on Learning* Representations, ICLR 2020 - Conference Track Proceedings. Pi Chuan Chang, Michel Galley, and Christopher D Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In 3rd Workshop on Statistical Machine Translation, WMT 2008 at the Annual Meeting of the Association for Computational Linguistics, ACL 2008, pages 224– 232. Aditi Chaudhary, Karthik Raman, Krishna Srinivasan, and Jiecao Chen. 2020. Dict-mlm: Improved multilingual pre-training using bilingual dictionaries. arXiv preprint arXiv:2010.12566. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020a. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Alexis Conneau, Ruty Rinott, Guillaume Lample, Holger Schwenk, Veselin Stoyanov, Adina Williams, and Samuel R. Bowman. 2020b. XNLI: Evaluating crosslingual sentence representations. 
In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485. Association for Computational Linguistics. Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020c. Emerging cross-lingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022–6034, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Philipp Dufter and Hinrich Schütze. 2020. Identifying elements essential for BERT's multilinguality. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing*, pages 4423–4437, Online. Association for Computational Linguistics. Francisco Guzmán, Peng Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The Flores evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 6098–6111. Le Quan Ha, E. I. Sicilia-Garcia, Ji Ming, and F. J. Smith. 2002. Extension of Zipf's law to words and phrases. In COLING 2002: The 19th International Conference on Computational Linguistics. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. *Transactions of the Association for Computational Linguistics*, 5:339–351. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. *Transactions of the Association for Computational Linguistics*, 8:64–77. K Karthikeyan, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual bert: An empirical study. In *8th International Conference* on Learning Representations, ICLR 2020 - Conference Track Proceedings. Diederik P Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations,* ICLR 2015 - Conference Track Proceedings. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. In *Advances in* neural information processing systems. Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018a. Word translation without parallel data. 
In 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018b. Phrasebased & neural unsupervised machine translation. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 5039–5049, Brussels, Belgium. Association for Computational Linguistics. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In *8th International Conference on Learning Representations,* ICLR 2020 - Conference Track Proceedings. Yoav Levine, Barak Lenz, Opher Lieber, Omri Abend, Kevin Leyton-Brown, Moshe Tennenholtz, and Yoav Shoham. 2021. PMI-Masking: Principled masking of correlated spans. In *9th International Conference* on Learning Representations, ICLR 20201- Conference Track Proceedings. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020b. MLQA: Evaluating Cross-lingual Extractive Question Answering. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7315– 7330. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. Mauro Mezzini. 2018. Empirical study on label smoothing in neural networks. In WSCG 2018 - Short papers proceedings. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? pages 4996– 5001. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In *Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics, volume 2, pages 784–789. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuad: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Explicit cross-lingual pre-training for unsupervised machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 770–779, Hong Kong, China. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In *Proceedings of the* 54th Annual Meeting of the Association for Computational Linguistics, pages 86–96, Berlin, Germany. Association for Computational Linguistics. 
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. arXiv preprint arXiv:2104.06644. Anders Søgaard. 2020. Some languages seem easier to parse because their treebanks leak. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 2765–2770. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pretraining for language generation. In *Proceedings of* the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5926–5936. PMLR. Ashish Vaswani, Google Brain, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in neural information processing systems, pages 5998–6008. Pascal Vincent. 2010. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. *Journal of Machine Learning Research*, 11:3371–3408. Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, and Jaime Carbonell. 2020. Crosslingual Alignment vs Joint Training: A Comparative Study and A Simple Unified Framework. In 8th International Conference on Learning Representations, ICLR 2020 - Conference Track Proceedings. Guillaume Wenzek, Marie Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003– 4012. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 833–844, Hong Kong, China. Association for Computational Linguistics. Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho Jui Hsieh. 2020. Large batch optimization for deep learning: Training bert in 76 minutes. In *8th International* Conference on Learning Representations, ICLR 2020 - Conference Track Proceedings. Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. In *Proceedings of the* IEEE International Conference on Computer Vision, pages 19–27. George Kingsley Zipf. 2013. *The psycho-biology of language: An introduction to dynamic philology*. Routledge. ## A Experiment A.1 Mlm Instance A.2 Multilingual Task A.2.1 Cross-Lingual Embedding We adapt our method to two MLM instances: XLM (Lample and Conneau, 2019) and MASS (Song et al., 2019). We follow the instructions of BERT (Devlin et al., 2019) that each selected token is replaced with the probabilities (p[unchanged], p[*random*], p[*mask*]) = (0.1, 0.1, 0.8). 
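The replacement rule above can be sketched as follows. This is a generic illustration of the (p[unchanged], p[random], p[mask]) = (0.1, 0.1, 0.8) scheme rather than the exact pre-processing code, and the vocabulary handling is simplified.

```python
import random


def corrupt(tokens, selected_positions, vocab, mask_token="[MASK]"):
    """Apply the BERT-style replacement rule to the already-selected positions."""
    out = list(tokens)
    for pos in selected_positions:
        r = random.random()
        if r < 0.8:
            out[pos] = mask_token            # p[mask] = 0.8
        elif r < 0.9:
            out[pos] = random.choice(vocab)  # p[random] = 0.1
        # else: keep the original token      # p[unchanged] = 0.1
    return out
```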
XLM XLM is similar to BERT (Devlin et al., 2019) but uses text streams of an arbitrary number of sentences. Following the instruction, we randomly select 15% of the tokens from the input sentence for replacing. MASS MASS is different from XLM and BERT but similar to SpanBERT (Joshi et al., 2020), using spans to replace consecutive tokens. Given an input sentence with length N, we randomly select consecutive tokens with length N/2 for replacing. We are interested in the isomorphism of languages' embedding spaces. To investigate, we attempt MUSE⋄ tasks (Lample et al., 2018a) that measure similarities between two paired words. This test can generally evaluate the degree of the isomorphism of languages' embedding spaces. Meanwhile, as discussed in Lample and Conneau (2019); Wang et al. (2020) and our preliminary experiment, the performance of the isomorphism is potentially proportional to the performance of cross-lingual transfer learning tasks. Therefore, we treat this experiment as our *dev experiment* to search for n. Setup We configure a 12-layer Transformer encoder and use Moses tokenizer⋄ with default rules for tokenization, identical to XLM (Lample and Conneau, 2019). For fast *dev experiment*, we employ fastBPE⋄ to learn 60K BPE (Sennrich et al., 2016b) from concatenated corpora with a sampling criterion from (Lample and Conneau, 2019) and pre-train the model on 2 languages instead of 80K BPE and 15 languages in the reported work. George Kingsley Zipf. 1949. Human behavior and the principle of least effort: an introd. to human ecology. Training In the pre-training phase, we pre-train the model on Wikipedia dumps⋄ of the two languages for 400K steps. After pre-training, we extract the words' embeddings required by the test set from the embedding space of the model. For words split into 2+ sub-tokens, we average all the extracted embeddings of sub-tokens. We then evaluate paired embeddings in cosine similarity. ## A.2.2 Unmt UNMT (unsupervised neural machine translation) (Lample and Conneau, 2019; Lample et al., 2018b; Song et al., 2019; Liu et al., 2020) tackles bilingual translation (Bahdanau et al., 2015; Vaswani et al., 2017) on non-parallel bilingual corpora without having access to any parallel sentence. In other words, there is no supervision for translation. The model requires pre-training to obtain some initial multilingual knowledge for decent performance. Setup We configure an identical Transformer model to XLM (Lample and Conneau, 2019) and MASS (Song et al., 2019), which has 6 layers in both the encoder and decoder using default configurations. We consider multiple families of languages. Specifically, we consider similar language pairs {De, Ro} ↔ En, using the same dataset as previous works (Lample and Conneau, 2019). The dataset consists of monolingual corpora {*De, En*} from WMT 2018⋄ (Bojar et al., 2018) including all available *NewsCrawl* datasets from 2007 through 2017 and monolingual corpora Ro from WMT 2016⋄ (Bojar et al., 2016) including *NewsCrawl* 2016. We report the performance for {De, Ro} ↔ En on *newstest2016*. Meanwhile, we share the FLoRes⋄ (Guzmán et al., 2019) task to evaluate a dissimilar language pair Ne ↔ *English* (Nepali). For tokenization, we use the Moses tokenizer⋄ developed by Koehn et al. (2007) with default rules except for Ne that is tokenized by Indic-NLP Library⋄. We employ fastBPE⋄ to learn 60K BPE (Sennrich et al., 2016b) from concatenated corpora of paired languages, using the same sampling criteria in Lample and Conneau (2019). 
We use learnable language embeddings and position embeddings. Training In MLM-GC pre-training, the model is pre-trained around 400K iterations on only monolingual corpora of different languages. In the training phase, we use Adam optimizer (Kingma and Ba, 2015) with parameters β1 = 0.9,β2 = 0.997 and ϵ = e − 9, and a dynamic learning rate with warm_up = 8000 and learning_*rate* ∈ (0, 7e− 4]) (Vaswani et al., 2017) is employed. We set dropout regularization with a drop rate *rate* = 0.1 and label smoothing with *gamma* = 0.1 (Mezzini, 2018). On-the-fly back-translation (Sennrich et al., 2016a) (the inference mode of the model) performs to generate synthetic parallel sentences that can be used for training of translation as NMT (neural machine translation) is trained on genuine parallel sentences in a supervised manner. Meanwhile, UNMT learns an objective of denoising language modeling (Vincent, 2010) to maintain language knowledge in the training phase except for MASS. After around 400K iterations, we report BLEU computed by *multi-BLEU.perl*⋄ and *scareBLEU*⋄ with default rules, according to baseline models. In conclusion, in pre-training, we only have the objective of MLM-GC, and in training, we have the two objectives: 1) denoising language modeling for XLM or MASS itself and 2) translation (i.e., NMT), where the translation objective is finished by using synthetic pairs sentences from on-the-fly back-translation. ## A.2.3 Cross-Lingual Classification We experiment with XNLI⋄ (Conneau et al., 2020b), a general cross-lingual classification task on 15 languages (including English) under the cross-lingual transfer setting. The model takes in two input sentences and is required to classify into one of the three labels: entailment, contradiction, and neutral. The model is fine-tuned on the English dataset and then attempts zero-shot classification for other languages. Setup Following the previous work5(Lample and Conneau, 2019), we use raw sentences including 15 XNLI languages from Wikipedia dumps downloaded by WikiExtractor⋄. Sentences in different languages are sampled with the method of Lample and Conneau (2019). The model configuration and preprocessing are identical to XLM that we use a 12-layer transformer encoder and 80K BPE. For the classification objective, we deploy a linear classification layer on top of the encoder. To tokenize {*zh, th*}, we use Stanford Word Segmenter⋄ and PyThaiNLP⋄ respectively. For the others, we use the Moses tokenizer⋄ with default rules. Similar to the Cross-lingual Embedding experiment, we use fastBPE⋄ and the sampling strategy to learn BPE. Fine-tuning After pre-training on the corpora, we deploy a randomly initialized linear classifier and fine-tune the encoder and the linear classifier on the En NLI dataset with mini-batch size 16. We 5In the literature, this setup also refers to XLM-15. use Adam optimizer (Kingma and Ba, 2015) with lr = 5e − 4 and linear decay of lr. After finetuning, we make zero-shot prediction for the other 14 languages. We use categorical cross-entropy with three labels: entailment, contradiction, and neutral. ## A.2.4 Cross-Lingual Question Answering We consider the MLQA⋄ (Lewis et al., 2020b) dataset for a cross-lingual question answering task. Given a question and a passage containing the answers, the aim is to predict the answer text span in the passage. This task requires identifying the answer to a question as a span in the corresponding paragraph. 
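As an aside on the optimisation schedule mentioned in the UNMT training paragraph above (warm_up = 8000, learning rate in (0, 7e−4]), the following is a minimal sketch of the inverse-square-root warmup rule of Vaswani et al. (2017); the model dimension and the way the 7e−4 cap is applied are illustrative assumptions, not the exact constants of the released XLM/MASS code.

```python
def noam_lr(step, warmup=8000, d_model=1024, max_lr=7e-4):
    """Inverse-square-root schedule with linear warmup (Vaswani et al., 2017).
    The rate grows linearly for `warmup` steps, peaks at step == warmup,
    then decays as 1/sqrt(step). `d_model` and `max_lr` are assumed values."""
    step = max(step, 1)
    lr = d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
    return min(lr, max_lr)

for s in (1, 4000, 8000, 100000):
    print(s, f"{noam_lr(s):.2e}")
```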
The evaluation data for English and 6 other languages are obtained by automatically mining target language sentences that are parallel to sentences in English from Wikipedia, crowdsourcing annotations in English, and translating the question and aligning the answer spans in the target languages. Similar to the cross-lingual classification task, the model is fine-tuned on the English dataset and then attempts zero-shot prediction for the other languages.

Setup The setup is similar to the experiment of cross-lingual classification.

Fine-tuning We follow the instructions of SQuAD from BERT (Devlin et al., 2019), fine-tuning the model with a span extraction loss on the English dataset. We use the Adam optimizer (Kingma and Ba, 2015) with lr = 5e−5 and linear decay of lr. Meanwhile, as suggested, we fine-tune the model on the SQuAD v1.1 (Rajpurkar et al., 2016) dataset and then make zero-shot predictions for the 7 languages of MLQA. Given a sequence $T$, we only introduce a start vector $S \in \mathbb{R}^{hidden}$ and an end vector $E \in \mathbb{R}^{hidden}$ during fine-tuning. The probability of word $i$ being the start of the answer span is computed as a dot product between $T_i$ and $S$ followed by a *softmax* over all of the words in the sequence: $p_i = \frac{\exp(S \cdot T_i)}{\sum_{k \in T} \exp(S \cdot T_k)}$. Similarly, we can compute the end of the span. The score of a candidate span from position $i$ to position $j$ is defined as $S \cdot T_i + E \cdot T_j$, and the maximum scoring span with $j \geq i$ is used as the prediction.

## B Additional And Supportive Result

## B.1 Pre-Training For Monolingual Task

Although we derive our method from the observation of multilingual models, MLM-GC pre-training is substantially better than MLM pre-training. We provide further experiments on monolingual tasks including SQuAD v1&v2 (Rajpurkar et al., 2016).

Setup For this monolingual task, our configuration is identical to 12-base-ALBERT (Lan et al., 2020). Specifically, we set the model dimension, word embedding dimension, and the maximum number of layers to 768, 128, and 12. As recommended, we generate a masked span for the MLM targets using the random strategy from Joshi et al. (2020), and we use the LAMB optimizer⋄ with a learning rate of 0.00176 (You et al., 2020) instead of the Adam optimizer. Following the instructions, we pre-train models on BooksCorpus⋄ (Zhu et al., 2015) and English Wikipedia⋄ (Devlin et al., 2019) for 140k steps.

Fine-tuning Similar to the cross-lingual question answering task, we fine-tune the pre-trained model on SQuAD (v1.1 and v2.0)⋄ (Rajpurkar et al., 2016, 2018).

Result Table 7 shows that MLM-GC pre-training is substantially better than MLM pre-training when pre-training 12-base-ALBERT for monolingual tasks. These observations confirm the effectiveness of MLM-GC pre-training on monolingual tasks.

## B.2 Impact Of Tokenization Method

We are interested in how the tokenization method affects the performance because it potentially affects the token-token co-occurrence counts. For evaluation, we use all the configurations in UNMT and additionally configure a word-level vocabulary for the model. The word-level vocabulary has the same number of tokens as the BPE vocabulary. Table 8 shows that our method can work with different tokenization methods. Our method generally improves the performance, regardless of the difference between the two baseline models in the same configuration.

## C Reimplementation

We compare our reimplementation with reported results in Table 9.

Table 8: Impact of Tokenization Method. ⋆ denotes reimplemented models.
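For reference, the span-selection rule described in the fine-tuning paragraph of A.2.4 above (score $S \cdot T_i + E \cdot T_j$, maximised over $j \geq i$) can be sketched as follows; the logits are toy values, and the span-length cap is an added assumption rather than part of the described setup.

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=30):
    """Return (i, j) maximising start_logits[i] + end_logits[j] with i <= j < i + max_len."""
    best, best_score = (0, 0), -np.inf
    for i in range(len(start_logits)):
        for j in range(i, min(i + max_len, len(end_logits))):
            score = start_logits[i] + end_logits[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best

start = np.array([0.1, 2.3, 0.2, -1.0, 0.5])   # toy S . T_i scores
end   = np.array([-0.5, 0.1, 1.8, 0.3, 0.0])   # toy E . T_j scores
print(best_span(start, end))                    # -> (1, 2)
```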
| Language pair | De ↔ En | | |-------------------------------------|-----------|------| | multi-BLEU.perl⋄ with default rules | | | | XLM(Lample et al., 2018b) reported | 34.3 | 26.4 | | XLM(Lample et al., 2018b) ⋆ | 33.9 | 26.3 | | XLM + OURS | 35.8 | 27.8 | | multi-BLEU.perl⋄ with default rules | | | | MASS(Song et al., 2019) reported | 35.2 | 28.3 | | MASS(Song et al., 2019)⋆ | 35.0 | 28.0 | | MASS + OURS | 36.5 | 28.7 | Table 9: Performance of UNMT. Baseline models (⋆) are reimplemented with our configurations. ## D Alternative In MLM pre-training, when wt at the position t is replaced by the artificial masking token [M]t, the output distribution for wtis obtained by applying a pre-softmax linear transformation O ∈ Rd×Vfrom the final hidden state or the contextualized representation H[M]t to the output vocabulary size V , followed by a *sof tmax* operation which generates an output matrix normalized over its rows. Specifically, Q[M]twt = exp(HT [M]t Owt ) PV k=1 exp(HT [M]t Owk ) is the model for the probability of wtin the context of H[M]t , where Owt and Owk are vectors factorized from O, i.e., self-recognizing. In this way, the probability of wn in the context H[M]t is similar to Q[M]twt in the global regression model. Specifically, for wn, Q[M]twt could be extended to: $$Q_{[\mathcal{M}]_{t}w_{n}}=\frac{e x p(H_{[\mathcal{M}]_{t}}^{T}O_{w_{n}})}{\sum_{k=1}^{V}e x p(H_{[\mathcal{M}]_{t}}^{T}O_{w_{k}})}.\qquad(5)$$ For all the neighboring tokens wt±n of the input sentence at position [t−n, ..., t)∪(*t, ..., t*+n], i.e., excluding position t, we have the model Q[M]twt±n . Then, we employ the new global log-bilinear regression model in MLM pre-training. Formally, given the factorized Owt±n and Xwtwt±n from O and X respectively, we have the model: $$\begin{split}\mathcal{L}_{GC}&=\frac{1}{2n}\sum_{n}f(X_{wt_{t}\pm n})\\ &(\frac{H_{[M]_{t}}^{T}O_{wt\pm n}}{\sqrt{d}}-\log X_{wt_{t}w_{t}\pm n})^{2},\end{split}\tag{6}$$ $${\mathrm{model~dimension}}.$$ where d is the model dimension. In Table 10 and Table 11, we show the experimental results (also see our previous revision https://openreview.net/forum?id=DswOSXvLfuy). In conclusion, the presented method Eq. 3 slightly outperforms the alternative Eq. 6 on sentence-level tasks. We explain that the alternative involves neighboring embeddings in the objective, which directly improves the quality of cross-lingual embeddings. Compared to that, the presented method of the main paper considers contextualized representations of the masked tokens and their neighboring tokens, which is better for cross-lingual transfer. ## E Source We list all the links of dataset, tools, and other sources in Table 12. Note that for multilingual tasks, datasets can be downloaded from the XTREME link except for UNMT and crossembeddings. | Model | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur | Avg | |-----------------------------------------------------------------------------------------------------------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|-------| | baseline | 73.7 | 67.7 | 68.7 | 67.7 | 68.9 | 67.9 | 65.4 | 64.2 | 64.8 | 66.4 | 64.1 | 65.8 | 64.1 | 55.7 | 58.4 | 65.6 | | mBERT | 82.1 | 73.8 | 74.3 | 71.1 | 66.4 | 68.9 | 69.0 | 61.6 | 64.9 | 69.5 | 55.8 | 69.3 | 60.0 | 50.4 | 58.0 | 66.3 | | 12-layer Transformer encoder, 80K BPE, and 15 XNLI languages from Wikipedia dumps downloaded by WikiExtractor⋄. 
| | | | | | | | | | | | | | | | | | XLM | 83.2 | 76.5 | 76.3 | 74.2 | 73.1 | 74.0 | 73.1 | 67.8 | 68.5 | 71.2 | 69.2 | 71.9 | 65.7 | 64.6 | 63.4 | 71.5 | | XLM + PMI-Masking ⋆ | 84.1 | 78.4 | 77.8 | 76.6 | 75.1 | 75.5 | 74.9 | 69.7 | 70.8 | 73.0 | 70.7 | 73.4 | 68.1 | 66.1 | 65.3 | 73.3 | | XLM + OURSv2 | 84.9 | 78.6 | 78.7 | 77.1 | 76.2 | 77.0 | 75.2 | 72.5 | 72.6 | 75.1 | 73.0 | 74.2 | 68.2 | 67.2 | 67.1 | 74.5 | | + Parallel Sentences from OPUS⋄ | | | | | | | | | | | | | | | | | | XLM + TLM | 85.0 | 78.7 | 78.9 | 77.8 | 76.6 | 77.4 | 75.3 | 72.5 | 73.1 | 76.1 | 73.2 | 76.5 | 69.6 | 68.4 | 67.3 | 75.1 | | XLM + TLM + OURSv2 | 85.0 | 79.9 | 79.2 | 78.5 | 77.1 | 78.0 | 76.4 | 73.1 | 74.0 | 76.7 | 73.9 | 76.8 | 70.2 | 68.8 | 67.9 | 75.5 | Table 10: Results of cross-lingual classification on XNLI. ⋆ is reimplemented. | Model | en | es | de | ar | hi | vi | zh | Avg | |---------------------------------------------------------------------------------------------------------------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------| | mBERT-102 | 77.7 / 65.2 | 64.3 / 46.6 | 57.9 / 44.3 | 45.7 / 29.8 | 43.8 / 29.7 | 57.1 / 38.6 | 57.5 / 37.3 | 57.7 / 41.6 | | 12-layer Transformer encoder, 80K BPE, and and 15 XNLI languages from Wikipedia dumps downloaded by WikiExtractor⋄. | | | | | | | | | | XLM | 74.9 / 62.4 | 68.0 / 49.8 | 62.2 / 47.6 | 54.8 / 36.3 | 48.8 / 27.3 | 61.4 / 41.8 | 61.1 / 39.6 | 61.6 / 43.5 | | XLM + PMI-Masking ⋆ | 76.0 / 63.9 | 69.2 / 50.2 | 64.1 / 48.0 | 55.8 / 38.0 | 49.8 / 28.5 | 62.9 / 42.2 | 63.3 / 40.5 | 63.1 / 44.4 | | XLM + OURSv2 | 77.5 / 65.6 | 71.4 / 50.9 | 65.3 / 48.6 | 57.1 / 39.6 | 51.1 / 29.9 | 64.1 / 43.0 | 64.5 / 41.7 | 64.4 / 45.7 | Table 12: Links of source. | Item | Links | |---------------------------------------------|----------------------------------------------------------------------------------------| | WMT 2016 | http://www.statmt.org/wmt16/translation-task.html | | WMT 2018 | http://www.statmt.org/wmt18/translation-task.html | | FLoRes | https://github.com/facebookresearch/flores | | Indic-NLP Library | https://github.com/anoopkunchukuttan/indic_nlp_library | | XLM | https://github.com/facebookresearch/XLM | | multi-BLEU.perl | https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-BLEU.perl | | Moses tokenizer | https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl | | Kytea | http://www.phontron.com/kytea/ | | XTREME | https://github.com/google-research/xtreme | | fastBPE | https://github.com/glample/fastBPE | | MUSE | https://github.com/facebookresearch/MUSE | | Cambridge Dictionary | https://dictionary.cambridge.org/ | | SemEval'17 | https://alt.qcri.org/semeval2017/task2/ | | WikiExtractor | https://github.com/attardi/wikiextractor | | PyThaiNLP | https://github.com/PyThaiNLP/pythainlp | | Stanford Word Segmenter Chang et al. (2008) | https://nlp.stanford.edu/software/segmenter.html | | Tensor2Tensor | https://github.com/tensorflow | | HuggingFace | https://huggingface.co | | ORPUS, Wikipedia v1.0 | https://opus.nlpl.eu | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 line 75 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. 
## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 4 And 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 3.3 line 320 and section 4.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3.3 and Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
audibert-etal-2023-low
Low-Rank Updates of pre-trained Weights for Multi-Task Learning
https://aclanthology.org/2023.findings-acl.476
Multi-Task Learning used with pre-trained models has been quite popular in the field of Natural Language Processing in recent years. This framework remains still challenging due to the complexity of the tasks and the challenges associated with fine-tuning large pre-trained models. In this paper, we propose a new approach for Multi-task learning which is based on stacking the weights of Neural Networks as a tensor. We show that low-rank updates in the canonical polyadic tensor decomposition of this tensor of weights lead to a simple, yet efficient algorithm, which without loss of performance allows to reduce considerably the model parameters. We investigate the interactions between tasks inside the model as well as the inclusion of sparsity to find the best tensor rank and to increase the compression rate. Our strategy is consistent with recent efforts that attempt to use constraints to fine-tune some model components. More precisely, we achieve equivalent performance as the state-of-the-art on the General Language Understanding Evaluation benchmark by training only 0.3% of the parameters per task while not modifying the baseline weights.
## Low-Rank Updates Of Pre-Trained Weights For Multi-Task Learning Alexandre Audibert†, Massih-Reza Amini†, Konstantin Usevich‡, and Marianne Clausel⋆ †Université Grenoble Alpes, Computer Science Laboratory, Grenoble, France {firstname.lastname}@univ-grenoble-alpes.fr ‡CNRS, Université de Lorraine, CRAN, Nancy, France konstantin.usevich@univ-lorraine.fr ⋆Université de Lorraine, Institut Elie Cartan de Lorraine, Nancy, France marianne.clausel@univ-lorraine.fr ## Abstract Multi-Task Learning used with pre-trained models has been quite popular in the field of Natural Language Processing in recent years. This framework remains still challenging due to the complexity of the tasks and the challenges associated with fine-tuning large pre-trained models. In this paper, we propose a new approach for Multi-task learning which is based on stacking the weights of Neural Networks as a tensor. We show that low-rank updates in the canonical polyadic tensor decomposition of this tensor of weights lead to a simple, yet efficient algorithm, which without loss of performance allows to reduce considerably the model parameters. We investigate the interactions between tasks inside the model as well as the inclusion of sparsity to find the best tensor rank and to increase the compression rate. Our strategy is consistent with recent efforts that attempt to use constraints to fine-tune some model components. More precisely, we achieve equivalent performance as the state-of-the-art on the General Language Understanding Evaluation benchmark by training only 0.3% of the parameters per task while not modifying the baseline weights. ## 1 Introduction Multi-task learning (MTL) aims in exploiting simultaneously similarities and differences between related tasks (Caruana, 1997). Compared to training the models separately, this can lead to enhance learning efficiency and prediction accuracy for the task-specific models. In addition to certain similarities to transfer learning and data augmentation, MTL has a regularizing effect in practice (Caruana, 1997). MTL also has the advantage of storage efficiency, which is advantageous for devices with less memory. On the other hand, MTL performance may be impacted by task covariance (Wu et al., 2020), various loss functions, and difference between dataset sizes (Pilault et al., 2021). Additionally, there are still some MTL-related limitations, including negative transfer, in which learning two tasks at once lowers the model's performance on both tasks (Crawshaw, 2020; Wu et al., 2020), and catastrophic forgetting in which one some tasks features can be overlooked during the training process (Serra et al., 2018). In this study, we aim to decrease the amount of language model trainable parameters in the MTL framework. To achieve this, we suggest stacking weight matrices corresponding to several tasks in a 3-way tensor and performing a tensor low-rank update, which is similar to the LoRA technique in the single-task case (Hu et al., 2021). One of the main advantages of the tensor approach is that it allows for splitting the weight updates into shared and task-specific parts. Moreover we extend our approach to the bias term which showed remarkable results in Ben Zaken et al. (2022). We test our method using the General Language Understanding Evaluation (GLUE) Benchmark. 
Thus we demonstrate that low-rank update for both matrix and bias successfully strikes a balance between preserving positive transfer and minimizing negative transfer by only training 0.3% of the initial parameters per task. We also look into how different model factors affect the way tasks interact with one another. ## 2 Related Work Multi-Task Learning for NLP. Training the Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) in hard-sharing Multi-Task Learning may be subject to negative transfer (Liu et al., 2019; Glover and Hokamp, 2019). To tackle this problem, some studies propose to use a shared hyper-network or to do conditional learning (Pilault et al., 2021; Mahabadi et al., 2021; He et al., 2022). This method consists in creating a task embedding which is used to build model's layers. Another approach to circumvent the negative transfer problem, consists to use Knowledge Distillation (KD) where single-task 7544 teachers transfer their knowledge to one multi-task student (Clark et al., 2019; Wei et al., 2021). Tensor methods. The use of tensor methods mainly focused on applying tensor approximations for compression of pretrained models (compression of fully-connected (Oseledets, 2011) and convolutional networks Lebedev et al. (2014); Kim et al. (2015)). Ren et al. (2022) utilized tensor decomposition for compressing Pre-trained Language Models and presented a formal framework with defined nomenclature to thoroughly explore tensor decomposition approaches to compress Transformerbased language models. In multi-task learning, tensor methods have been used to introduce sharing between weights across different tasks (RomeraParedes et al., 2013; Wimalawarne et al., 2014; Yang and Hospedales, 2017). Recent work considered splitting in task-agnostic (shared) and taskspecific parts. However, the cited works were mostly focused on learning compressed representation or tensor completion, mostly with so-called Tucker tensor decomposition. The weight representation in our work is much simpler and more efficient in terms of parameters: the same frozen weight matrix is shared and task-specific updates use canonical polyadic decomposition (CPD). It is compact and have an additional interpretation with shared and task-specific factors. We can even do it with tucker (see more details on CPD in Appendix A.1. ## 3 **Morris: Multi-Task Learning Based On** Low-Rank Updates Of Pre-Trained Weights In the following, we designate vectors, matrices, and tensors, respectively, with bold lowercase letters, bold capital letters, and calligraphy letters. We assume that there are T tasks and T associated datasets Di = {(x (i) j, y (i) j) | j ∈ {1*, . . . , N*i}} where Niis the size of the i th collection. We denote by lithe loss function, ϕithe specific parameters of the i th task, and Θ the shared parameters between tasks. The Multi-Task Learning objective function is: $$\begin{array}{c}{{\operatorname*{min}_{\Theta,\{\phi_{i}\}_{i=1}^{T}}L(\Theta,\{\phi_{i}\}_{i=1}^{T},\{D_{i}\}_{i=1}^{T})=}}\\ {{\sum_{i=1}^{T}\sum_{(x_{j}^{(i)},y_{j}^{(i)})\in D_{i}}l_{i}(f(\Theta,\phi_{i},x_{j}^{(i)}),y_{j}^{(i)})}}\end{array}\quad(1)$$ By adopting a low-rank tensor update for the ![1_image_0.png](1_image_0.png) weights tensor and a low-rank matrix update for the biases, we suggest extending the approaches proposed by Hu et al. (2021) and Ben Zaken et al. (2022) to multi-task learning and updating the weights and biases for several tasks concurrently. 
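To make the objective in Equation (1) concrete, a minimal multi-task training loop could look as follows. This is only an illustration of Eq. (1): the "shared model" is a toy linear layer rather than BERT, the task-sampling policy is uniform, the data are random placeholders, and everything is trained, whereas the method proposed below keeps the pre-trained weights frozen and trains only low-rank updates.

```python
import random
import torch
import torch.nn as nn

# Toy instantiation of Eq. (1): shared parameters Theta (encoder) and
# task-specific parameters phi_i (one classification head per task).
encoder = nn.Linear(16, 32)                                   # stands in for the shared model
heads = nn.ModuleList([nn.Linear(32, 2) for _ in range(3)])   # one head per task
datasets = [[(torch.randn(16), torch.tensor(0))] * 8 for _ in range(3)]
loss_fns = [nn.CrossEntropyLoss()] * 3

params = list(encoder.parameters()) + list(heads.parameters())
opt = torch.optim.AdamW(params, lr=1e-4)

for step in range(100):
    i = random.randrange(3)                                   # sampling policy (placeholder)
    x, y = random.choice(datasets[i])
    loss = loss_fns[i](heads[i](encoder(x)).unsqueeze(0), y.unsqueeze(0))
    opt.zero_grad(); loss.backward(); opt.step()
```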
## 3.1 The Proposed Framework

Our proposal is to update the output of the transformer layer for the i-th task as follows: if the dense layer is the query or value matrix, $O_i = (W + Q_i)X + b + b_i$; otherwise we have $O_i = WX + b + b_i$; where $W$ and $b$ are frozen weights of BERT, $Q_i$ and $b_i$ are the updates, and $X$, $O_i$ are respectively the input and output of the layer (Figure 1).

Weights assumption. The weight updates for $T$ tasks can be stacked in a single $d \times T \times d$ tensor, so each matrix is a slice $\mathcal{Q}_{[:,i,:]} = Q_i$ of the tensor. Then our assumption is that $\mathcal{Q}$ has low rank:
$$\mathcal{Q} = \sum_{r=1}^{r_w} \mathbf{A}_{[:,r]} \otimes \mathbf{B}_{[:,r]} \otimes \mathbf{C}_{[:,r]},$$
where $\mathbf{A}, \mathbf{C} \in \mathbb{R}^{d \times r_w}$, $\mathbf{B} \in \mathbb{R}^{T \times r_w}$ and $r_w$ represents the rank.

Bias assumption. The variation of the original bias for each task can be represented by a matrix $\hat{\mathbf{B}} \in \mathbb{R}^{d \times T}$ where the i-th column $\mathbf{b}_i$ represents the bias of task $i \in \{1, \ldots, T\}$. We assume that the matrix $\hat{\mathbf{B}}$ is a low-rank matrix and can be written as the product of two matrices $\mathbf{D} \in \mathbb{R}^{d \times r_b}$ and $\mathbf{E} \in \mathbb{R}^{r_b \times T}$, where $r_b$ represents the rank, i.e.,
$$\mathbf{b}_i = \sum_{t=1}^{r_b} \mathbf{D}_{[:,t]} \times \mathbf{E}_{[t,i]} \qquad (2)$$

## 3.2 Motivation

The straightforward MTL extension of LoRA (Hu et al., 2021) combined with BitFit (Ben Zaken et al., 2022) would be to train the same low-rank matrix $Q_i$ and bias $b_i$ for all tasks. This approach will be called **LoRA_Bitfit_MTL** in the rest of the paper. However, we argue that our approach is more flexible than **LoRA_Bitfit_MTL** because the latter is a particular case of **MORRIS** where the entries of the matrices $\mathbf{B}$, $\mathbf{E}$ are all set to 1. Moreover, our approach is quite natural because the concatenation of low-rank matrices creates a tensor with a rank at most equal to the sum of the ranks of the matrices.

## 3.3 Interpretation As Shared And Task-Specific Weights

The underlying assumptions allow the following interpretations. The slices of the weight tensor with low-rank tensor structure factorize as $Q_i = \mathbf{A} \times diag(\mathbf{B}_{[i,:]}) \times \mathbf{C}^\top$, where $diag()$ is the diagonal matrix built from a given vector. The matrices $\mathbf{A}$ and $\mathbf{C}$ are then shared between tasks, whereas the rows of $\mathbf{B}$ are task-specific parameters. Similarly, for biases the matrix $\mathbf{D}$ is shared between tasks and the columns of $\mathbf{E}$ are task-specific (Figure 1).

## 3.4 Apply L0 To Find The Optimal Rank

The rank of the tensor must be at most the sum of the ranks of the preceding matrices. Decreasing this rank will reduce the number of model parameters; however, there is no straightforward manner to fix this rank, and the bias rank is similarly tough to define. Following Louizos et al. (2018)'s work, we propose to use L0 regularisation on the rows and columns of respectively $\mathbf{B}$ and $\mathbf{E}$ to define the ranks. In this case, the binary mask $z$ associated with $\alpha$ can be estimated as:
$$\begin{array}{c}{{u\sim U(0,1)}}\\ {{s_{j}=\sigma(\log(u)-\log(1-u)+\alpha_{j})}}\\ {{\bar{s}_{j}=s_{j}\cdot(r-l)+l}}\\ {{z_{j}=\operatorname*{min}(1,\operatorname*{max}(0,\bar{s}_{j}))}}\end{array}\quad(3)$$
Based on this definition, Equation (1) can then be written in the following form:
$$\begin{array}{c}\min\limits_{\Theta,\{\phi_{i}\}_{i=1}^{T},\alpha}L(\Theta,\{\phi_{i}\}_{i=1}^{T},\{D_{i}\}_{i=1}^{T})=\\ \\ \mathbb{E}_{u}\sum\limits_{i=1}^{T}\sum\limits_{(x_{j}^{(i)},y_{j}^{(i)})\in D_{i}}l_{i}(f(\Theta,\phi_{i}\circ z,x_{j}^{(i)}),y_{j}^{(i)})\\ \\ +\lambda\sum\limits_{j=0}^{d}\sigma(\alpha_{j}-\log(\frac{-l}{d}))\end{array}\tag{4}$$
where $l$ and $d$ are two stretching constants, $\lambda$ controls the strength of the L0 regularisation, and $\sigma$ is the sigmoid function.
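As a concrete reading of Sections 3.1 and 3.3, the sketch below builds the per-task update $Q_i = \mathbf{A}\,diag(\mathbf{B}_{[i,:]})\,\mathbf{C}^\top$ and bias shift $\mathbf{b}_i = \mathbf{D}\mathbf{E}_{[:,i]}$ on top of a frozen linear layer. It is a simplified illustration, not the authors' implementation: dimensions and initialisation scales are assumptions, the hard-concrete gating of the rows of $\mathbf{B}$ used by Morris-L0 is omitted, and in the paper only the query and value matrices receive the $Q_i$ update.

```python
import torch
import torch.nn as nn

class MorrisLinear(nn.Module):
    """Frozen dense layer with a shared + task-specific low-rank update (Secs. 3.1 and 3.3).
    Q_i = A diag(B[i]) C^T  (A, C shared; row i of B task-specific)
    b_i = D E[:, i]         (D shared; column i of E task-specific)"""

    def __init__(self, weight, bias, n_tasks, r_w=8, r_b=4):
        super().__init__()
        d_out, d_in = weight.shape
        self.weight = nn.Parameter(weight, requires_grad=False)  # frozen W
        self.bias = nn.Parameter(bias, requires_grad=False)      # frozen b
        self.A = nn.Parameter(torch.randn(d_out, r_w) * 0.02)    # shared
        self.C = nn.Parameter(torch.randn(d_in, r_w) * 0.02)     # shared
        self.B = nn.Parameter(torch.zeros(n_tasks, r_w))         # task-specific, init at 0
        self.D = nn.Parameter(torch.randn(d_out, r_b) * 0.02)    # shared
        self.E = nn.Parameter(torch.zeros(r_b, n_tasks))         # task-specific, init at 0

    def forward(self, x, task):
        q_task = self.A @ torch.diag(self.B[task]) @ self.C.T    # Q_i, rank <= r_w
        b_task = self.D @ self.E[:, task]                        # b_i, rank-r_b bias shift
        return nn.functional.linear(x, self.weight + q_task, self.bias + b_task)

layer = MorrisLinear(torch.randn(32, 16), torch.zeros(32), n_tasks=4)
print(layer(torch.randn(2, 16), task=1).shape)   # -> torch.Size([2, 32])
```

Initialising $\mathbf{B}$ and $\mathbf{E}$ at zero follows the choice described in the appendix, so that training starts from the unmodified pre-trained layer.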
More details of this regularisation can be found in Louizos et al. (2018); Guo et al. (2021).

## 4 Experiments And Analysis

We shall now present our experimental results.

## 4.1 Implementation Details

We use BERT as the base model in Morris, which we implemented using PyTorch1. Furthermore, we use a fully connected layer on the [CLS] token as the decoder for each task, with the cross-entropy loss and the mean squared error for the classification and the regression tasks, respectively. The values of the hyperparameters were fixed to the ones in LoRA (Hu et al., 2021). We select a batch size of 32 for all experiments, a learning rate in {4e−4, 1e−4, 5e−5} for single-task approaches and {1e−3, 4e−4, 1e−4, 5e−5} for multi-task approaches, and a dropout equal to 0.1, with AdamW (Loshchilov and Hutter, 2017) as the optimizer. For single-task approaches the rank was set to 8, and to {8, 16, 32, 64} for **LoRA_BitFit_MTL**. For Morris the rank of the bias was set to 4 in all experiments and the rank of the tensor corresponding to the weights to 64. For the L0 regularisation, λ was searched over {1e−5, 5e−6, 2e−6}, which corresponds to sparsity rates of {60%, 40%, 20%}. In Multi-Task Learning, one of the major influencing factors is the choice of the data sampling policy (Glover and Hokamp, 2019). We picked the same one as Mahabadi et al. (2021) and the same number of training steps, equal to $2^{18}$, since our objective is not to research the impacts of the sampling policy. In our experiments, we did a short pre-training of 10000 steps with a learning rate equal to 4e−4; after that, all αj lower than 0.5 were pushed to 0 and the others were pushed to 1. The pseudo-code of our approach is provided in Appendix A.6.

1https://pytorch.org

| Model | Total Params | Trained params/task | QNLI | RTE | QQP | MNLIm | MNLImm | SST-2 | MRPC | COLA | STSB | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *Single Task* | | | | | | | | | | | | |
| BERT[a] | x9 | 100% | 90.5 | 66.4 | 71.2 | 84.6 | 83.4 | **93.5** | 88.9 | 52.1 | 85.8 | 79.6 |
| Bitfit[b] | x1.008 | **0.09%** | 89.7 | 65.5 | 67.8 | 80.8 | 80.9 | 92.4 | 87.4 | 47.2 | **87.6** | 77.7 |
| *Multi-task* | | | | | | | | | | | | |
| BERT MTL[c] | x1 | 11.1% | 90.5 | 74.5 | 70.4 | 83.5 | 83.1 | 93.1 | 88.0 | 48.5 | 80.6 | 79.1 |
| BERT MTL[d] | x1 | 11.1% | 89.3 | **76.6** | 70.8 | 84.0 | 83.4 | 93.4 | 86.7 | 51.2 | 83.6 | 79.9 |
| PALs[d] | x1.13 | 12.5% | 90.0 | 76.0 | **71.5** | 84.3 | 83.5 | 92.6 | 88.7 | 51.2 | 85.8 | 80.4 |
| CA-MTL[d] | x1.12 | 5.6% | 90.5 | 76.4 | 69.2 | **85.9** | 85.8 | 93.2 | 88.6 | **53.1** | 85.3 | **80.9** |
| *Our approach* | | | | | | | | | | | | |
| Morris† | x1.024 | 0.27% | 91.1 | 73.7 | 70.6 | 83.8 | 83.5 | 92.8 | **90.2** | 52.0 | 85.8 | 80.4 |
| Morris L0† | x1.014 | 0.16% | **91.6** | 73.7 | 70.5 | 84.1 | 83.1 | 92.1 | 89.6 | 49.9 | 86.3 | 80.1 |

Table 1: Comparison with single-task and multi-task baselines on the GLUE test set.

## 4.2 Metrics And Baselines

We considered the General Language Understanding Evaluation (GLUE) benchmark in our experiments (benchmark details are given in Appendix A.4). As metrics, we considered standard measures, namely Matthews Correlation for COLA, Spearman Correlation for STS-B, F1 score for MRPC/QQP, and accuracy for the remaining tasks. As baselines, we compared Morris to our implementations of the following approaches: **LoRA** (Hu et al., 2021), Lora_BitFit, which combines **LoRA** and **BitFit** (Hu et al., 2021; Ben Zaken et al., 2022), as well as LoRA_BitFit_MTL presented above. All experiments are done on 3 seeds and the results are the average of the performances. For this part, we chose not to use the online test set but split each dev set into a dev/test set. We also compared Morris to single-task models, BERT (Devlin et al., 2018) and **Bitfit** (Ben Zaken et al., 2022), as well as **multi-task models**: two extensions of **BERT** to this setting (Glover and Hokamp, 2019), PALs (Stickland and Murray, 2019) and CA-MTL (Pilault et al., 2021).
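As an aside on the metrics listed above, they can be computed with standard libraries; the sketch below uses scikit-learn and SciPy, which is an assumption about tooling (the official GLUE server applies its own evaluation scripts), so it is only indicative.

```python
from scipy.stats import spearmanr
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

def glue_metric(task, y_true, y_pred):
    """Metric per GLUE task as listed in Section 4.2; accuracy for the remaining tasks."""
    if task == "cola":
        return matthews_corrcoef(y_true, y_pred)
    if task == "stsb":
        return spearmanr(y_true, y_pred)[0]
    if task in ("mrpc", "qqp"):
        return f1_score(y_true, y_pred)
    return accuracy_score(y_true, y_pred)

print(glue_metric("mrpc", [1, 0, 1, 1], [1, 0, 0, 1]))  # F1 = 0.8
```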
This comparison is done on the online test set; the best model over three seeds was kept for the comparison.

## 4.3 Results

We first compare our approach with **LoRA** and its extensions. Performances are shown in Table 2. As a result, the average performance appears to be improved by a factor of 0.5 when the **LoRA** and **Bitfit** techniques are combined. Moreover, this approach seems to be efficient in a Multi-task Learning setting, as **LoRA_BitFit_MTL** increases the general performance by 0.6. Finally, Morris outperforms all our baselines. In addition, the use of the L0 regularisation enables the model to decrease the number of parameters by a factor of 0.6 with no loss of performance. Given that all approaches employ training parameters of the same order of magnitude, comparisons in this situation are straightforward. In a more general case, when Morris is compared to the other methods in Table 1, this is not the case. We first notice that Morris performs better than all single-task approaches. Furthermore, our approach trains the model with fewer parameters than the other approaches, by a higher factor in the multi-target case. Only **CA-MTL** (Pilault et al., 2021) seems to be competitive with Morris. We justify this by pointing out that, in contrast to our sampling approach, the sampling strategy used in (Pilault et al., 2021) is highly extensive. In the general case, our approach is equivalent or better than most of the baselines in terms of average performance.

| Model | Total Params | Trained params/task | Avg |
|---|---|---|---|
| *Single Task* | | | |
| LoRA | x1.024 | 0.27% | 80.7 |
| Lora_BitFit | x1.03 | 0.34% | 81.2 |
| *Multi Task* | | | |
| LoRA_BitFit_MTL | x1.022 | 0.246% | 81.8 |
| Morris | x1.024 | 0.27% | 82.2 |
| Morris L0 | x1.014 | 0.16% | 82.4 |

Table 2: Comparison of Morris with LoRA and its extensions (average score on our dev/test split).

## 4.4 Interaction Between Tasks

We assume that tasks are similar if their weight variations are similar. As a measure, we compare biases and slices using the cosine similarity:
$$S^{bias}_{[i,j]}=\frac{\langle\hat{B}_{[i,:]},\hat{B}_{[j,:]}\rangle}{\|\hat{B}_{[i,:]}\|\|\hat{B}_{[j,:]}\|},\quad S^{weight}_{[i,j]}=\frac{\langle\mathcal{Q}_{[:,i,:]},\mathcal{Q}_{[:,j,:]}\rangle}{\|\mathcal{Q}_{[:,i,:]}\|\|\mathcal{Q}_{[:,j,:]}\|}\tag{5}$$
We will explore the weights (wq) and (bm2) due to their significant variance in light of prior works (Ben Zaken et al., 2022; Hu et al., 2021). For this, we examine the relationship between Morris and Morris with the L0 regularization. In order to analyze broader interactions, we create NB similarity matrices using Equation (5). We then average these NB similarity matrices. These findings are presented in Figure 2, where it is clear that the diagonal block reflecting the kind of task has the greatest similarity score. Additionally, the coefficients are not close to 1, indicating that task-specific weights enable successful task differentiation. The bias similarity seeks to distinguish the **CoLA** and **SST-2** tasks from the other tasks, which are known to be uncorrelated with one another; regularization reduces task similarity.

## 5 Conclusion

In this paper, we presented a novel method for multi-task learning that relies on stacking the neural network weights into a tensor.
We demonstrated that low-rank updates in the conventional polyadic tensor decomposition of this tensor of weights result in an efficient technique that allows for a significant reduction in model parameters without sacrificing performance. On the GLUE Benchmark, we showed that our proposed approach successfully ![4_image_1.png](4_image_1.png) achieves a compromise between maintaining positive transfer and reducing negative transfer by only using 0.3% of the initial model's parameters. ## 6 Limitations The drawbacks of our method are the same as those of **LoRA**: it is tricky to batch inputs to many tasks with varying A and B in a single forward pass, and the rank may be greater for tasks that are more challenging. Moreover, we believe that weights obtained during a single task may be used for a better initialisation. Finally, the use of a different sampling policy on a different dataset may also be appropriate, however this choice is not obvious. ## 7 Acknowledgement This work was supported by the ANR (Agence Nationale de Recherche) grants Lawbot (ANR-20- CE38-0013) and LeaFleT (ANR-19-CE23-0021). ## References Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1–9, Dublin, Ireland. Association for Computational Linguistics. Rich Caruana. 1997. Multitask learning. *Machine* learning, 28(1):41–75. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055. Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le. 2019. BAM! born-again multi-task networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5931–5937, Florence, Italy. Association for Computational Linguistics. Michael Crawshaw. 2020. Multi-task learning with deep neural networks: A survey. arXiv preprint arXiv:2009.09796. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In *Machine Learning Challenges Workshop*, pages 177–190. Springer. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *In North American Association for Com- putational Linguistics (NAACL)*. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, and Hungyi Lee. 2022. AdapterBias: Parameter-efficient token-dependent representation shift for adapters in NLP tasks. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 2608–2621, Seattle, United States. Association for Computational Linguistics. John Glover and Chris Hokamp. 2019. Task selection policies for multitask learning. Demi Guo, Alexander Rush, and Yoon Kim. 2021. Parameter-efficient transfer learning with diff pruning. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4884–4896, Online. 
Association for Computational Linguistics. Yun He, Huaixiu Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi Aribandi, Zhe Zhao, YaGuang Li, Zhao Chen, Donald Metzler, Heng-Tze Cheng, and Ed H. Chi. 2022. Hyperprompt: Prompt-based taskconditioning of transformers. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. Shankar Iyer, Nikhil Dandekar, and Kornel Csernai. 2017. First quora dataset release: Question pairs. Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. 2015. Compression of deep convolutional neural networks for fast and low power mobile applications. *In International* Conference on Learning Representations. Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. 2014. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. *Computer Science*. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In *Thirteenth International Conference on the Principles of* Knowledge Representation and Reasoning. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487–4496, Florence, Italy. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. Christos Louizos, Max Welling, and Diederik P. Kingma. 2018. Learning sparse neural networks through l0 regularization. Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, and James Henderson. 2021. Parameterefficient multi-task fine-tuning for transformers via shared hypernetworks. In ACL. I. V. Oseledets. 2011. Tensor-train decomposition. volume 33, pages 2295–2317. Jonathan Pilault, Amine Elhattami, and Christopher Joseph Pal. 2021. Conditionally adaptive multitask learning: Improving transfer learning in nlp using fewer parameters & less data. International Conference on Representation Learning - ICLR, abs/2009.09139. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Yuxin Ren, Benyou Wang, Lifeng Shang, Xin Jiang, and Qun Liu. 2022. Exploring extreme parameter compression for pre-trained language models. Bernardino Romera-Paredes, Hane Aung, Nadia Bianchi-Berthouze, and Massimiliano Pontil. 2013. Multilinear multitask learning. In *Proceedings of the* 30th International Conference on Machine Learning, volume 28 of *Proceedings of Machine Learning Research*, pages 1444–1452, Atlanta, Georgia, USA. PMLR. Joan Serra, Didac Suris, Marius Miron, and Alexandros Karatzoglou. 2018. Overcoming catastrophic forgetting with hard attention to the task. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning* Research, pages 4548–4557. PMLR. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 conference on empirical methods in natural language processing*, pages 1631–1642. Asa Cooolicyer Stickland and Iain Murray. 2019. BERT and pals: Projected attention layers for efficient adaptation in multi-task learning. *CoRR*, abs/1902.02671. 
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of the* 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2018. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471. Tianwen Wei, Jianwei Qi, and Shenghuan He. 2021. A flexible multi-task model for BERT serving. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Kishan Wimalawarne, Masashi Sugiyama, and Ryota Tomioka. 2014. Multitask learning meets tensor factorization: task imputation via convex optimization. In *Advances in Neural Information Processing Systems*, volume 27. Curran Associates, Inc. Sen Wu, Hongyang R Zhang, and Christopher Ré. 2020. Understanding and improving information transfer in multi-task learning. In *International Conference on* Representation Learning - ICLR. Yongxin Yang and Timothy M. Hospedales. 2017. Deep multi-task representation learning: A tensor factorisation approach. In *International Conference on Learning Representations*. ## A Appendix A.1 Cp Decomposition Tensor compression can be carried out using the potent technique of canonical polyadic (CP) decomposition. The aim here is to approximate an N-way tensor where N ⩾ 3 by low-rank tensor, that can be written as a sum of rank-one tensors. For the sake of presentation and without loss of generality, let N = 3 and X ∈ R I1×I2×I3 be a 3-way tensor. This tensor has rank one if and only if there exists three vectors a ∈ R I1, b ∈ R I2 and c ∈ R I3 such that: $${\mathcal{X}}_{o n e s}=\mathbf{a}\otimes\mathbf{b}\otimes\mathbf{c}$$ X*ones* = a ⊗ b ⊗ c (6) where ⊗ is the tensor (outer) product operation. A general CP representation of a 3-way tensor X is of the form: $${\mathcal{X}}=\sum_{r=1}^{R}\mathbf{a}^{(r)}\otimes\mathbf{b}^{(r)}\otimes\mathbf{c}^{(r)},$$ where R is an integer. The smallest R such that Equation: 7 is verified, is called the rank of the tensor X . For convenience, the vectors in the previous expression (7) can be stacked into the factor matrix A ∈ R I1×R, B ∈ R I2×R and C ∈ R I3×R such as the i th columns of A is the vector a (i)(same reasoning for B and C). In the sequel, the goal is to approximate neural network tensors by low-rank tensors of the form. $$\mathcal{X}=\sum_{r=1}^{R}\mathbf{A}_{[:,r]}\otimes\mathbf{B}_{[:,r]}\otimes\mathbf{C}_{[:,r]}\tag{8}$$ **Training and initialisation** $${\mathrm{initialisation}}$$ A crucial aspect of deep learning is initialization. Inspired by general approach, our adding part has to be equal to zero at the beginning of our training. The natural choice is to initialize the specific task matrix of each layer B and E at zero. The rest of our adding parameters are initialized randomly. ## A.3 Parameter Efficiency In this section, we investigate the number of training parameters. 
We note θ ⋆the parameters of the frozen BERT, and we note the set of our adding matrix index by the number of blocks NB, and by the number of bias per block $nb_{bias}$: $\{\mathbf{A}^{j},\mathbf{B}^{j},\mathbf{C}^{j}\}_{j=1}^{2\times NB},\{\mathbf{D}^{j},\mathbf{E}^{j}\}_{j=1}^{nb_{bias}\times NB}$. In this case the model parameters are: $\Theta=(\theta^{*},\{\mathbf{A}^{j}\}_{j=1}^{2\times NB},\{\mathbf{C}^{j}\}_{j=1}^{2\times NB},\{\mathbf{D}^{j}\}_{j=1}^{nb_{bias}\times NB})$ and $\{\phi_{i}\}_{i=1}^{T}=(\{\mathbf{B}^{j}_{[i,:]}\}_{j=1}^{2\times NB},\{\mathbf{E}^{j}_{[i,:]}\}_{j=1}^{NB})$. The number of trainable shared parameters is equal to |Θ\θ ⋆| = NB × d × (4rw + rb × nb*bias*) and the number of specific task parameters is equal to |{ϕi} T i=1| = NB × T(2rw + rb × nb*bias*). In the case where the regularisation L0 is applied, we considered that these parameters are negligible. The number of added parameters depends linearly on the number of tasks but does not depend on the dimension of the hidden space d which makes our approach efficient in terms of parameters. ## A.4 Dataset $$\mathbf{\Pi}(7)$$ We considered the General Language Understanding Evaluation (GLUE) benchmark in our experiments. This benchmark is composed of a large variety of task like *Single-Sentence Classification*: CoLA (Warstadt et al., 2018), SST-2(Socher et al., 2013), *Similarity and Paraphrase tasks*: MRPC (Dolan and Brockett, 2005), STS-B (Cer et al., 2017), QPP (Iyer et al., 2017) and *Inference* Tasks: MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), RTE (Dagan et al., 2005) and WNLI(Levesque et al., 2012). Following other studies, we did not take into account WNLI (Levesque et al., 2012) and we considered MNLI to be composed of two different tasks: MNLI-m (matched) and Mnli-mm (mismatched) more details can be found in Table 3. ## A.5 Interaction Between Task When it comes to mode-2 unfolding, the cosine similarity between slices is comparable. With our tensor representation, the following may be efficiently computed: $$S^{w e i g h t}=n o r m a l i z e({\bf Q}_{(2)}{\bf Q}_{(2)}^{T})\qquad(9)$$ $$S^{weight}=normalize(\mathbf{B}(\mathbf{C}\odot\mathbf{A})^{T}(\mathbf{C}\odot\mathbf{A})\mathbf{B}^{T})\tag{10}$$ $$S^{w e i g h t}=n o r m a l i z e(\mathbf{B}(\mathbf{C}^{T}\mathbf{C}*\mathbf{A}^{T}\mathbf{A})\mathbf{B}^{T})\,\,\,(11)$$ | Tasks | Corpus | |Train| | |Test| | |---------------------------------|---------------------------------------|-----------|----------| | CoLA (Warstadt et al., 2018) | Corpus of Linguistic Acceptability | 8.5K | 1K | | SST-2 (Socher et al., 2013) | Stanford Sentiment Treebank | 67K | 1.8K | | MRPC (Dolan and Brockett, 2005) | Microsoft Research Paraphrase Corpus | 3.7K | 1.7K | | STS-B (Cer et al., 2017) | Semantic Textual Similarity Benchmark | 7K | 1.4K | | QQP (Iyer et al., 2017) | Quora Question Pairs | 364K | 391K | | MNLI (Williams et al., 2018) | Multi-Genre NLI | 393K | 20K | | QNLI (Rajpurkar et al., 2016) | Question NLI | 105K | 5.4K | | RTE (Dagan et al., 2005) | Recognition Textual Entailment | 2.5K | 3K | | WNLI (Levesque et al., 2012) | Winograd NLI | 634 | 146 | Table 3: Presentation of tasks in GLUE (Wang et al., 2018) with their corresponding training and test sets sizes. 
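The identity in Equations (9)–(11) avoids materialising the full $d \times T \times d$ tensor. The small numerical check below uses random factors and toy sizes (d = 8, T = 4, r = 3), so it is purely illustrative; $*$ denotes the elementwise product.

```python
import torch

d, T, r = 8, 4, 3
A, B, C = torch.randn(d, r), torch.randn(T, r), torch.randn(d, r)

# Dense route: build each slice Q_i = A diag(B[i]) C^T and compare flattened slices.
slices = torch.stack([A @ torch.diag(B[i]) @ C.T for i in range(T)])  # (T, d, d)
flat = slices.reshape(T, -1)
gram_dense = flat @ flat.T                                            # <Q_i, Q_j> entries

# Factorised route of Eq. (11): B (C^T C * A^T A) B^T.
gram_factored = B @ ((C.T @ C) * (A.T @ A)) @ B.T

print(torch.allclose(gram_dense, gram_factored, atol=1e-5))  # True

# Cosine normalisation as in Eq. (9)/(5).
norms = gram_factored.diag().clamp_min(1e-12).sqrt()
S_weight = gram_factored / (norms[:, None] * norms[None, :])
print(S_weight.shape)  # torch.Size([4, 4])
```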
| Algorithm 1: Training of Morris T Input: Dataset {Di} i=1; Loss function per task {li} T i=1 Task sampling policy: P; if Apply Regularisation L0 then for step ← 1 to 104 do Select one task 't' according to P; Select one batch bt = (Xt , Yt) ∈ Dt ; Compute the loss lt(f(Θ, ϕt ◦ z, Xt), YT ); Update Θ, ϕt and α; end Threshold on α:; α[α < 0.5] = −10; α[α > 0.5] = 10 end 18 do for step ← 1 to 2 Select one task 't' according to P; Select one batch bt = (Xt , Yt) ∈ Dt ; Compute the loss lt(f(Θ, ϕt , Xt), YT ); Update Θ and ϕt ; end Output: Trained Parameters Θ and {ϕi} T i=1 | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ## A.6 Implementation Details We utilized the pre-trained BERT base uncased offered by Hugging Face2. For the optimizer AdamW Loshchilov and Hutter (2017), we use a linear decay with a warmup of 0.06 and gradient clipping for all experiments. Our model is evaluated each 2000 steps for a total of 2 18 training steps. Only the best checkpoint on average is kept. For LoRA approach we choose a rank equal to eight which seems to be very efficient, for number of epochs we also followed the instruction in LoRA (Hu et al., 2021) used for the Roberta model. Our model is 2https://huggingface.co/bert-base-uncased training like in (Liu et al., 2019) . ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✗ A2. Did you discuss any potential risks of your work? Our work does not involve any risk. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** ✗ B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. 
For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 4 And A (Appendix) ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We report the type of GPU used in appendix the rest of the computation budget is academic. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 and A (appendix) ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? The main results are obtained online (https://gluebenchmark.com) with the test set which has a limited number of submission per day. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
enguehard-2023-sequential
Sequential Integrated Gradients: a simple but effective method for explaining language models
https://aclanthology.org/2023.findings-acl.477
Several explanation methods such as Integrated Gradients (IG) can be characterised as path-based methods, as they rely on a straight line between the data and an uninformative baseline. However, when applied to language models, these methods produce a path for each word of a sentence simultaneously, which could lead to creating sentences from interpolated words that either have no clear meaning or have a significantly different meaning compared to the original sentence. In order to keep the meaning of these sentences as close as possible to the original one, we propose Sequential Integrated Gradients (SIG), which computes the importance of each word in a sentence by keeping every other word fixed, only creating interpolations between the baseline and the word of interest. Moreover, inspired by the training procedure of language models, we also propose to replace the baseline token "pad" with the trained token "mask". While being a simple improvement over the original IG method, we show on various models and datasets that SIG proves to be a very effective method for explaining language models.
# Sequential Integrated Gradients: A Simple But Effective Method For Explaining Language Models Joseph Enguehard ![0_image_0.png](0_image_0.png) Babylon Health Skippr joseph@skippr.com ## Abstract Several explanation methods such as Integrated Gradients (IG) can be characterised as pathbased methods, as they rely on a straight line between the data and an uninformative baseline. However, when applied to language models, these methods produce a path for each word of a sentence simultaneously, which could lead to creating sentences from interpolated words either having no clear meaning, or having a significantly different meaning compared to the original sentence. In order to keep the meaning of these sentences as close as possible to the original one, we propose Sequential Integrated Gradients (SIG), which computes the importance of each word in a sentence by keeping fixed every other words, only creating interpolations between the baseline and the word of interest. Moreover, inspired by the training procedure of several language models, we also propose to replace the baseline token "pad" with the trained token "mask". While being a simple improvement over the original IG method, we show on various models and datasets that SIG proves to be a very effective method for explaining language models.1 ## 1 Introduction Language models such as BERT (Devlin et al., 2018) have demonstrated to be effective on various tasks, for instance on sentiment analysis (Hoang et al., 2019), machine translation (Zhu et al., 2020), text summarization (Liu, 2019) or intent classification (Chen et al., 2019). However, with the increased performance and usage of such models, there has been a parallel drive to develop methods to explain predictions made by these models. Indeed, BERT and its variations are complex models which do not allow a user to easily understand why a certain prediction has been produced. On the other hand, it is important to be able to explain a 1An implementation of this work can be found at https: //github.com/josephenguehard/time_interpret Figure 1: **Comparison between IG, DIG, and our** method: SIG. While DIG improves on IG by creating discretized paths between the data and the baseline, it can produce sentences with a different meaning compared to the original one. Our method tackles this issue by fixing every word to their true value except one, and moving the remaining word along a straight path (SIG) model's predictions, especially when this model is used to make high-stake decisions, or when there is a risk of a discriminating bias, for instance when detecting hate speech on social media (Sap et al., 2019). As a result, developing effective methods to explain not only language models, but also machine learning models in general, has recently gained significant attention. Many different methods have therefore been proposed such as: LIME (Ribeiro et al., 2016), Grad*Inp (Shrikumar et al., 2016), Integrated Gradients (IG) (Sundararajan et al., 2017), DeepLift (Shrikumar et al., 2017) or GradientShap (Lundberg and Lee, 2017). Among these methods, some can be characterised as path-based, which means that they rely on a straight line between the data and an uninformative baseline. For instance, IG computes gradients on interpolated points along such a path, while DeepLift and GradientShap can be seen as approximations of IG (Ancona et al., 2017; Lundberg and Lee, 2017). 
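As a concrete reference for the path-based methods discussed above, the following sketch implements the usual Riemann-sum approximation of IG over word embeddings. The toy linear scorer, the all-zeros baseline and the dimensions are illustrative assumptions, not the models or baselines evaluated later in this paper.

```python
import torch

def integrated_gradients(model, x, baseline, n_steps=50):
    """Standard IG: (x - baseline) times the mean gradient along the straight path."""
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, n_steps):
        point = (baseline + alpha * (x - baseline)).clone().requires_grad_(True)
        model(point).backward()          # scalar output, gradient w.r.t. the interpolated point
        total_grad += point.grad
    return (x - baseline) * total_grad / n_steps

# Toy "sentence": 4 words with 8-dimensional embeddings, scored by a linear model.
torch.manual_seed(0)
embeddings = torch.randn(4, 8)
weights = torch.randn(4, 8)
score = lambda e: (e * weights).sum()    # stands in for F(x)
baseline = torch.zeros_like(embeddings)  # stands in for an uninformative baseline embedding

attr = integrated_gradients(score, embeddings, baseline)
print(attr.sum(dim=-1))                  # per-word attributions (summed over embedding features)
```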
While these methods aim to be used on any type of models and data, some have been tailored to the specificity of language models. For instance, Sanyal and Ren (2021) challenge the use of continuous paths on a word embedding space which is inherently discrete. They propose as a result Discretized Integrated Gradient (DIG), which replaces the continuous straight path with a discretized one, where interpolated points are words. In our work, we suggest another potential issue when applying path-based explanation methods on language models. These models are usually designed to be used on individual or multiple sentences, in order to perform for instance sentiment analysis or question answering. However, a path-based method applied on such models creates straight lines between each word and a baseline simultaneously. When interpolated points are grouped together to form a sentence, this sentence could have a very different meaning compared with the original one. As a result, we propose a simple method to alleviate this potential issue: computing the importance of each word in a sentence or a text by keeping fixed every other word and only creating interpolations between the baseline and the word of interest. After computing the importance of each word in this way, we normalise these attributions across the sentence or text we aim to explain. We call this method Sequential Integrated Gradients (SIG), as, although we focus in this work on language models, such a method could be used on any sequential modelling. We also propose to use the token "mask" as a baseline, when possible, as its embedding has been trained to replace part of sentences when training language models. As a result, our method follows closely the training procedure of these models. ## 2 Method SIG formulation Let's define a language model as a function F(x) : R m×n → R. The input x is here modelled as a sequence of m words, each having n features. These features are usually constructed by an embedding layer. We denote xithe i th word of a sentence (or of a text, depending on the input of the model), and xij the j th feature of the i th word. The output of F is a value in R, which is, in our experiments, a measure of the sentiment for a given sentence. We now define the baseline for each word xi as x i = (x1*, ...,* <mask>*, ...,* xm). The baseline is therefore identical to x except at the i th position, where the word xiis replaced by the embedding of the word "mask"2, a token used in many language model to replace part of the sentence during training. Moreover, we use the notation x iinstead of xi as x icorresponds to an entire sentence, not to be mistaken with a single word like xi. In this setting, we keep the baseline as similar to the original sentence as possible, only changing the word of interest. This method of explaining a word is also kept similar to the way these language models are usually pre-trained, by randomly masking part of sentences. Let's now define our Sequential Integrated Gradients (SIG) method. For a word xi and a feature j, SIG is defined as: $$\mathbf{SIG}_{i j}(\mathbf{x}):=(x_{i j}-{\overline{{x}}}_{i j})\times$$ $$\int_{0}^{1}{\frac{\partial\mathbf{F}({\overline{{\mathbf{x}}}}^{i}+\alpha\times(\mathbf{x}-{\overline{{\mathbf{x}}}}^{i}))}{\partial x_{i j}}}\,d\alpha$$ Similar to the original IG (Sundararajan et al., 2017), we compute the gradient of F along a straight line between x iand x for each word xi, the main difference being that the baseline differs for each word. 
Also similar to the original IG, we approximate in practice the integral with Riemann summation. Finally, we compute the overall attribution of a word by computing the sum over the feature dimension j, and normalising the result: $$\mathbf{SIG}_{i}(\mathbf{x}):={\frac{\sum_{j}\mathbf{SIG}_{i j}}{||\mathbf{SIG}||}}$$ Axioms satisfied by SIG The original Integrated Gradients method satisfies a few axioms that are considered desirable for any explanation methods to have. Among these axioms, SIG follows implementation invariance, which states that attributions should be identical if two models are functionally equivalent. Moreover, SIG follows completeness 2Certain language models, such as GPT-2 (Radford et al., 2019), do not have a "mask" token. A "pad" token should be therefore used for such models. | Method | DistilBERT | RoBERTa | BERT | | | | | | | |--------------|--------------|-----------|--------|--------|--------|-------|--------|--------|-------| | LO ↓ | Comp ↑ | Suff ↓ | LO ↓ | Comp ↑ | Suff ↓ | LO ↓ | Comp ↑ | Suff ↓ | | | Grad*Inp | -0.412 | 0.112 | 0.375 | -0.199 | 0.0760 | 0.426 | -0.263 | 0.0923 | 0.439 | | DeepLift | -0.624 | 0.170 | 0.271 | -0.261 | 0.0932 | 0.408 | -0.244 | 0.0898 | 0.438 | | GradientShap | -1.32 | 0.303 | 0.258 | -0.896 | 0.261 | 0.314 | -0.622 | 0.219 | 0.388 | | IG | -1.96 | 0.445 | 0.151 | -1.44 | 0.405 | 0.226 | -0.981 | 0.345 | 0.352 | | DIG | -1.69 | 0.384 | 0.167 | -0.824 | 0.263 | 0.278 | -0.777 | 0.287 | 0.345 | | SIG | -2.02 | 0.473 | 0.0992 | -1.62 | 0.440 | 0.216 | -1.19 | 0.392 | 0.312 | Table 1: Comparison of SIG with several feature attribution methods on three language models fine-tuned on the SST2 dataset. For ↑ metrics, the higher the better, while for ↓ ones, the lower the better. | Method | DistilBERT | RoBERTa | BERT | | | | | | | |--------------|--------------|-----------|---------|---------|--------|-----------|--------|--------|-------| | LO ↓ | Comp ↑ | Suff ↓ | LO ↓ | Comp ↑ | Suff ↓ | LO ↓ | Comp ↑ | Suff ↓ | | | Grad*Inp | -0.153 | 0.0766 | 0.209 | -0.0892 | 0.0432 | 0.300 | -0.291 | 0.0887 | 0.298 | | DeepLift | -0.269 | 0.117 | 0.159 | -0.124 | 0.0557 | 0.269 | -0.285 | 0.0701 | 0.366 | | GradientShap | -0.832 | 0.289 | 0.137 | -0.606 | 0.204 | 0.144 | -0.874 | 0.172 | 0.308 | | IG | -1.50 | 0.534 | 0.0428 | -1.35 | 0.441 | 0.0327 | -1.58 | 0.302 | 0.224 | | DIG | -0.779 | 0.304 | 0.133 | -0.663 | 0.186 | 0.108 | -1.06 | 0.207 | 0.232 | | SIG | -1.95 | 0.564 | 0.00409 | -1.37 | 0.404 | -3.31E-05 | -2.12 | 0.364 | 0.124 | Table 2: Comparison of SIG with several feature attribution methods on three language models fine-tuned on the IMDB dataset. in a specific way: for each word xi, we have the following result: $$\sum_{i}S I G_{i j}(\mathbf{x})=\operatorname{F}(\mathbf{x})-\operatorname{F}({\overline{{\mathbf{x}}}}^{i})$$ j This means that for each word, the sum of its attribution across all features j is equal to the difference between the output of the model as x and at its corresponding baseline x i. However, it does not entail that Pij SIGij (x) = F(x) − F(x), where x would be an overall baseline filled with <mask>. Moreover, this last axiom entail another one called sensitivity, which here means that if, for a certain word, the input x has the same influence on the output of F as its corresponding baseline x i, then Pj SIGij (x) = 0. Finally, we show in Appendix A that SIG preserves symmetry for each word on the embedding dimension, but that this axiom is not true in general. 
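To make the definition concrete, the sketch below implements SIG on the same kind of toy scorer: for each word i the straight path starts from a copy of the sentence in which only that word is replaced by the mask embedding, gradients are taken with respect to that word alone, and the per-word sums are normalised across the sentence (one reading of the ‖SIG‖ term). The toy scorer and the random stand-in for the trained <mask> vector are assumptions for illustration.

```python
import torch

def sequential_integrated_gradients(model, x, mask_embedding, n_steps=20):
    """SIG sketch: per-word baseline x̄^i = x with word i replaced by <mask>."""
    m, n = x.shape
    attributions = torch.zeros(m)
    for i in range(m):
        baseline_i = x.clone()
        baseline_i[i] = mask_embedding            # only word i is masked
        total_grad = torch.zeros(n)
        for alpha in torch.linspace(0.0, 1.0, n_steps):
            point = (baseline_i + alpha * (x - baseline_i)).clone().requires_grad_(True)
            model(point).backward()
            total_grad += point.grad[i]           # gradient w.r.t. word i only
        sig_i = (x[i] - mask_embedding) * total_grad / n_steps   # SIG_ij over features j
        attributions[i] = sig_i.sum()             # sum over the embedding dimension
    return attributions / attributions.norm()     # normalise across the sentence

torch.manual_seed(0)
embeddings = torch.randn(4, 8)                    # 4 words, 8-dim embeddings
weights = torch.randn(4, 8)
score = lambda e: (e * weights).sum()             # toy stand-in for F
mask_embedding = torch.randn(8)                   # stand-in for the trained <mask> vector

print(sequential_integrated_gradients(score, embeddings, mask_embedding))
```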
Using mask instead of pad as a baseline We propose in this study to replace, as the baseline, the commonly used "pad" token with the "mask" token, on language models having such token. This seems to go against the intuition that the baseline should be uninformative, as "mask" is a trained token. To support the usage of "mask", we argue that, because <PAD> (denoting the embedding of "pad") is untrained, it could be arbitrarily close to some words, and far from others. Oh the other hand, <MASK> has been trained to replace random words, making it ideally as close to one word as to any other. Another way to see it is to compare it with images. It is natural for images to choose the baseline as a black image, as this baseline has no information. However, there is no such guarantee in NLP. For instance, the embedding of "pad": <0, 0, 0, . . . , 0> could perfectly be very close to an embedding of a word with a specific meaning, which would harm the explanations. On the other hand, <MASK> has been trained to replace any word, and therefore seems more suited to be the baseline. ## 3 Experiments 3.1 Experiments Design We evaluate SIG against various explanation methods by closely following the experimental setup of Sanyal and Ren (2021). As such, we use the following language models: BERT (Devlin et al., 2018), DistilBERT (Sanh et al., 2019) and RoBERTa (Liu | Method | DistilBERT | RoBERTa | BERT | | | | | | | |--------------|--------------|-----------|--------|--------|--------|-------|--------|--------|-------| | LO ↓ | Comp ↑ | Suff ↓ | LO ↓ | Comp ↑ | Suff ↓ | LO ↓ | Comp ↑ | Suff ↓ | | | Grad*Inp | -0.257 | 0.0681 | 0.315 | -0.121 | 0.0617 | 0.363 | -0.438 | 0.143 | 0.438 | | DeepLift | -0.332 | 0.101 | 0.260 | -0.163 | 0.0804 | 0.348 | -0.452 | 0.123 | 0.450 | | GradientShap | -0.452 | 0.237 | 0.212 | -0.389 | 0.194 | 0.299 | -0.715 | 0.204 | 0.438 | | IG | -0.540 | 0.341 | 0.163 | -0.787 | 0.354 | 0.242 | -1.19 | 0.307 | 0.410 | | DIG | -0.487 | 0.273 | 0.181 | -0.426 | 0.223 | 0.286 | -1.05 | 0.293 | 0.414 | | SIG | -0.533 | 0.331 | 0.134 | -0.869 | 0.361 | 0.251 | -1.52 | 0.390 | 0.349 | | steps | LO ↓ | Comp ↑ | Suff ↓ | Delta | Time | | |---------|--------|----------|----------|---------|--------|--------------| | IG | 50 | -0.981 | 0.345 | 0.352 | 0.304 | t | | SIG | 50 | -1.19 | 0.392 | 0.312 | 4.82 | N × t | | IG | 250 | -0.999 | 0.352 | 0.355 | 0.055 | t ′ | | IG | 10 × N | -0.998 | 0.351 | 0.352 | 0.066 | N × t ′ / 25 | | SIG | 10 | -1.14 | 0.373 | 0.322 | 4.93 | N × t ′ / 25 | Table 4: Comparison of IG and SIG with different numbers of interpolations on BERT fine-tuned on the SST2 dataset. t and t′represent the amount of time to calculate IG with 50 and 250 steps respectively, and N represents the number of words on the input data (for instance in one sentence). On the SST2 dataset, we have an average of: N ≈ 25 words per sentence. On top of the table, we compare IG and SIG using a fixed number of steps. On the bottom of the table, we compare IG with 250 steps against SIG with 10 steps. Since N ≈ 25, we have N × t′/ 25 ≈ t′. For a fairer comparison, we also compare IG with a variable number of steps: 10 × N for each sentence, against SIG with 10 steps. These two methods have the same time complexity. Delta is defined as Pij *Attr*ij (x)−(F(x)−F(x)). Contrary to IG, SIG has a high delta value, as in general Pij SIGij (x) ̸= F(x) − F(x). et al., 2019). 
We also use the following datasets: SST2 (Socher et al., 2013), IMDB (Maas et al., 2011) and Rotten Tomatoes (Pang and Lee, 2005), which classify sentences into positive or negative sentiments or reviews. Moreover, we use the HuggingFace library to recover processed data and pretrained models (Wolf et al., 2019). Following (Sanyal and Ren, 2021), we use the following evaluation metrics: Log-Odds (Shrikumar et al., 2017), Comprehensiveness (DeYoung et al., 2019) and Sufficiency (DeYoung et al., 2019). These metrics mask the top or bottom 20 % important features, according to an attribution method, and measure by how much the prediction of the language model changes using this masked data, compared with the original one. For more details on these metrics, please see Sanyal and Ren (2021). Finally, we use the following feature attribution methods to compare our methods against: Grad*Inp (Shrikumar et al., 2016), Integrated Gradients (Sundararajan et al., 2017), DeepLift (Shrikumar et al., 2017), GradientShap (Lundberg and Lee, 2017) and Discretized IG (DIG) (Sanyal and Ren, 2021) using the GREEDY heuristics. Moreover, as in Sanyal and Ren (2021), we use 50 interpolation steps for all methods expect from DIG, for which we use 30 steps. ## 3.2 Results Comparison with other feature attribution methods We present of Tables 1, 2 and 3 a comparison of the performance of SIG with the attribution methods listed in 3.1. We observe that SIG significantly outperforms all other methods across most datasets and language models we used. This tends to confirm that the change of overall meaning of a sentence by combining interpolations simultaneously is an important issue which needs to be tackled. Comparison between IG and DIG Although results in Sanyal and Ren (2021) show that DIG outperforms other methods, including IG, this is not the case when using "mask" as a token. This result seems to undermine the intuition of Sanyal and Ren (2021) that the discrete nature of the embedding space is an important factor when explaining a language model. We also show in Appendix C that the requirement of having a monotonic path, stressed by Sanyal and Ren (2021), is not necessary. | Method | Example | |----------|------------------------------------------------------------------------------------------------------------------| | IG | "a well-made and often lovely depiction of the mysteries of friendship. | | SIG | "a well-made and often lovely depiction of the mysteries of friendship. | | IG | "a hideous , confusing spectacle , one that may well put the nail in the coffin of any future rice adaptations." | | SIG | "a hideous , confusing spectacle , one that may well put the nail in the coffin of any future rice adaptations." | | IG | "this is junk food cinema at its greasiest." | | SIG | "this is junk food cinema at its greasiest." | | IG | "a remarkable 179-minute meditation on the nature of revolution." | | SIG | "a remarkable 179-minute meditation on the nature of revolution." | Choice of the baseline token We also provide in Appendix B results using "pad" as a baseline. Comparison between Tables 1, 2 and 3 on one hand, and Tables 6, 7, 8 on the other hand show that IG greatly improves using the "mask" token as a baseline. This seems to confirm our intuition of using this token instead of "pad". Moreover, SIG performs similarly using either token, which demonstrates the robustness of this method across these two baseline tokens. 
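The erasure-based metrics described in Section 3.1 (Log-Odds, Comprehensiveness, Sufficiency) can be summarised in a few lines. The sketch below gives one common formulation under stated assumptions: `predict_prob` is a hypothetical wrapper returning the classifier's probability for its originally predicted class, masking replaces rather than deletes tokens, and the 20% threshold follows the setup above.

```python
import numpy as np

def top_k_indices(attributions, frac=0.2):
    """Indices of the top `frac` most important tokens according to an attribution method."""
    k = max(1, int(round(frac * len(attributions))))
    return np.argsort(attributions)[::-1][:k]

def erasure_metrics(predict_prob, tokens, attributions, mask_token="[MASK]", frac=0.2):
    """One common formulation of Log-Odds, Comprehensiveness and Sufficiency."""
    top = set(top_k_indices(attributions, frac))
    p_full = predict_prob(tokens)
    # Log-Odds / Comprehensiveness: mask the top tokens and re-predict.
    without_top = [mask_token if i in top else t for i, t in enumerate(tokens)]
    p_without = predict_prob(without_top)
    # Sufficiency: keep only the top tokens.
    only_top = [t if i in top else mask_token for i, t in enumerate(tokens)]
    p_only = predict_prob(only_top)
    return {
        "log_odds": float(np.log(p_without + 1e-12) - np.log(p_full + 1e-12)),
        "comprehensiveness": float(p_full - p_without),
        "sufficiency": float(p_full - p_only),
    }

# Toy demonstration (hypothetical classifier): probability grows with unmasked tokens.
toy_prob = lambda toks: sum(t != "[MASK]" for t in toks) / len(toks)
print(erasure_metrics(toy_prob, ["a", "great", "movie", "!"], np.array([0.1, 0.9, 0.3, 0.2])))
```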
Time complexity of SIG One important drawback of SIG is its time complexity, which is dependent on the number of words in the input data. In Table 4, we compare the original IG with SIG, using different numbers of steps. We define t and t′as the time complexity of computing IG with respectively 50 and 250 steps, and N the number of words in the input data. This table shows that, although reducing the number of steps results in a decrease of performance, SIG with 10 steps still performs better than both IG with 250 steps and IG with 10 × N steps, while having the same time complexity. Moreover, as noted in Sanyal and Ren (2021), using IG with a large number of steps decreases Delta = Pij IGij (x) − (F(x) − F(x)), while not improving performance. As a result, when computing attributions on long sentences or large texts, we recommend using SIG with a reduced number of steps instead of IG. Comparison of IG and SIG on several examples We provide on Table 5 several examples of explained sentences, using IG and SIG. Both methods tend to agree on short sentences, while more disagreements appear on larger ones. For each example, we display in underlined bold the most important token, and in bold the top 20 % most important tokens, according to each method. ## 4 Conclusion In this work, we have defined an attribution method specific to text data: Sequential Integrated Gradients (SIG). We have shown that SIG yields significantly better results than the original Integrated Gradients (IG), as well as other methods specific to language models, such as Discretized Integrated Gradients (DIG). This suggests that keeping the meaning of interpolated sentences close to the original one is key to producing good explanations. We have also shown that, although SIG can be computationally intensive, reducing the number of interpolations still yields better results than IG with a greater number of interpolations. We have also highlighted in this work the benefit of using the token "mask" as a baseline, instead of "pad". Although SIG seems to be robust across both tokens, this is especially important when using IG, as it significantly improves the quality of explanations. Using the trainable token "mask" is indeed closer to the training procedure of language models, and should yield better interpolations as a result. We recommend therefore using this token as a baseline, when possible, when explaining predictions made by a language model. Moreover, while this study was conducted on bidirectional language models such as BERT, SIG could also be used on auto-regressive models such as GPT-2 (Radford et al., 2019), by iteratively computing the attribution of a token, while keeping previous tokens fixed, and masking future tokens if any has been already computed. ## Limitations We see two main limitations of this work. The first one concerns the diversity of the language models and datasets used. BERT, DistilBERT and RoBERTa have similar architecture, and SST2, IMDB and Rotten Tomatoes are datasets designed to evaluate the sentiment of English text. It would therefore be interesting to validate the robustness of our results on more diverse languages, tasks and language models. In this short paper, we decided for brevity to follow the experiment design of Sanyal and Ren (2021), while being aware of its inherent limitations. The second limitation of this work concerns the time complexity of SIG. 
As it needs to compute explanations for each word individually, this method can become very computationally expensive when applied on large text data. To alleviate this issue, we first made it possible to compute gradients in parallel, using an internal batch size similar to how Captum (Kokhlikyan et al., 2020) implemented the Integrated Gradients method. Secondly, as discussed in 3.2, it is possible to reduce the number of interpolated points, which makes the computation faster while retaining better performance than the original IG. In this work, we ran our experiments on a machine with 16 CPUs, and one Nvidia Tesla T4 GPU. With this setting, computing SIG on SST2 and Rotten Tomatoes takes around one hour for each model. On the larger IMDB, computing SIG, on 2000 randomly sampled inputs, takes around 5 days for BERT and RoBERTa, and 2 days for DistilBERT. ## Ethics Statement The methods presented in this work aim to explain language models, and can as such present ethical issues related to this task. Discriminating biases can indeed be present in text data on which a language model is trained, and such a model can acquire and propagate these biases (Sap et al., 2019). As the presented methods aim to explain a language model without additional knowledge, these methods could also display discriminating biases learnt by a language model. Moreover, common explanation methods such as Integrated Gradients has proved to be prone to adversarial attacks (Dombrowski et al., 2019), and can be misleading when used on out of sample data (Slack et al., 2021). There is no reason to believe our methods would be more robust compared to existing methods such as IG. The proposed methods can also be characterised as gradient-based, as they rely on computing gradients on the input data, an uninformative baseline, or on interpolated points between them. As noted by (Mittelstadt et al., 2019), such methods are only local and may not give a clear explanation of the model globally. ## Acknowledgement The author would like to thank Vitalii Zhelezniak for his thoughtful comments and suggestions, including using the "mask" token as a baseline. We also thank Anthony Hu for his detailed initial review of this paper. ## References Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. 2017. Towards better understanding of gradient-based attribution methods for deep neural networks. *arXiv preprint arXiv:1711.06104*. Qian Chen, Zhu Zhuo, and Wen Wang. 2019. Bert for joint intent classification and slot filling. *arXiv* preprint arXiv:1902.10909. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models. *arXiv preprint* arXiv:1911.03429. Ann-Kathrin Dombrowski, Maximillian Alber, Christopher Anders, Marcel Ackermann, Klaus-Robert Müller, and Pan Kessel. 2019. Explanations can be manipulated and geometry is to blame. *Advances in* Neural Information Processing Systems, 32. Mickel Hoang, Oskar Alija Bihorac, and Jacobo Rouces. 2019. Aspect-based sentiment analysis using BERT. In *Proceedings of the 22nd Nordic Conference on* Computational Linguistics, pages 187–196, Turku, Finland. Linköping University Electronic Press. 
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. 2020. Captum: A unified and generic model interpretability library for pytorch. Yang Liu. 2019. Fine-tune bert for extractive summarization. *arXiv preprint arXiv:1903.10318*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. Advances in neural information processing systems, 30. Daniel D Lundstrom, Tianjian Huang, and Meisam Razaviyayn. 2022. A rigorous study of integrated gradients method and extensions to internal neuron attributions. In International Conference on Machine Learning, pages 14485–14508. PMLR. Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pages 142–150. Brent Mittelstadt, Chris Russell, and Sandra Wachter. 2019. Explaining explanations in ai. In Proceedings of the conference on fairness, accountability, and transparency, pages 279–288. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. *arXiv preprint cs/0506075*. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135– 1144. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108. Soumya Sanyal and Xiang Ren. 2021. Discretized integrated gradients for explaining language models. arXiv preprint arXiv:2108.13654. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 1668–1678. Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In *International* conference on machine learning, pages 3145–3153. PMLR. Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. 2016. Not just a black box: Learning important features through propagating activation differences. *arXiv preprint arXiv:1605.01713*. Dylan Slack, Anna Hilgard, Himabindu Lakkaraju, and Sameer Singh. 2021. Counterfactual explanations can be manipulated. Advances in Neural Information Processing Systems, 34:62–75. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 conference on empirical methods in natural language processing*, pages 1631–1642. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. 
Axiomatic attribution for deep networks. In *International conference on machine learning*, pages 3319– 3328. PMLR. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. arXiv preprint arXiv:1910.03771. Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tie-Yan Liu. 2020. Incorporating bert into neural machine translation. arXiv preprint arXiv:2002.06823. ## A On The Symmetry-Preserving Axiom Of Sequential Integrated Gradients This section is divided into two parts. First, we show that SIG preserves symmetry for each word along the embedding dimension. Second, we provide a counterexample to show that symmetry does not hold in general. Symmetry of SIGi Let us use the same notations as in Section 2. We want to compute the attribution of a word xi on a model F, using the baseline x i. Let's define the function: Fi(x) := F(x1, ..., x*, ...,* xm) Fi corresponds to F where only the i th word is not fixed. Here, x corresponds to a word, not a sentence. For such a function, SIG corresponds to the regular IG method: the baseline is < mask > and SIG constructs a straight line between this baseline and xi. As a result, if Fiis symmetric on two embedding features j1 and j2, SIG preserves this symmetry: SIGij1 (x) = SIGij2 (x). Non symmetry of SIG The fact that SIG does not preserve symmetry in general is due to the choice of the baseline. As a counterexample, let's define a language F which takes as an input two words x1 and x2. This language model is moreover symmetric: F(x1, x2) = F(x2, x1). Here, the original IG method would preserve the symmetry: as the baseline is (<mask>, <mask>), when x1 = x2, we have IG(x)1 = IG(x)2. However, SIG doesn't preserve the symmetry due to its baseline: we would have: x 1 = (<mask>, x2) and x 2 = (x1, <mask>). As a result, SIG(x)1 = SIG(x)2 only if x1 = x2 = <mask>. ## B Additional Results Using The "Pad" Token We present in this section results using the "pad" token instead of the "mask" one. These results for the three datasets: SST2, IMDB and Rotten Tomatoes can be found respectively on Tables 6, 7 and 8. When using the "pad" token as a baseline, SIG seems to perform similarly compared with using the "mask" one, while other methods perform significantly worse. This demonstrates both the need to use "mask" as a token, and the robustness of the SIG method across different baselines. ## C **Challenge Of The Monotonic Assumption** Of The Path Sanyal and Ren (2021) stipulate that the path between a baseline and an input needs to be monotonic to allow approximating the integral in IG using Riemann summation. However, while this is true for a Riemann integral, it is also possible to approximate the Riemann–Stieltjes integral, which is a generalisation of Riemann integral, and does not need a monotonic path. We define the Riemann–Stieltjes integral of f : [a, b] → R as: $$\int_{\mathbf{x}=\mathbf{a}}^{\mathbf{b}}f(\mathbf{x})\,d g(\mathbf{x})$$ where g : [0, 1] → [a, b] designates a path. Let us define a partition over [0, 1] as tk such as 0 ≤ t1 ≤ ... ≤ tn ≤ 1. We can then approximate the integral with the sum: nX−1 $\mathbf{a}$ i=0 $$\sum_{i}f(g(c_{i}))\times[g(t_{i+1})-g(t_{i})]$$ where ci ∈ [ti, ti+1]. As such, while the partition ti, i ∈ {1*, ..., n*} needs to be monotonic, the function g does not need to have this constraint. 
As a result, we could define a path-based IG method as: $$I G_{\gamma}(\mathbf{x})_{i}:=\int_{\gamma}{\frac{\partial\mathbf{F}(\mathbf{x})}{\partial x_{i}}}\,d x_{i}$$ where γ is not necessarily monotonic. (Lundstrom et al., 2022) provide more insights on this topic, and in particular show that the implementation invariance, completeness and sensitivity axioms hold for non-monotonic paths. For this reason, we decided not to include a combination of DIG and SIG in this study. However, an implementation of this method and the corresponding results can be found in the repository published with this paper. | Method | DistilBERT | RoBERTa | BERT | | | | | | | |--------------|--------------|-----------|--------|--------|--------|-------|--------|--------|-------| | LO ↓ | Comp ↑ | Suff ↓ | LO ↓ | Comp ↑ | Suff ↓ | LO ↓ | Comp ↑ | Suff ↓ | | | Grad*Inp | -0.402 | 0.112 | 0.375 | -0.318 | 0.085 | 0.398 | -0.454 | 0.092 | 0.439 | | DeepLift | -0.196 | 0.053 | 0.489 | -0.270 | 0.0784 | 0.439 | -0.283 | 0.061 | 0.463 | | GradientShap | -0.753 | 0.191 | 0.328 | -0.514 | 0.146 | 0.386 | -0.471 | 0.146 | 0.425 | | IG | -0.954 | 0.251 | 0.273 | -0.726 | 0.227 | 0.315 | -0.658 | 0.235 | 0.398 | | DIG | -1.222 | 0.310 | 0.237 | -0.812 | 0.249 | 0.287 | -0.879 | 0.292 | 0.374 | | SIG | -1.993 | 0.466 | 0.108 | -1.346 | 0.398 | 0.244 | -1.30 | 0.393 | 0.331 | Table 6: Comparison of SIG with several baselines on three language models fine-tuned on the SST2 dataset. For ↑ metrics, the higher the better, while for ↓ ones, the lower the better. Table 7: Comparison of SIG with several baselines on three language models fine-tuned on the IMDB dataset. | Method | DistilBERT | RoBERTa | BERT | | | | | | | |--------------|--------------|-----------|--------|--------|--------|--------|--------|--------|-------| | LO ↓ | Comp ↑ | Suff ↓ | LO ↓ | Comp ↑ | Suff ↓ | LO ↓ | Comp ↑ | Suff ↓ | | | Grad*Inp | -0.189 | 0.082 | 0.209 | -0.216 | 0.047 | 0.315 | -0.654 | 0.087 | 0.299 | | DeepLift | -0.032 | -0.005 | 0.515 | -0.149 | 0.031 | 0.374 | -0.519 | 0.027 | 0.465 | | GradientShap | -0.315 | 0.117 | 0.302 | -0.351 | 0.110 | 0.213 | -0.622 | 0.088 | 0.358 | | IG | -0.474 | 0.186 | 0.201 | -0.499 | 0.169 | 0.114 | -0.577 | 0.117 | 0.288 | | DIG | -0.812 | 0.297 | 0.153 | -0.626 | 0.187 | 0.099 | -0.971 | 0.192 | 0.229 | | SIG | -2.157 | 0.585 | 0.0062 | -0.856 | 0.291 | 0.0207 | -1.96 | 0.352 | 0.152 | Table 8: Comparison of SIG with several baselines on three language models fine-tuned on the Rotten Tomatoes dataset. | Method | DistilBERT | RoBERTa | BERT | | | | | | | |--------------|--------------|-----------|--------|--------|-------|-------|--------|-------|-------| | Grad*Inp | -0.152 | 0.068 | 0.315 | -0.211 | 0.062 | 0.363 | -0.806 | 0.143 | 0.438 | | DeepLift | -0.077 | 0.017 | 0.372 | -0.198 | 0.056 | 0.370 | -0.457 | 0.076 | 0.474 | | GradientShap | -0.326 | 0.147 | 0.250 | -0.264 | 0.103 | 0.348 | -0.697 | 0.161 | 0.429 | | IG | -0.424 | 0.208 | 0.190 | -0.360 | 0.151 | 0.312 | -0.795 | 0.201 | 0.414 | | DIG | -0.501 | 0.257 | 0.184 | -0.346 | 0.153 | 0.310 | -1.06 | 0.267 | 0.416 | | SIG | -0.753 | 0.378 | 0.109 | -0.771 | 0.318 | 0.266 | -1.55 | 0.360 | 0.393 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section at the end of the paper, before references ✓ A2. Did you discuss any potential risks of your work? In the introduction and limitation sections ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. 
✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank. ✓ B1. Did you cite the creators of artifacts you used? I created my own and used code from https://github.com/INK-USC/DIG B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Open source ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? In the appendix B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In the experiment section ## C ✓ **Did You Run Computational Experiments?** In The Experiment Section ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? The models used are pre-trained, standard language models. The computation budget and infrastructure used is discussed in the ethics statement. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Not applicable. The is no training and hyperparameter search ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? It is just a single run - the results are not stochastic C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? 
For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
floto-etal-2023-diffudetox
DiffuDetox: A Mixed Diffusion Model for Text Detoxification
https://aclanthology.org/2023.findings-acl.478
Text detoxification is a conditional text generation task aiming to remove offensive content from toxic text. It is highly useful for online forums and social media, where offensive content is frequently encountered. Intuitively, there are diverse ways to detoxify sentences while preserving their meanings, and we can select from detoxified sentences before displaying text to users. Conditional diffusion models are particularly suitable for this task given their demonstrated higher generative diversity than existing conditional text generation models based on language models. Nonetheless, text fluency declines when they are trained with insufficient data, which is the case for this task. In this work, we propose DiffuDetox, a mixed conditional and unconditional diffusion model for text detoxification. The conditional model takes toxic text as the condition and reduces its toxicity, yielding a diverse set of detoxified sentences. The unconditional model is trained to recover the input text, which allows the introduction of additional fluent text for training and thus ensures text fluency. Extensive experimental results and in-depth analysis demonstrate the effectiveness of our proposed DiffuDetox.
# Diffudetox: A Mixed Diffusion Model For Text Detoxification Griffin Floto1, Mohammad Mahdi Abdollah Pour1, Parsa Farinneya1**, Zhenwei Tang**1, Ali Pesaranghader2**, Manasa Bharadwaj**2, and **Scott Sanner**1 1 University of Toronto, Canada {griffin.floto,m.abdollahpour,parsa.farinneya,zhenwei.tang}@mail.utoronto.ca ssanner@mie.utoronto.ca 2 LG Electronics, Toronto AI Lab {ali.pesaranghader, manasa.bharadwaj}@lge.com ## Abstract Text detoxification is a conditional text generation task aiming to remove offensive content from toxic text. It is highly useful for online forums and social media, where offensive content is frequently encountered. Intuitively, there are diverse ways to detoxify sentences while preserving their meanings, and we can select from detoxified sentences before displaying text to users. Conditional diffusion models are particularly suitable for this task given their demonstrated higher generative diversity than existing conditional text generation models based on language models. Nonetheless, text fluency declines when they are trained with insufficient data, which is the case for this task. In this work, we propose DiffuDetox1, a mixed conditional and *unconditional* diffusion model for text detoxification. The conditional model takes toxic text as the condition and reduces its toxicity, yielding a diverse set of detoxified sentences. The unconditional model is trained to recover the input text, which allows the introduction of additional fluent text for training and thus ensures text fluency. Extensive experimental results and in-depth analysis demonstrate the effectiveness of our proposed DiffuDetox. ## 1 Introduction Toxic texts with offensive and abusive words are frequently encountered in online forums and social media. Such a harmful online environment can lead to mental health problems (Viner et al., 2019; Wijesiriwardene et al., 2020), which motivates considerable research efforts (dos Santos et al., 2018; Laugier et al., 2021; Logacheva et al., 2022) in text detoxification, i.e., a conditional text generation task aiming to remove offensive content from sentences while preserving their meanings. Intuitively, there exist diverse ways to detoxify a given sentence. As shown in Table 1, some detoxified sentences are the results of simply removing 1https://github.com/D3Mlab/diffu-detox Table 1: A diverse collection of detoxified sentences helps to approach human-level text detoxification. | Toxic | The country doesn't really have to give a shit about international laws. | |--------------|----------------------------------------------------------------------------| | Detoxified 1 | The country doesn't really have to give [· · · ] about international laws. | | Detoxified 2 | The country doesn't really have care about international laws. | | Detoxified 3 | The country doesn't really need to care about international laws. | | Human | The country doesn't need to care about international laws. | or replacing the toxic word, e.g., Detoxified 1 and 2, which may cause loss of information or lower text fluency. While other candidates, e.g., Detoxified 3, can reach human-level text detoxification performance with satisfactory fluency and content preservation. Therefore, if a diverse collection of detoxified sentences are given, we can select the most fluent and preservative one to maximize user experience. 
To do so, we resort to textual conditional diffusion models (Li et al., 2022; Gong et al., 2022) because they are shown to be capable of generating more diverse sets of candidates compared to existing solutions based on transformers (Vaswani et al., 2017), e.g., GPT2 (Radford et al., 2019). Given their demonstrated high generative diversity, diffusion models are particularly suitable for this task. Nevertheless, previous textual conditional diffusion models (Li et al., 2022; Gong et al., 2022) are not directly applicable to text detoxification due to the scarcity of text detoxification data. Given that text detoxification is a relatively new field and the high cost of human annotations, the available text detoxification data is on the order of 1e−1to 1e−2 of datasets used for other tasks with textual conditional diffusion models (Gong et al., 2022). To this end, we introduce DiffuDetox, a mixed conditional and *unconditional* diffusion model for text detoxification. In particular, the conditional ![1_image_0.png](1_image_0.png) model takes toxic text as a condition and through a Markov chain of diffusion steps, yields a diverse set of detoxified sentences. On the other hand, the unconditional model is trained to recover any given input text exactly. That allows us to introduce additional fluent text to be reconstructed by the unconditional model, which is used to improve the fluency of the conditionally generated detoxified sentences. In this way, the resulting diffusion model can maintain a diverse collection of detoxified candidates with satisfactory sentence fluency and content preservation. Extensive experimental results and in-depth discussions demonstrate the effectiveness of DiffuDetox for text detoxification. Our main contributions are summarized in two folds: 1) To the best of our knowledge, we are the first to approach text detoxification with diffusion models, which can maintain a rich collection of detoxified sentences by their high generative diversity; 2) We propose a mixed diffusion model for text detoxification, where the conditional model reduces text toxicity and the unconditional model improves text fluency. ## 2 Related Work 2.1 Text Detoxification Previous text detoxification efforts fall into two main categories, *supervised* and *unsupervised*. The unsupervised methods are built on a set of toxic and a set of non-toxic texts without one-to-one mappings between them. Representative methods include Mask&Infill (Wu et al., 2019), DRGTemplate/Retrieve (Li et al., 2018), DLSM (He et al., 2020), SST (Lee, 2020), CondBERT and ParaGeDi (Dale et al., 2021). In contrast, the supervised methods are built on parallel datasets in which one-to-one mappings between toxic and non-toxic texts are explicitly provided. ParaDetox (Logacheva et al., 2022) is a well-established method within this category, which fine-tunes BART (Lewis et al., 2020) on their parallel data. ## 2.2 Textual Diffusion Models Diffusion probabilistic models are deep generative models with Markov chains of diffusion steps to recover the noise slowly added to data (SohlDickstein et al., 2015). Recently, diffusion models have shown impressive performance on *continuous* domains such as image and audio generation (Ho et al., 2020; Kong et al., 2020), sparking interest in using these models in *discrete* spaces like text. 
Some textual diffusion models use a discrete diffusion process that operates on word tokens (Savinov et al., 2022; Reid et al., 2022), whereas other methods convert text to embeddings, and then treat text as continuous variables (Li et al., 2022; Strudel et al., 2022). Although textual diffusion models have proved to be effective in various text generation tasks with rich data (Gong et al., 2022), they have not yet been applied to tasks with fewer training samples, such as text detoxification in our case. Ho and Salimans (2021) are the first to exploit unconditional diffusion models for conditional generation, while their method is limited to images and is not aiming for introducing additional data under the low-data setting. ## 3 Methodology As the overall framework of DiffuDetox shown in Figure 1 details, our proposed diffusion model for text detoxification improves text fluency in the low-training data regime by using a mixture of a conditional and unconditional diffusion model. We overview diffusion models before discussing DiffuDetox in detail. ## 3.1 Diffusion Models Diffusion is a generative modeling paradigm that can be understood as a denoising algorithm (SohlDickstein et al., 2015; Song and Ermon, 2019; Song et al., 2021). Noise is gradually added to data samples, while the diffusion model is trained to reverse the process and recover the original data. The framework can be described as a Markov process with T steps, where the original data exist at t = 0. Given a sample x0, the so-called forward process gradually adds noise to the data points, i.e., the blue arrows in Figure 1. The noisy sample can be described by: $$q(\mathbf{x}_{t}|\mathbf{x}_{t-1}):={\mathcal{N}}(\mathbf{x}_{t};{\sqrt{1-\beta_{t}}}\mathbf{x_{t}},\beta_{t}\mathbf{I})$$ where the variance schedule parameters β1, · · · , βT are selected such that βt ∈ [0, 1] and β0 is close to 0 and βT is close to 1 (Ho et al., 2020). This ensures that when t ≈ 0, the data has little noise added to it, while when t ≈ T, the data is identical to a sample from a standard Gaussian distribution. The reverse process then attempts to remove the noise that was added in the forward process and is parameterized by θ as: $$p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}):={\mathcal{N}}(\mathbf{x}_{t-1};\mu_{\theta}(\mathbf{x}_{t},t),\sigma_{t}\mathbf{I})$$ where the predictive model µθ is: $$\mu_{\theta}:=\frac{1}{\sqrt{\alpha_{t}}}(\mathbf{x}_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(\mathbf{x}_{t},t))\qquad(3)$$ which depends on time-dependent coefficients α := 1 − βt, α¯t:= Qts=1 αs. In Eq. (3), ϵθ is interpreted as predicting the noise that was added to xt. To optimize the log-likelihood of this model, a simplified training objective is used which reduces the problem to: $${\mathcal{L}}=\mathbb{E}_{t,{\mathbf{x}}_{0},\epsilon}[\left\|\epsilon-\epsilon_{\theta}({\sqrt{\bar{\alpha}_{t}}}{\mathbf{x}}_{0}+{\sqrt{1-\bar{\alpha}_{t}}}\epsilon,t)\right\|^{2}]\quad0$$ 2] (4) After training, samples are generated by beginning with pure noise from a standard Gaussian distribution, which is then gradually denoised T times by the learned reverse process. ## 3.2 Diffudetox: A Mixed Diffusion Model For Text Detoxification The task of text detoxification can be viewed as generating a non-toxic sentence, conditioned on a toxic input sentence. The goal is to ensure that the semantics and content of the text are preserved after detoxification, while ensuring that the generated text is fluent. 
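Before turning to the conditional formulation, the simplified objective of Eq. (4) can be made concrete in a few lines of PyTorch. The tiny MLP denoiser, the linear β schedule and the crude timestep feature below are illustrative assumptions, not the architecture actually used by DiffuDetox.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # assumed variance schedule β_1..β_T
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # ᾱ_t = Π_s α_s

# ε_θ(x_t, t): a tiny MLP stand-in for the real denoiser
denoiser = nn.Sequential(nn.Linear(16 + 1, 64), nn.ReLU(), nn.Linear(64, 16))

def diffusion_loss(x0):
    """Simplified DDPM objective of Eq. (4): predict the noise added at a random step t."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps   # one-shot sample of the forward process
    t_feat = (t.float() / T).unsqueeze(-1)                  # crude timestep conditioning
    eps_pred = denoiser(torch.cat([x_t, t_feat], dim=-1))
    return ((eps - eps_pred) ** 2).mean()

x0 = torch.randn(8, 16)   # stands in for a batch of embedding vectors
loss = diffusion_loss(x0)
loss.backward()
print(float(loss))
```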
With this interpretation (Gong et al., 2022), we can apply a conditional diffusion model that generated non-toxic text, when conditioned on a toxic sentence. A conditional diffusion model is modified such that the reverse process is now pθ(xt−1|xt, c), and the predictive model is ϵθ(xt, c, t). This model can be interpreted as mapping sequences to sequences in a non-autoregressive manner. To apply this model to textual data, sentences are tokenized and converted to a stack of embeddings which are then taken to be x0 in the diffusion process. When sampling, embeddings that are generated by the diffusion model are converted to tokens by a shallow single-layer decoder. While diffusion models have high sample diversity which can be used to generate a large number of candidate items, the fluency of the samples is degraded when trained on a smaller dataset. We propose to use a combination of the conditional model diffusion model as well as an unconditional model to tackle this problem. The conditional model is used to detoxify text, whereas the unconditional model can be used to guide the sampling process towards higher quality samples (Ho and Salimans, 2021). The models are combined in a manner that is inspired by the gradient of an implicit classifier p i(c|x) ∝ p(x|c)/p(x) such that the following linear combination of the models is used for sampling: $${\mathrm{(2)}}$$ ϵ¯θ(x, c) = (1 + w)ϵθ(x, c) − wϵθ(x) (5) ## 4 Experiments 4.1 Experimental Settings Datasets. We conduct our experiments upon a well-established benchmarking dataset ParaDetox2 (Logacheva et al., 2022), which provides humanannotated one-to-one mappings of toxic and nontoxic sentence pairs from 20,437 paraphrases of 12,610 toxic sentences. We use the same data split of Logacheva et al. (2022) with 671 testing sentences for fair performance comparisons. We further consider the BookCorpus (Zhu et al., 2015), 2https://huggingface.co/datasets/ SkolkovoInstitute/paradetox MNLI (Wang et al., 2019), and WikiAuto (Jiang et al., 2020), datasets as additional data for unconditional diffusion model training. Evaluation Metrics. We follow the wellestablished text detoxification work (Logacheva et al., 2022) to evaluate DiffuDetox with BLEU, Style Accuracy (STA), Content Preservation (SIM), Fluency (FL), and J score. In particular, STA and FL are computed with pre-trained classifiers (Warstadt et al., 2019) to measure the non-toxicity and fluency of a given sentence, respectively. And we compute SIM using cosine similarity between the input and the generated detoxified text with the model of Wieting et al. (2019). Moreover, we compute J score (Krishna et al., 2020) as the averaged multiplication of STA, SIM, and FL, which is highly correlated with human evaluation as shown by Logacheva et al. (2022). Implementation Details. We implement our mixed conditional and unconditional models with a single diffusion model where c = ∅ for the unconditional case. During training, the conditional model is selected with probability φ = 0.8, and the unconditional model is trained using the non-toxic sentences sampled from the ParaDetox dataset and the additional dataset with equal probabilities. We use the union of the BookCorpus, WikiAuto, and MNLI as the additional dataset. In the test stage, we select the best samples from a candidate set of 20 using the J score. The reported results are from a model trained for 1e 5steps with a batch size of 32, and the mixture weighting parameter w in Eq. (5) is set to 5. 
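Concretely, Eq. (5) is a weighted combination of the conditional and unconditional noise predictions at every reverse step, which then replaces ε_θ in the denoising mean of Eq. (3). A minimal sketch follows; `model`, `toxic_cond` and the schedule arrays are hypothetical names, and σ_t is treated here as a standard deviation.

```python
import torch

def guided_noise(eps_cond, eps_uncond, w=5.0):
    """Eq. (5): eps_bar = (1 + w) * eps_theta(x, c) - w * eps_theta(x); w = 5 in our setup."""
    return (1.0 + w) * eps_cond - w * eps_uncond

def reverse_step(x_t, eps_bar, beta_t, alpha_t, alpha_bar_t, sigma_t):
    """One denoising step: the guided prediction stands in for eps_theta in the mean of Eq. (3)."""
    mean = (x_t - beta_t / (1.0 - alpha_bar_t).sqrt() * eps_bar) / alpha_t.sqrt()
    return mean + sigma_t * torch.randn_like(x_t)   # sigma_t taken as a std (assumption)

# Hypothetical sampling-loop usage, with model(x_t, t, cond) returning eps_theta:
# eps_bar = guided_noise(model(x_t, t, toxic_cond), model(x_t, t, None), w=5.0)
# x_prev = reverse_step(x_t, eps_bar, betas[t], alphas[t], alphas_bar[t], sigmas[t])
```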
We use the text detoxification methods listed in Section 2.1 as baselines. ## 4.2 Experimental Results Performance Comparison. We have two key observations from the results shown in Table 2. Firstly, our proposed DiffuDetox outperforms most baseline methods on most evaluation metrics, and it is reaching state-of-the-art performance by outperforming ParaDetox on two metrics, demonstrating the effectiveness of our proposed method. Another observation is that DiffuDetox achieves a higher J score than human-level text detoxification. Note that the J score has been shown to be highly correlated with human annotations (Logacheva et al., 2022). This human-level performance of DiffuDetox shows its promise to be deployed in real-world text detoxification scenarios to facilitate users in online forums and social media. BLEU STA SIM FL J Human 100.0 0.96 0.77 0.88 0.66 DRG-Template 53.86 0.90 0.82 0.69 0.51 DRG-Retrieve 4.74 0.97 0.36 0.86 0.31 Mask&Infill 52.47 0.91 0.82 0.63 0.48 CondBERT 42.45 0.98 0.77 0.88 0.62 SST 30.20 0.86 0.57 0.19 0.10 ParaGeDi 25.39 **0.99** 0.71 0.88 0.62 DLSM 21.13 0.76 0.76 0.52 0.25 ParaDetox **64.53** 0.89 0.86 0.89 *0.68* Conditional 61.43 0.91 0.87 0.78 0.64 DiffuDetox 62.13 0.92 **0.88** 0.80 *0.67* Moreover, such results are achieved by selecting from the diverse collection of detoxified sentences generated by diffusion models, which reveals their high generative diversity and the suitability of being applied to text detoxification. Examples of detoxified sentences generated by DiffuDetox can be found in Appendix A. Ablation Study. We conduct ablations study to investigate the effectiveness of the unconditional model. Since the unconditional model allows the introduction of the additional fluent text, the ablation study can provide insights into the effect of both the unconditional model and the introduced additional data. As shown in Table 2, the model named *Conditional* represents DiffuDetox without the unconditional component. We observe that the addition of the unconditional model improves all the metrics. In particular, text fluency achieves the most significant performance gain. More importantly, the addition of the unconditional model pushes the diffusion model over the human baseline for the J score. Such results demonstrate the effectiveness of the unconditional model and the introduced additional fluent text in improving text fluency and overall performance. ## 5 Conclusion In this paper, we approach the text detoxification task with diffusion models for their demonstrated high generative diversity. We introduced DiffuDetox, a mixed conditional and unconditional diffusion model, where the conditional part reduces toxicity whereas the unconditional part ensures fluency. Experimental results show DiffuDetox achieves human-level text detoxification performance, making it promising to be applied in realworld text detoxification systems to benefit users. ## Limitations And Future Work One limitation of our method is that sampling requires sampling both a conditional and a unconditional model, which results in slower inference times. On the other hand, progressive distillation (Meng et al., 2022) provides an attractive solution to this problem. Another limitation is that Ho and Salimans (2021) show that the diversity of generative models is degraded as w increases. Ideally we would be able to have a model that improves upon the fluency as well as the model diversity. 
As for future work, we will leverage advanced large language models as the base architecture for training diffusion models to compete with high performance auto-regressive models. Additionally, we will investigate modifications to diffusion models that are inherent to discrete data. ## Ethics Statement Potential Misuse: DiffuDetox can hypothetically be used to obtain toxic sentences from non-toxic sentences. However, the effectiveness of such a scenario should be investigated. Environmental Cost: We note that while our work required extensive experiments to draw sound conclusions, future work will be able to draw on these insights and need not run as many large-scale comparisons. Models in production may be trained once using the most promising settings. ## Acknowledgements We would like to acknowledge that this work was supported by LG Electronics, Toronto AI Lab Grant Ref No. 2022-1473. ## References David Dale, Anton Voronov, Daryna Dementieva, Varvara Logacheva, Olga Kozlova, Nikita Semenov, and Alexander Panchenko. 2021. Text detoxification using large pre-trained neural models. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7979–7996. Cicero dos Santos, Igor Melnyk, and Inkit Padhi. 2018. Fighting offensive language on social media with unsupervised text style transfer. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 189–194. Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and LingPeng Kong. 2022. Diffuseq: Sequence to sequence text generation with diffusion models. arXiv preprint arXiv:2210.08933. Junxian He, Xinyi Wang, Graham Neubig, and Taylor Berg-Kirkpatrick. 2020. A probabilistic formulation of unsupervised text style transfer. arXiv preprint arXiv:2002.03912. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems. Jonathan Ho and Tim Salimans. 2021. Classifier-free diffusion guidance. In *NeurIPS 2021 Workshop on* Deep Generative Models and Downstream Applications. Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu. 2020. Neural CRF model for sentence alignment in text simplification. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7943–7960. Association for Computational Linguistics. Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. 2020. Diffwave: A versatile diffusion model for audio synthesis. Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as paraphrase generation. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 737–762. Léo Laugier, John Pavlopoulos, Jeffrey Sorensen, and Lucas Dixon. 2021. Civil rephrases of toxic texts with self-supervised transformers. In *Proceedings* of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1442–1461. Joosung Lee. 2020. Stable style transformer: Delete and generate approach with encoder-decoder for text style transfer. In Proceedings of the 13th International Conference on Natural Language Generation, pages 195–204. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. 
Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880. Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865–1874. Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori Hashimoto. 2022. DiffusionLM improves controllable text generation. In *Advances in Neural Information Processing Systems*. Varvara Logacheva, Daryna Dementieva, Sergey Ustyantsev, Daniil Moskovskiy, David Dale, Irina Krotova, Nikita Semenov, and Alexander Panchenko. 2022. Paradetox: Detoxification with parallel data. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 6804–6818. Chenlin Meng, Robin Rombach, Ruiqi Gao, Diederik P. Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans. 2022. On distillation of guided diffusion models. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Machel Reid, Vincent J. Hellendoorn, and Graham Neubig. 2022. Diffuser: Discrete diffusion via edit-based reconstruction. Nikolay Savinov, Junyoung Chung, Mikolaj Binkowski, Erich Elsen, and Aaron van den Oord. 2022. Stepunrolled denoising autoencoders for text generation. In *International Conference on Learning Representations*. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In *Proceedings of the 32nd International* Conference on Machine Learning, Proceedings of Machine Learning Research. Yang Song and Stefano Ermon. 2019. Generative modeling by estimating gradients of the data distribution. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. 2021. Score-based generative modeling through stochastic differential equations. In *International* Conference on Learning Representations. Robin Strudel, Corentin Tallec, Florent Altché, Yilun Du, Yaroslav Ganin, Arthur Mensch, Will Grathwohl, Nikolay Savinov, Sander Dieleman, Laurent Sifre, and Rémi Leblond. 2022. Self-conditioned embedding diffusion for text generation. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Russell M Viner, Aswathikutty Gireesh, Neza Stiglic, Lee D Hudson, Anne-Lise Goddings, Joseph L Ward, and Dasha E Nicholls. 2019. Roles of cyberbullying, sleep, and physical activity in mediating the effects of social media use on mental health and wellbeing among young people in england: a secondary analysis of longitudinal data. The Lancet Child & Adolescent Health, 3(10):685–696. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In the Proceedings of ICLR. Alex Warstadt, Amanpreet Singh, and Samuel Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. 
John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, and Graham Neubig. 2019. Beyond bleu: Training neural machine translation with semantic similarity. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4344– 4355. Thilini Wijesiriwardene, Hale Inan, Ugur Kursuncu, Manas Gaur, Valerie L Shalin, Krishnaprasad Thirunarayan, Amit Sheth, and I Budak Arpinar. 2020. Alone: A dataset for toxic behavior among adolescents on twitter. In *International Conference* on Social Informatics, pages 427–439. Springer. Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. " mask and infill": Applying masked language model to sentiment transfer. arXiv preprint arXiv:1908.08039. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *The IEEE International Conference on Computer Vision (ICCV)*. | Example | | |------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | Toxic Sentence | that guy used to be a real dick. | | DiffuDetox | that guy used to be a real bad man. | | Human Reference | That guy used to be a mean person. | | Toxic Sentence | i didn t say shit! | | DiffuDetox | i didn t say anything! | | Human Reference | I did not say that. | | Toxic Sentence | one isn t dead but just totally fucked up. | | DiffuDetox | one isn t dead but just totally messed up. | | Human Reference | One is not dead but just completely destroyed. | | Toxic Sentence | kid, you should be watching the crap they call cartoons now made for your generation . | | DiffuDetox | kid, you should be watching the stuff they call cartoons now made for your generation. | | Human Reference | kid, you should be watching the cartoons now made for your generation. | | Toxic Sentence | spoiler russia doesn t give a fuck what canada thinks. | | DiffuDetox | spoiler russia doesn t care what canada thinks. | | Human Reference | Russia doesn't care what Canada thinks. | | Toxic Sentence | how the fuck do you support the economy when you can barely support yourself. | | DiffuDetox | how do you support the economy when you can barely support yourself. | | Human Reference | How to support the economy when you can barely support yourself | | Toxic Sentence | harper needs to get the fuck out of office. | | DiffuDetox | harper needs to get out of office. | | Human Reference | Harper needs to get out of office | | Toxic Sentence | again , give me the name of the store or fuck off, liar. | | DiffuDetox | again, give me the name of the store or go away. | | Human Reference | again, give me the name of the store. | | Toxic Sentence | now that is just a fucking dumb thing to say. | | DiffuDetox | now that is just a bad thing to say. | | Human Reference | now that is just a useless thing to say. | | Table 3: Examples for performance comparison of DiffuDetox against human reference | | ## A Appendix Table 3 shows examples of toxic texts with DiffuDetox paraphrases and human references. DiffuDetox is able to achieve human-level paraphrasing performance as evaluated quantitively in Section 4.2. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. 
Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. 
Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
yuan-etal-2023-separating
Separating Context and Pattern: Learning Disentangled Sentence Representations for Low-Resource Extractive Summarization
https://aclanthology.org/2023.findings-acl.479
Extractive summarization aims to select a set of salient sentences from the source document to form a summary. Context information has been considered one of the key factors for this task. Meanwhile, there also exist other pattern factors that can identify sentence importance, such as sentence position or certain n-gram tokens. However, such pattern information is only effective in specific datasets or domains and can not be generalized like the context information when there only exists limited data. In this case, current extractive summarization models may suffer from a performance drop when transferring to a new dataset. In this paper, we attempt to apply disentangled representation learning on extractive summarization, and separate the two key factors for the task, context and pattern, for a better generalization ability in the low-resource setting. To achieve this, we propose two groups of losses for encoding and disentangling sentence representations into context representations and pattern representations. In this case, we can either use only the context information in the zero-shot setting or fine-tune the pattern information in the few-shot setting. Experimental results on three summarization datasets from different domains show the effectiveness of our proposed approach.
## Separating Context And Pattern: Learning Disentangled Sentence Representations For Low-Resource Extractive Summarization Ruifeng Yuan1, Shichao Sun1, Zili Wang2, Ziqiang Cao3**, Wenjie Li**1 1The Hong Kong Polytechnic University, 2Xiaohongshu Inc, 3Soochow University csryuan@comp.polyu.edu.hk, bruce.sun@connect.polyu.hk wangzili@xiaohongshu.com, zqcao@suda.edu.cn, cswjli@comp.polyu.edu.hk ## Abstract Extractive summarization aims to select a set of salient sentences from the source document to form a summary. Context information has been considered one of the key factors for this task. Meanwhile, there also exist other pattern factors that can identify sentence importance, such as sentence position or certain n-gram tokens. However, such pattern information is only effective in specific datasets or domains and can not be generalized like the context information when there only exists limited data. In this case, current extractive summarization models may suffer from a performance drop when transferring to a new dataset. In this paper, we attempt to apply disentangled representation learning on extractive summarization, and separate the two key factors for the task, context and pattern, for a better generalization ability in the lowresource setting. To achieve this, we propose two groups of losses for encoding and disentangling sentence representations into context representations and pattern representations. In this case, we can either use only the context information in the zero-shot setting or fine-tune the pattern information in the few-shot setting. Experimental results on three summarization datasets from different domains show the effectiveness of our proposed approach. ## 1 Introduction The glob of text summarization is to generate a concise highlight of a source document, which covers the crucial information conveyed in the source text. In this paper, we focus on extractive summarization. It aims to produce summaries by selecting and combining the salient sentences that are directly taken from the source text. It is widely agreed that extractive summarization is mainly based on context information to select the important sentences. Meanwhile, there also exist other factors that can be used to identify these sentences, such as sentence position or certain ngram tokens. As shown in Figure 1, in the news ![0_image_0.png](0_image_0.png) | CNNDM | Num | arXiv | Num | |------------------|-------|---------------|-------| | ( cnn ) - | 21k | in this paper | 11k | | according to the | 3.5k | as a function | 6.4k | | the first time | 2.4k | in the case | 4.8k | | the end of | 1.3k | we find that | 3.7k | Table 1: Examples about the high-frequency n-grams in oracle sentences from CNN/DailyMail and arXiv. summarization dataset, lead sentences always have a much higher possibility to become crucial sentences. Meanwhile, Table 1 shows that sentences with certain n-gram tokens like "in this paper" or "we find that" are also considered to be important in science paper summarization. Here, we collectively called these factors pattern information, since they are context-independent and can decide the sentence importance solely by themselves. However, as we displayed in Figure 1 and Table 1, pattern information varies from dataset to dataset. In this case, such information is only effective in its corresponding dataset or domain and can not be generalized like the context information. 
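Pattern statistics of the kind shown in Table 1 can be obtained with a simple frequency count over the oracle sentences of each dataset. A minimal sketch is given below; the whitespace tokenization and the top-k cutoff are illustrative simplifications.

```python
from collections import Counter

def top_ngrams(oracle_sentences, n=3, k=500):
    # Count n-gram frequencies over oracle (extractive-label) sentences and keep the
    # k most frequent ones as the dataset-specific n-gram pattern set.
    counts = Counter()
    for sent in oracle_sentences:
        toks = sent.lower().split()
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return [ng for ng, _ in counts.most_common(k)]
```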
Although both context information and pattern information are crucial for the task, it is hard to tell whether the improvement of the current extractive summarization models stems from a better understanding of the context information or overfitting the pattern information on specific data. Hence, the existing models may fail to achieve good performance when transferring to other domains or datasets with limited data due to the intermingling of domain-specific pattern information. In this paper, we aim to apply disentangled representation learning to extractive summarization, and separate the two key factors for the task, context information and pattern information, for a better generalization ability in low-resource settings (zero-shot and few-shot). Our model is built on a pretraining-based extractive summarization model (Liu and Lapata, 2019) that uses a BERT to encode each sentence with its context to the latent representation. We would like the latent representation to be disentangled with respect to the context and pattern information. Following the previous works (John et al., 2018; Cheng et al., 2020), we combine the multitask objectives and adversarial objectives/mutual information (MI) minimizing objectives to accomplish this. The multitask objectives aim to encourage the two latent spaces to learn its corresponding information. For the context information, we propose to approximate it by predicting the high-frequency non-stop word appearing in a sentence and its context. For the pattern information, we divide it into two parts: the position pattern feature and the n-gram pattern feature. The former one can be transferred into a sentence position predicting problem, while the latter one is approximated by predicting whether the target sentence contains any high-frequency n-gram patterns. Then we try two commonly used disentangled representation learning approaches, adversarial objectives/MI minimizing objectives, to further ensure the independence between the two latent spaces. After the model is trained on a source dataset, it can be transferred to a target dataset for lowresource extractive summarization. In the zero-shot setting, we only utilize the context representation to do the extractive summarization. In the fewshot setting, we choose to fine-tune the patternrelated parameters with a few training instances to automatically select useful patterns for the target dataset. To evaluate our proposed model, we conduct the experiments on three datasets from different domains: CNN/DaliyMail from the news summarization domain, arXiv from the science article summarization domain, and QMSum from the dialogue summarization domain. These experiments suggest the effectiveness of our model by disentangling context and pattern information. ## 2 Related Work 2.1 Text Summarization Extractive summarization is an important sub-topic for text summarization. Early works (Nallapati et al., 2017; Narayan et al., 2018; Zhou et al., 2018; Zhang et al., 2018) formulated it as a sentence binary classification problem and further extend it with different techniques. With the development of the pretrained model, using a transformerbased pretrained model as encoder (Liu and Lapata, 2019; Bae et al., 2019; Zhang et al., 2019) leads to a huge improvement in the task. Recently, MATCHSUM (Zhong et al., 2020) has achieved a state-of-the-art performance by combining contrastive learning with extractive summarization. These models mainly focus on improving the performance on a certain dataset or domain. 
Research on low-resource text summarization is also increasing. AdaptSum (Yu et al., 2021) propose a pre-train and then fine-tune strategy for low-resource domain adaptation for abstractive summarization. Other researchers (Fabbri et al., 2020) present a similar idea but further enhance it with a data augmentation method using the large corpus from Wikipedia. (Zhao et al., 2022) combines domain words and a prompt-based language model to achieve zero-shot domain adaption in dialogue abstractive summarization. In this work, we aim to explore the lowresource extractive summarization by disentangling context and pattern information. ## 2.2 **Disentanglement Representation Learning** Disentanglement representation has first been explored in computer vision to disentangle features such as color or rotation. Recently, a growing amount of work has been proposed to investigate learning disentangled representations in NLP tasks. Early works (Hu et al., 2017; Shen et al., 2017; John et al., 2018) follow a similar idea, and applied disentanglement representation learning on style/sentiment transferring. Later, researchers further extend its application to different topics such as cross-lingual transfer (Wu et al., 2022), negation and uncertainty learning (Vasilakes et al., 2022), and fair classification(Park et al., 2021). Generally, there are mainly three types of approaches for disentanglement representation learning. A common approach (John et al., 2018) is to add an adversary that competes against the encoder trying to avoid learning certain types of attribute. Another approach (Cheng et al., 2020; Colombo et al., 2021) is to adopt the mutual information theory, and attempt to minimize the mutual information upper bound between two disentangle representations. Recently, some researchers (Colombo et al., 2022) propose a simpler approach by adding a set of regulizers to achieve disentanglement representation learning. Similar to cross-lingual transfer, in this work, we also aim to adopt disentanglement representation learning to domain transferring, but in the context of extractive summarization. ## 3 Model 3.1 Problem Statement In this work, we disentangle the sentence representation for extractive summarization into two parts: context representation and pattern representation. To achieve this, we need to satisfy the following requirements for an effective disentanglement. - The context and pattern representation need to have the ability to predict sentence importance and contribute to the extractive summarization. - The context and pattern representation should be predictive of the corresponding groundtruth information. For example, the pattern representation of a sentence can predict its pattern feature such as its position. - The context and pattern representation should lie in independent vector space, and one representation can not predict the corresponding ground-truth information of the other one. ## 3.2 Extractive Summarization Model Given an input document containing n sentences x = {s1, s2*, .., s*n}, we adopt a BERT to generate contextualized representations for each sentence. Since the output of BERT is grounded to tokens, we use a similar strategy with (Liu and Lapata, 2019) to modify the input sequence of BERT. We insert a [cls] token at the beginning of each sentence and use the embedding of the [cls] token to represent its corresponding sentence. 
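A minimal sketch of this sentence-encoding step, using Hugging Face Transformers: a [CLS] token is placed before every sentence, the whole sequence is encoded once, and each sentence's vector is read from the hidden state at its [CLS] position. The model name, the 500-token cap, and the simplified truncation handling are illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def encode_sentences(sentences, max_len=500):
    # Build one input sequence with a [CLS] token in front of every sentence,
    # then take the hidden state at each [CLS] position as that sentence's representation.
    ids, cls_positions = [], []
    for s in sentences:
        cls_positions.append(len(ids))
        ids += ([tokenizer.cls_token_id]
                + tokenizer.encode(s, add_special_tokens=False)
                + [tokenizer.sep_token_id])
    ids = ids[:max_len]
    with torch.no_grad():  # no_grad only for this illustration; training backpropagates through BERT
        hidden = bert(torch.tensor([ids])).last_hidden_state[0]   # (seq_len, hidden_size)
    keep = [p for p in cls_positions if p < max_len]
    return hidden[keep]                                           # one vector per kept sentence
```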
Considering our glob is to disentangle it to context and pattern representation, we add two additional multilayer perceptrons (MLP) that map the sentence representations generated by BERT to context representations c and pattern representations p. Here, we collectively called the BERT and two MLP mappers encoder E. Then a sigmoid classifier Fext takes the concatenation of both representations as input to predict a score y e i for sentence si, and the loss of the whole model is the binary classification loss of y e i against gold label t e i . Note that the gold label refers to the one-hot distribution of the oracle sentences (the sentence set that has the highest similarity with the reference summary). The loss is shown in the following: $$y_{i}^{e}=F_{e x t}(c_{i};p_{i})\qquad\qquad(1)$$ $ l_{ext}=-\frac{1}{n}\sum_{i}^{n}t_{i}^{e}log(y_{i}^{e})+(1-t_{i}^{e})log(1-y_{i}^{e})$ (2) ... This classification loss serves as our primary training objective for extractive summarization. Meanwhile, to better utilize the context representation and pattern representation in the low-resource setting, we expect the two disentangled representations can do extractive summarization independently. Hence, we add two similar classifiers that directly take context representation or pattern representation as input, and their losses are denoted as lext(c) and lext(p). Note that the gradients of the two classifiers are detached from the main model. ## 3.3 Learning Context Representation The context representation c is expected to do extractive summarization using the context information. In addition to the extractive summarization loss, we add a multitask objective to ensure the context information is contained in it. The question that lies ahead is to define what "context" actually refers to. A widely accepted idea is that the effective context information in extractive summarization is salient words/phrases that repeat multiple times in the context. Inspired by this, given a sentence si, we propose to approximate the context information by predicting the non-stop words existing in both si and its adjacent sentences. The distribution of these words on the vocabulary is considered as the context feature t c i for si. ![3_image_0.png](3_image_0.png) We build a two-layer MLP classifier Fmul(c) on the context representation c to predict the context feature, and the classifier is trained with crossentropy loss against the ground-truth distribution: $$l_{m u l(c)}=-\frac{1}{n}\sum_{i}^{n}\sum_{j\in v o c}t_{i j}^{c}l o g(y_{i j}^{c})\qquad\mathrm{(3)}$$ where the voc stands for the vocabulary and y c i = Fmul(c)(ci) is the predicted context feature. ## 3.4 Learning Pattern Representation The pattern representation p needs to predict both sentence importance and pattern-related features. In this paper, we mainly focus on the two types of pattern, position pattern and n-gram pattern, that contribute the most to extractive summarization. Position pattern refers to the position of the sentence in the document, which plays an important role in the news article summarization. We add a multitask objective that predicts the position of a sentence. In this case, the position pattern feature t o i is a one-hot vector with a length that is the same as the sentence number. N-gram pattern is another crucial factor that influences sentence importance, which represents the expressions/phrases that are commonly used for summaries. 
Inspired by (Salkar et al., 2022), We count the frequencies of all n-grams that appear in the oracle sentences and select the top 500 as the n-gram pattern set. The glob of pattern representation is to predict whether a sentence contains any pattern from the pattern set, which is a binary classification problem. Similarly, we also use two MLP classifiers on the pattern representation p to predict the pattern related feature: $$l_{m u l(p)}=-\frac{1}{n}\sum_{i}^{n}t_{i}^{p}l o g(y_{i}^{p})+(1-t_{i}^{p})l o g(1-y_{i}^{p})\tag{4}$$ $$l_{mul(o)}=-\frac{1}{n}\sum_{i}^{n}\sum_{j}^{n}t_{ij}^{o}log(y_{ij}^{o})\tag{5}$$ where $y_{i}^{p}=F_{mul(p)}(p_{i})$ is the predicted n-gram pattern feature and y o i = Fmul(o)(pi) is the predicted position pattern feature. ## 3.5 Learning Disentangled Representation Although the multitask objectives assist the model to learn context and pattern information in different latent spaces, they are not effective enough to ensure the independence between c and p. As shown in the Figure 2, we adopt two commonly used objectives for learning disentangled representation in this paper. Adversarial Objective Considering one representation should be predictive of their corresponding information only, following (John et al., 2018), we add adversarial classifiers that try to predict the information related to the other one on both latent spaces, and the model is forced to structure the latent spaces such that the outputs of these adversarial classifiers are non-predictive. The adversarial objective is composed of two parts. The first part is the adversarial classifiers on each latent space for each type of non-target information. The second part is the adversarial loss aiming to maximize the entropy of the predicted distribution of the adversarial classifiers. Taking the adversarial objective on the pattern space for example, we train a two-layer MLP classifier, context discriminator Fdis(c), to predict whether it contains any context information. One thing that is worth noticing is that the gradients of these classifiers are not back-propagated to the encoder. In this case, the training of the context discriminator will not influence the encoder. Similar to equation (3) and (5), a cross-entropy loss is shown as follow, but with different input and parameters: $$l_{d i s(c)}=-\frac{1}{n}\sum_{i}^{n}\sum_{j\in v o c}t_{i j}^{c}l o g(y_{i j}^{c})\qquad\mathrm{(6)}$$ where y c i = Fdis(c)(pi) refers to the predicted context feature using pattern representation. Then an adversarial loss is used to maximize the entropy of the output of context discriminator. Here, we only train the encoder with such adversarial loss and the parameters of the context discriminator are excluded. $$l_{a d v(c)}=-\frac{1}{n}\sum_{i}^{n}\sum_{j\in v o c}y_{i j}^{c}l o g(y_{i j}^{c})\qquad(7)$$ We also impose the n-gram pattern discriminator and position pattern discriminator to disentangle the pattern information from the context space. These two adversarial objectives follow nearly the same way as the mentioned one and their corresponding loss are denoted as ldis(p), ldis(o), ladv(p) and ladv(o). MI Minimization Objective Mutual information (MI) is a natural measure of the independence between two variables. Inspired by the previous works (Cheng et al., 2020), minimizing the upperbound estimate of the mutual information (MI) between two latent spaces is an effective way to disentangle them. 
Following the Contrastive Learning Upper-Bound (CLUB) estimate of the MI (Cheng et al., 2020), we firstly train a neural network M that aims to estimate pattern representation by taking context representation as input: $$l_{m a p}=\frac{1}{n}\sum_{i}^{n}k l(M(c_{i}),p_{i})\qquad\qquad(8)$$ where kl stands for the Kullback–Leibler divergence. Just like the discriminator in the adversarial objective, we fix the parameters of the encoder when we train the neural network M with this loss. We minimize the Mutual information between the two latent spaces by minimizing the following equation: lmi = 1 n Xn i kl(M(pi), ci) − kl(M(pi), ck) (9) where k is selected uniformly from indices {1*, ..., n*}. Here, the optimization is only performed with parameters of the encoder E. ## 3.6 Training Strategy The loss of our model mainly consists of two parts, the losses that update the discriminator (for MI Objective, it is M) and the main loss (all the other losses). In the training process, for each batch, we first optimize the discriminator by ldis(c), ldis(p) and ldis(o) with a weight λdis (for MI Objective, it is lmap), and then optimize the encoder and all other classifiers with the main loss. The main loss Lall for our model comprises three types of terms: the extractive summarization objectives, the context/pattern feature learning objectives and adversarial objectives (for MI Objective, it is lmi), given by $$\begin{array}{c}{{l_{a l l}=l_{e x t}+l_{e x t(c)}+l_{e x t(p)}+}}\\ {{\qquad\lambda_{m u l}l_{m u l(c)}-\lambda_{a d v}l_{a d v(c)}+}}\\ {{\qquad\lambda_{m u l}l_{m u l(p)}-\lambda_{a d v}l_{a d v(p)}+}}\\ {{\qquad\lambda_{m u l}l_{m u l(o)}-\lambda_{a d v}l_{a d v(o)}}}\end{array}\qquad\mathrm{(10)}$$ The checkpoint selection strategy and hyperparameter searching are also crucial for model training. Considering the glob of our model is to effectively utilize the context information in the target dataset rather than achieve the best performance on the source dataset, we follow two rules: (1) The disentanglement is successful (based on the training log); (2) We select the checkpoint with the best performance when using context representation on the validation set. In the experiment, the weights are λmul = 1, λadv = 1, λdis = 3 ## 3.7 Application In Low-Resource Setting After we train the model on a source dataset, we can transfer it to a target dataset with limited data. Considering the pattern information in the source dataset may be misleading in a target dataset, we use the context representation to do the extractive summarization in the zero-shot setting. As for the few-shot setting, the data samples from the target | Dataset | Type | Domain | Size | Source length | Target length | |---------------|----------|----------|--------------------|-----------------|-----------------| | QMsum | dialogue | meeting | 1257/272/281 | 9070 | 70 | | arXiv | document | science | 202914/6436/6440 | 6030 | 273 | | CNN/DailyMail | document | news | 287227/13368/11490 | 766 | 53 | Table 2: The statistics and comparison of the datasets. dataset provide the model a chance to accomplish a quick adjustment on its pattern information. In this case, we choose to fine-tune the pattern-related parameters with the given samples to select useful patterns for the target dataset. ## 4 Experiment 4.1 Experiment Details Dataset: We evaluate our proposed methods in three English datasets from different domains. The detailed information and comparison are shown in Table 2. 
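To make the encoder-side disentanglement objectives of Section 3.5 concrete, the sketch below shows the two terms, assuming context/pattern representations `c`, `p` of shape (n, d) and generic `discriminator`/`mapper` networks. The squared-error distance in the CLUB term is a simplified stand-in for the KL formulation of Eqs. (8)–(9).

```python
import torch
import torch.nn.functional as F

def adversarial_entropy(discriminator, p):
    # l_adv: entropy of the context discriminator's prediction made from the pattern
    # representation (cf. Eq. (7)). It enters the total loss with a negative weight,
    # so minimizing the total loss pushes this entropy up, making the pattern space
    # non-predictive of context. The discriminator itself is updated separately,
    # from detached representations (cf. Eq. (6)).
    probs = F.softmax(discriminator(p), dim=-1)
    return -(probs * torch.log(probs + 1e-12)).sum(dim=-1).mean()

def club_mi_term(mapper, c, p):
    # CLUB-style MI upper bound (cf. Eq. (9)): distance on aligned (c_i, p_i) pairs minus
    # distance on shuffled pairs; `mapper` is the network M of Eq. (8), trained separately
    # to predict p from c while the encoder is frozen.
    pred = mapper(c)
    positive = ((pred - p) ** 2).mean()
    negative = ((pred - p[torch.randperm(p.size(0))]) ** 2).mean()
    return positive - negative
```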
**arXiv** (Cohan et al., 2018) collects academic articles from arXiv.org as source documents and uses the abstracts of these articles as the target summaries. **QMSum** (Zhong et al., 2021) is one of the benchmark datasets for dialogue summarization. Considering the QMSum dataset contains both data samples for normal text summarization and query-focused summarization, we only use the data samples that contain no query. Meanwhile, the number of training data in QMSum is relatively small, so we only use it for testing. **CNN/DailyMail** (Nallapati et al., 2016) is the classic dataset for news summarization. It is also known for suffering from lead bias, where the summaries that consist of the lead three sentences can achieve a relatively good performance. Model Details: In this work, we adopt BERT-base as the encoder of our model. Our implementation is based on Transformers from Hugging Face. In the training, the learning rate is set to 2e-5, and the batch size is set to 16. We conduct the validation for every 2000 steps and train the model for a maximum of 30000 steps. We truncate all the input documents to 500 tokens. For the long-input summarization dataset such as arXiv and QMSum, we split the original document into multiple chunks and generate extractive summarization scores for the sentences in each chunk independently. In all experiments, we select 3 sentences for CNN/DM and 6 sentences for arXiv and QMSum. Following previous works, we also adopt the trigram blocking trick during inference. Evaluation Metric: We adopt Rouge as our evalu- | To arXiv | R-1 | R-2 | R-L | |------------|-------|-------|-------| | Lead* | 33.66 | 8.94 | 22.19 | | TextRank* | 24.38 | 10.57 | 22.18 | | LexRank* | 33.85 | 10.73 | 28.99 | | AdaptSum | 36.28 | 9.17 | 32.26 | | Our_adv | 37.03 | 9.64 | 33.03 | | Our_mi | 36.89 | 9.44 | 32.75 | | BERT(full) | 41.04 | 13.92 | 36.61 | | To QMSum | R-1 | R-2 | R-L | | Lead-5* | 12.84 | 1.69 | 9.17 | | TextRank* | 16.27 | 2.69 | 15.41 | | AdaptSum | 26.41 | 4.67 | 23.80 | | Our_adv | 27.27 | 5.11 | 24.91 | | Our_mi | 26.71 | 4.49 | 24.18 | ation metric (Lin, 2004) including Rouge-1 (R-1), Rouge-2 (R-2), and Rouge-L (R-L) as evaluation metrics. In practice, we use a python wrapper pyrouge to apply the classic Rouge 1.5.5. ## 4.2 Comparison We compare our method with some commonly used baselines and previous state-of-the-art methods designed for low-resource text summarization. There are three types of methods: unsupervised baselines, comparable unsupervised models based on domain transferring or pretraining, and other reference models that are not directly comparable. Unsupervised Baselines Lead-n aims to select the lead sentences in the document as the summaries, and it always plays an important role in the news summarization dataset that heavily relies on the position pattern information such as CNN/DailyMail. We also show the result of two strong unsupervised baselines TextRank (Mihalcea and Tarau, 2004) and LexRank (Erkan and Radev, 2004). Comparable Models AdaptSum (Yu et al., 2021) focuses on one-to-one domain adaption in text summarization. 
It proposes a Source Domain PreTraining (SDPT) strategy that first fine-tunes a pretrained model on the source domain and then ap- To CNN/DM R-1 R-2 R-L Lead* 40.49 17.66 36.75 TextRank* 33.85 13.61 30.14 LexRank* 34.68 12.82 31.12 AdaptSum 37.21 15.07 33.64 Our_adv **38.37 15.81 34.64** Our_mi 38.05 15.74 34.37 BERT(full) 42.83 19.82 39.13 To QMSum R-1 R-2 R-L Lead-5* 12.84 1.69 9.17 TextRank* 16.27 2.69 15.41 AdaptSum **28.28 4.78 25.28** Our_adv 28.01 4.74 24.94 Our_mi 27.63 4.66 25.13 plies it to the target domain. Another research (Fabbri et al., 2020) also proposes a similar method with it and further extends with a data augmentation method. However, this data augmentation method requires the pattern information from the target dataset and is not comparable with our model. Other Reference Models We display the result of BERTSum (Liu and Lapata, 2019) training on the full target dataset, which can be considered as the upper bound of our model. ## 4.3 Experiment Results Zero-shot application We first evaluate the performance of our model in the zero-shot setting in Table 3 and Table 4, where the information of the target dataset is totally unknown. Here, we display the two variants of the model, Our_adv using the adversarial objective and Our_mi adopting the MI minimization objective. Based on the results, we have the following observation. Firstly, Our_adv achieves the best result in most cases. This indicates the effectiveness of context information in the zero-shot setting. Meanwhile, we also observe that Our_mi obtains a lower performance compared to Our_adv. Further investigation of the training process shows that using the MI minimization objective is more difficult to disentangle pattern and context information. We think the reason is that the two types of information are not naturally disentangled and are optimized by the same extractive summarization objectives. In this case, the model requires more clear guidance to achieve the disentanglement. Table 5: The results on CNN/DM when using context/pattern representation. ![6_image_0.png](6_image_0.png) ## Analysis Of Context And Pattern Information To understand the influence of both context and pattern information on the target dataset, we compare the performance of using context representation, using pattern representation, and using both representations in Table 5. Considering the huge gap in the pattern between the two datasets, it is not surprising that using the pattern representation achieves the worst result. Meanwhile, its misleading information also pulls down the results of using both representations. We also display the position distribution of extracted sentences on arXiv using the model trained on CNN/DM in Figure 3. Since CNN/DM is known for its lead bias, the pattern latent space learned on it inevitably tend to select the lead sentences. This trend further dominates the situation when using both representations. As for using context representation alone, the lead bias is relatively weaker. Few-shot application Directly using the pattern information in an unsuitable dataset leads to a decrease in the model performance. However, this does not mean the pattern representation is completely useless. In the few-shot setting, we can obtain some information from the target dataset and fine-tune the pattern latent space. 
To simulate this situation, for each target dataset, we build its fewshot version by randomly taking 50 data samples from its original training set and splitting it into 25 | arXiv to CNN/DM | R-1 | R-2 | R-L | |-------------------|-------|-------|-------| | Both | 37.71 | 15.28 | 33.98 | | Context | 38.37 | 15.81 | 34.64 | | Pattern | 36.65 | 14.39 | 32.95 | | arXiv to CNN/DM | CNN/DM to arXiv | | | | | | |-------------------|-------------------|---------|---------|---------|---------|-------| | Rouge-1 | Rouge-2 | Rouge-L | Rouge-1 | Rouge-2 | Rouge-L | | | BERT | 37.36 | 15.21 | 33.86 | 32.55 | 7.68 | 28.92 | | AdaptSum | 38.21 | 15.91 | 34.60 | 39.12 | 11.25 | 34.78 | | Our_adv | 39.27 | 16.56 | 35.47 | 39.39 | 11.35 | 34.97 | | arXiv to CNNDM | R-1 | R-2 | R-L | |------------------|-------|-------|-------| | Our_adv | 38.37 | 15.81 | 34.64 | | –adv loss | 37.72 | 15.44 | 34.05 | | –aux loss | 37.11 | 14.75 | 33.44 | training data and 25 validation data. Here, despite our proposed model and AdaptSum, we also show the result of directly fine-tuning a BERTSum model on the limited data. In Table 6, the performance of all models is improved with the help of the limited data, while the gap between Our_adv and AdaptSum still exists. This shows our model is capable of selecting the effective pattern information for the target dataset and preserving its advantages on context information. Ablation study We further conduct an ablation study. Firstly, we remove the adversary objectives from our model (–adv loss), which means the model can only learn the disentangled representation by approximating context/pattern features. Then we further remove the multitask objectives (– aux loss). In this case, the main difference between this model and the AdaptSum is that our classifier contains more parameters. Here we compare the result of only using context representations in the zero-shot setting. As shown in Table 7, we find that removing the adversary objectives leads to a clear performance drop. This suggests that using the adversary objectives alone is far enough to disentangle the context and pattern information. We also find that the result of model "–aux loss" is similar to the result of AdaptSum in Table 4, which shows the improvement of our model is not brought by the additional parameters. ## 4.4 Visualization To have a more direct observation, we visualize the context and pattern representations by using the t-SNE algorithm (Van der Maaten and Hinton, 2008) to reduce them to two dimensions in Figure 4. These representations are taken from 1000 ![7_image_0.png](7_image_0.png) randomly sampled examples from CNN/DM using the model trained on arXiv. Each point refers to a context/pattern representation of a sentence from the source document. The figure shows that the context latent space and pattern latent space are well separated into two parts, which supports the effectiveness of our model in disentangling context and pattern information. ## 5 Conclusion In this paper, we propose a novel extractive summarization model that aims to improve the generalization ability in low-resource setting. It disentangles the sentence representation to context and pattern representation and utilize the context information to reduce the influence of domain-specific pattern information during model transferring. 
The experiment suggests the ability of our model in the disentanglement, and it also supports the claim that the context information tends to have better generalization ability facing the dataset from a different domain. In the future, we plan to extend this idea by learning a more generalized context latent space from multiple summarization datasets. ## Limitations Firstly, we adopt two types of representative pattern information, position pattern, and n-gram pattern, but it does not mean they cover all effective pattern information. In this case, the way to efficiently include all types of pattern information is still an important problem. Secondly, we do not put too much effort into investigating the influence of different feature forms (pattern feature and context feature) for the multitask objectives. Thirdly, due to the limitation of time and paper length, we only evaluate our method in three representative domains. Other domains such as review summarization (Reddit (Völske et al., 2017)) and legislation document summarization (BillSum (Kornilova and Eidelman, 2019)) are also worth exploring. ## Ethics Statement Our experimental datasets, CNN/DailyMail, arXiv, and QMSum, are well-established and publicly available. Datasets construction and annotation are consistent with the intellectual property and privacy rights of the original authors. The scientific artifacts we used are available for research with permissive licenses, including ROUGE and Transformers from HuggingFace. The use of these artifacts is consistent with their intended use. The task of our work is a classic NLP task, text summarization. Considering all the datasets are public available, we think there are no potential risks for this work. ## Acknowledgements The work described in this paper was supported by Research Grants Council of Hong Kong (PolyU/15203617 and PolyU/5210919), National Natural Science Foundation of China (61672445, 62076212, 62106165). ## References Sanghwan Bae, Taeuk Kim, Jihoon Kim, and Sanggoo Lee. 2019. Summary level training of sentence rewriting for abstractive summarization. *arXiv* preprint arXiv:1909.08752. Pengyu Cheng, Martin Renqiang Min, Dinghan Shen, Christopher Malon, Yizhe Zhang, Yitong Li, and Lawrence Carin. 2020. Improving disentangled text representation learning with information-theoretic guidance. *arXiv preprint arXiv:2006.00693*. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. arXiv preprint arXiv:1804.05685. Pierre Colombo, Chloe Clavel, and Pablo Piantanida. 2021. A novel estimator of mutual information for learning to disentangle textual representations. arXiv preprint arXiv:2105.02685. Pierre Colombo, Guillaume Staerman, Nathan Noiry, and Pablo Piantanida. 2022. Learning disentangled textual representations via statistical measures of similarity. *arXiv preprint arXiv:2205.03589*. Günes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. *Journal of artificial intelligence research*, 22:457–479. Alexander R Fabbri, Simeng Han, Haoyuan Li, Haoran Li, Marjan Ghazvininejad, Shafiq Joty, Dragomir Radev, and Yashar Mehdad. 2020. Improving zero and few-shot abstractive summarization with intermediate fine-tuning and data augmentation. *arXiv* preprint arXiv:2010.12836. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. 
Toward controlled generation of text. In International conference on machine learning, pages 1587–1596. PMLR. Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2018. Disentangled representation learning for non-parallel text style transfer. *arXiv* preprint arXiv:1808.04339. Anastassia Kornilova and Vlad Eidelman. 2019. Billsum: A corpus for automatic summarization of us legislation. *arXiv preprint arXiv:1910.00523*. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. *arXiv preprint* arXiv:1908.08345. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In *Proceedings of the 2004 conference on empirical methods in natural language* processing, pages 404–411. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In *Thirty-First AAAI Conference on Artificial* Intelligence. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. *arXiv preprint* arXiv:1802.08636. Sungho Park, Sunhee Hwang, Dohyung Kim, and Hyeran Byun. 2021. Learning disentangled representation for fair facial attribute classification via fairnessaware information alignment. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 35, pages 2403–2411. Nikita Salkar, Thomas Trikalinos, Byron C Wallace, and Ani Nenkova. 2022. Self-repetition in abstractive neural summarizers. *arXiv preprint* arXiv:2210.08145. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. Advances in neural information processing systems, 30. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. *Journal of machine* learning research, 9(11). Jake Vasilakes, Chrysoula Zerva, Makoto Miwa, and Sophia Ananiadou. 2022. Learning disentangled representations of negation and uncertainty. arXiv preprint arXiv:2204.00511. Michael Völske, Martin Potthast, Shahbaz Syed, and Benno Stein. 2017. Tl; dr: Mining reddit to learn automatic summarization. In *Proceedings of the Workshop on New Frontiers in Summarization*, pages 59– 63. Shaojuan Wu, Xiaowang Zhang, Deyi Xiong, Shizhan Chen, Zhiqiang Zhuang, Zhiyong Feng, et al. 2022. Learning disentangled semantic representations for zero-shot cross-lingual transfer in multilingual machine reading comprehension. arXiv preprint arXiv:2204.00996. Tiezheng Yu, Zihan Liu, and Pascale Fung. 2021. Adaptsum: Towards low-resource domain adaptation for abstractive summarization. *arXiv preprint* arXiv:2103.11332. Xingxing Zhang, Mirella Lapata, Furu Wei, and Ming Zhou. 2018. Neural latent extractive document summarization. *arXiv preprint arXiv:1808.07187*. Xingxing Zhang, Furu Wei, and Ming Zhou. 2019. Hibert: Document level pre-training of hierarchical bidirectional transformers for document summarization. arXiv preprint arXiv:1905.06566. Lulu Zhao, Fujia Zheng, Weihao Zeng, Keqing He, Weiran Xu, Huixing Jiang, Wei Wu, and Yanan Wu. 2022. Domain-oriented prefix-tuning: Towards efficient and generalizable fine-tuning for zero-shot dialogue summarization. 
*arXiv preprint* arXiv:2204.04362. Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. arXiv preprint arXiv:2004.08795. Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021. Qmsum: A new benchmark for query-based multidomain meeting summarization. *arXiv preprint* arXiv:2104.05938. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. *arXiv preprint arXiv:1807.02305*. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitation ✓ A2. Did you discuss any potential risks of your work? Section Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 and the Abstract ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1 the used scientific artifacts are public datasets ✓ B1. Did you cite the creators of artifacts you used? Section 4.1 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. the used scientific artifacts are public datasets ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4.1 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section Ethics Statement, all the data used in this paper are from public datasets. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.1 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.1, Section 3.6 ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? The experiments are conducted based on single run. ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhong-etal-2023-disentangling
Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers
https://aclanthology.org/2023.findings-acl.480
This paper presents ReasonFormer, a unified reasoning framework for mirroring the modular and compositional reasoning process of humans in complex decision-making. Inspired by dual-process theory in cognitive science, the representation module (automatic thinking) and reasoning modules (controlled thinking) are decoupled to capture different levels of cognition. Upon the top of the representation module, the pre-trained reasoning modules are modular and professional in specific and fundamental reasoning skills (e.g., logic, simple QA, etc). To mimic the controlled compositional thinking process, different reasoning modules are dynamically activated and composed in both parallel and cascaded manners to control what reasoning skills are activated and how deep the reasoning process will be reached to solve the current problems. The unified reasoning framework solves multiple tasks with a single model, and is trained and inferred in an end-to-end manner. Evaluated on 11 datasets requiring different reasoning skills and complexity, ReasonFormer demonstrates substantial performance boosts, revealing the compositional reasoning ability. Few-shot experiments exhibit better generalization ability by learning to compose pre-trained skills for new tasks with limited data, and decoupling the representation module and the reasoning modules. Further analysis shows the modularity of reasoning modules as different tasks activate distinct reasoning skills at different reasoning depths.
# Disentangling Reasoning Capabilities From Language Models With Compositional Reasoning Transformers Wanjun Zhong1∗ , Tingting Ma2∗, Jiahai Wang1**, Jian Yin**1, Tiejun Zhao2, Chin-Yew Lin3 **and Nan Duan**3 1 The School of Computer Science and Engineering, Sun Yat-sen University Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, P.R.China 2 Harbin Institute of Technology 3 Microsoft zhongwj25@mail2.sysu.edu.cn, hittingtingma@gmail.com {wangjiah,issjyin}@mail.sysu.edu.cn, tjzhao@hit.edu.cn {cyl, nanduan}@microsoft.com; ## Abstract This paper presents ReasonFormer, a unified reasoning framework for mirroring the modular and compositional reasoning process of humans in complex decision-making. Inspired by dual-process theory in cognitive science, the representation module (automatic thinking) and reasoning modules (controlled thinking) are decoupled to capture different levels of cognition. Upon the top of the representation module, the pre-trained reasoning modules are modular and professional in specific and fundamental reasoning skills (e.g., logic, simple QA, etc). To mimic the controlled compositional thinking process, different reasoning modules are dynamically activated and composed in both parallel and cascaded manners to control what reasoning skills are activated and how deep the reasoning process will be reached to solve the current problems. The unified reasoning framework solves multiple tasks with a single model, and is trained and inferred in an end-to-end manner. Evaluated on 11 datasets requiring different reasoning skills and complexity, ReasonFormer demonstrates substantial performance boosts, revealing the compositional reasoning ability. Few-shot experiments exhibit better generalization ability by learning to compose pre-trained skills for new tasks with limited data, and decoupling the representation module and the reasoning modules. Further analysis shows the modularity of reasoning modules as different tasks activate distinct reasoning skills at different reasoning depths. 1 ## 1 Introduction Prevailing language models (LMs) (Devlin et al., 2018; Brown et al., 2020) demonstrate impressive performance in natural language processing tasks, Question: What cause car accident? Semantic Understanding ![0_image_0.png](0_image_0.png) (Intuitive) System 1 (Controlled) System 2 Step 1: Memorizing Fact Knowledge Driving relates to {speed, attention, rule following} Alcohol hurts attention … Step 2: Logical Deduction alcohol → affect attention → driving accident … Step 3: Answering Question alcohol, over-speeding, distraction … Figure 1: Compositional reasoning process of humans in complex decision-making. Humans solve the problems by cascaded executions of fundamental skills. and have ushered in a new trend in AI research. Despite the emerging fervor, the homogeneous LMs relying on a single call of the model are less modular and are hard to explicitly model the complex reasoning process (Helwe et al., 2021) like humans. In the dual-process theory (Daniel, 2017) in cognitive psychology, there are two cognitive systems interacted to form a whole reasoning process. System 1 (automatic thinking) generates intuitive patterns of ideas, and System 2 (controlled thinking) constructs reasoning in an orderly logical series of compositional reasoning processes. Besides, in the process of System 2, different functional brain areas could be modular and interact with each other. System 2 can decide how to compose different reasoning skills and when to stop thinking. 
As the example shown in Fig. 1, when finding the cause of a car accident, humans intuitively comprehend the question (System 1), and then conduct compositional reasoning (System 2: recalling fact → logical deduction → answering question). We would like to incorporate this mechanism into AI models in decision-making, and make the following assumptions: (1) the representation module (System 1) and reasoning module (System 2) can be decoupled and (2) the "complicated" reasoning process can be disentangled into multi-step executions of compositional "fundamental" reasoning modules, whose compositionality can be learnt with limited data. Also, the "fundamental" nature of basic reasoning skills allows them to have rich training instances for reliable skill pre-training. Under these motivations, this paper proposes the modular and compositional reasoning framework - ReasonFormer, to mirror human's compositional reasoning process, with the following characteristics: (1) the representation module and reasoning modules are decoupled; (2) reasoning modules are modular and professional in fundamental reasoning skills; (3) reasoning modules are compositional in parallel and cascaded manner, to dynamically decide the activated reasoning skills and the reasoning complexity; (4) the general-purpose reasoning framework is end-to-end and unified in solving multiple tasks with one model. Specifically, the representation module learns contextual representations of problems. Upon the top of the it, there are cascaded reasoning modules to perform compositional multi-step reasoning. The reasoning modules are pre-trained to expert in specific reasoning skills (e.g., logic, QA, fact, etc.). These pre-trained reasoning skills are considered relatively fundamental and have rich resources. Two additional blocks complete the whole framework: the reasoning router and the reasoning adapter. The reasoning router decides which reasoning skills are activated in each reasoning step, and when to stop the reasoning process. The adapter adapts the reused reasoning modules to different steps of the reasoning process. We comprehensively evaluate the framework on 11 datasets emphasizing different reasoning skills and complexity, and highlight the following findings: (1) Substantial performance boosts demonstrate models' harvest of compositional reasoning ability, and both the reasoning-centric pre-training and reasoning adapter bring compounding performance gains. (2) Results of few-shot experiments show that specialized modules enables better generalization by learning to compose pre-trained skills for low-resource tasks, and decoupling of representation module and reasoning modules. (3) Further analysis reveals the distinct reasoning skills required for different tasks at different reasoning depths, shoring up the modularity of reasoning modules. ## 2 Reasoning Skills Formulation The compositional reasoning process of LMs' relies on the pre-training of several fundamental reasoning skills and their compositionality. Hence, the selection of skills is critical. Selection Principles. There are two major principles in selecting skills: (1) **Fundamental**: Complex problems can be decomposed and solved by simpler basic skills. So the basic skills should be more fundamental, well-defined, and can be covered in the required skill set of as many tasks as possible; (2) **Resourceful**: Reliable skill pre-training requires large-scale pre-training data. 
However, in real-world scenarios, annotated data is expensive to obtain for most reasoning tasks. So the selected skills should either already have rich resources or allow data to be collected in a self- or semi-supervised manner.

Basic Skills Selection. Humans typically solve complex problems with fundamental skills, such as understanding key information in events (e.g., entities and their types), recalling related facts, understanding causal relations between events, and extracting answers to questions. This motivates us to select the following basic skills: **logic ability**, to logically deduce the cause or consequence of events; **simple question answering (QA)**, to understand the context and answer simple questions; **named entity recognition (NER)**, to identify important entities in the context; **natural language inference (NLI)**, to identify the semantic relation between two sentences; and **factual knowledge**, to memorize commonsense knowledge and understand daily events. There is an additional **general** skill to learn the knowledge commonly shared across the selected skills. We keep this setting in our paper as these skills are relatively well defined and resourceful. We adopt self-supervised methods to construct the pre-training corpora for {logic ability, factual knowledge, NER}, a semi-supervised method to construct the pre-training corpus for *simple QA*, and large-scale supervised data for NLI. Further details are given in § 4.2 and examples are given in Appendix A.

## 3 ReasonFormer Framework

As shown in Fig. 2, the general-purpose reasoning framework is built on an encoder-decoder architecture to process multiple tasks (i.e., all pre-training tasks and downstream tasks) with a unified model, where all tasks are tackled as unified text-to-text generation tasks. We first reformat all tasks into the same format using hard prompts (Sanh et al., 2021). For example, the question-answering task input can be prompted with the template "The question is {Question}. Please give the answer:", and the expected output is the answer text. Given the prompted task inputs, the modular and compositional framework consists of two components in its encoder: the representation module (System 1) and the reasoning modules (System 2). The **representation module** (§ 3.1) captures the intuitive understanding of problems by computing initial contextual representations. On top of the representation module, several pre-trained **reasoning modules** (§ 3.2) with different reasoning skills interact to form a compositional reasoning process. To organize the reasoning process, **reasoning routers** (§ 3.2.2) decide which skills are activated (in parallel) and when to stop the (cascaded) reasoning process.

## 3.1 Representation Module

Similar to the perceptive function of System 1, the representation module targets basic contextual understanding and builds the foundation of the subsequent reasoning process. As LMs exhibit impressive ability in contextual understanding, we build the representation module with cascaded Transformer layers. Given the tokenized input X with length m, the initial representations learned by the representation module are denoted as:

$$\mathbf{H}^{0}=\{\mathbf{h}_{[\mathrm{CLS}]}^{0},\mathbf{h}_{1}^{0},\mathbf{h}_{2}^{0},\ldots,\mathbf{h}_{m}^{0}\}\qquad\quad(1)$$

where [CLS] is a special token.
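To make the representation module concrete, the following is a minimal PyTorch sketch of how a stack of lower Transformer layers can produce the initial representations H^0 of Eq. (1), with a [CLS] vector prepended to the token embeddings. This is an illustrative schematic rather than the authors' implementation: the plain `nn.TransformerEncoderLayer`, the layer count, and the vocabulary and hidden sizes are stand-ins (the actual model initializes these layers from a pre-trained T5 encoder, as described later in § 4.3.1).

```python
import torch
import torch.nn as nn

class RepresentationModule(nn.Module):
    """System 1: lower Transformer layers that produce H^0 (Eq. 1)."""

    def __init__(self, vocab_size=32128, d_model=768, n_heads=12, n_layers=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.cls = nn.Parameter(torch.randn(1, 1, d_model))   # learned [CLS] vector
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=3072,
                                           batch_first=True)
        self.layers = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, input_ids):                       # (batch, m)
        h = self.embed(input_ids)                       # token embeddings
        cls = self.cls.expand(h.size(0), -1, -1)        # prepend [CLS] to every instance
        h = torch.cat([cls, h], dim=1)                  # (batch, m + 1, d_model)
        return self.layers(h)                           # H^0 = {h_[CLS], h_1, ..., h_m}

h0 = RepresentationModule()(torch.randint(0, 32128, (2, 16)))
print(h0.shape)   # torch.Size([2, 17, 768])
```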
## 3.2 Reasoning Modules To simulate the cognitive process (System 2) formed by controlled interaction between various functional areas in human brains, the reasoning modules are modular and compositional. Reasoning modules (RMs) learn different reasoning skills specified during pre-training, and are automatically composed during downstream adaptation (§ 3.3) with reasoning router (§ 3.2.2). Compositionality is not only at the parallel level (different skills), but also at the cascaded level (multi-step reasoning) Since different reasoning steps intuitively model different levels of information, there are additional reasoning adapters to adapt the reused modules to different reasoning steps. ## 3.2.1 Reasoning Modules Architecture Each reasoning module is implemented by several Transformer layers. As shown in Fig.2(b), the shared reasoning modules with the same skill at different reasoning depths have shared parameters (excluding the reasoning adapter). For example, Fact modules at steps {0, 1*, ..., n*} share major parameters. The output from the last reasoning step will be recursively taken as the inputs of the reused reasoning modules with step-specific adapters. Reasoning Adapter. To adapt the reused reasoning module to different depths of the reasoning process, we add step-specific reasoning adapters to the reasoning modules. Inspired by Houlsby et al. (2019) on domain adaptation, as shown in Fig. 2, we add two reasoning adapters following the multi-head attention layer and FFN layer in the Transformer layer of reasoning modules. Besides, the reasoning adapters for different skills and different reasoning depths are non-shared. ## 3.2.2 Reasoning Router To compose the reasoning process, the reasoning router is critical in deciding which skills are activated per step, and how many reasoning steps are required for problem-solving. As the example in Fig. 1, problem-solving needs to recall facts, and make logical deductions, then answer questions. Therefore, the activated skills and reasoning depths may varied for every instance. At the parallel level of each step, the **skill router** calculates activating scores for reasoning modules. After each reasoning step, the **stop gate** decides whether executed reasoning steps are sufficient in problem-solving through a stop gating mechanism. Unlike Mixture-of-Experts (MoE) (Shazeer et al., 2017) that uses token-wise routing, we adopt an instance-level routing strategy, which can capture more comprehensive semantics of problems. Skill Router. Since the i th reasoning step has n reasoning modules: {R1, · · · , Rn} and a skill router S i, the output Hi of the i th reasoning step can be calculated by router-weighted averaged outputs from the k activated reasoning modules: $$\mathbf{H}^{i}=\sum_{j=1}^{k}S^{i}(\tilde{\mathbf{H}}^{i-1})_{j}R_{j}(\tilde{\mathbf{H}}^{i-1})\qquad(2)$$ where S i(H˜ i−1)j (scalar weight) and Rj (H˜ i−1) (updated hidden vectors) are the outputs from the router and the j th reasoning module, respectively. Since deciding the skills is a non-trivial task, we adopt a relatively complex router for deeper understanding. We use one Transformer layer T to project the original output for routing weight calculation. 
Then, we use an FFN layer followed by a Softmax function for weighted score calculation: $$S^{i}(\tilde{\mathbf{H}}^{i-1})=\mathrm{Softmax}(\mathrm{FFN}(T(\tilde{\mathbf{H}}^{i-1})))\quad(3)$$ Afterwards, we **sparsely activate** (Shazeer et al., 2017) k reasoning modules with top-k skill routing scores at each reasoning step. The router training objectives are detailed in § 3.3. Stop Gate. After each reasoning step, the stop gate decides whether the current reasoning depth is sufficient to solve the problem. Taking Hias the input, the stop gate uses a residual gating mechanism Gi*stop* to control the information flow from executed reasoning steps and calculate the final output H˜ ifor the i th reasoning step by: $$\tilde{\mathbf{H}}^{i}=\mathbf{H}^{i-1}+G_{s t o p}^{i}(\mathbf{H}^{i})\qquad\qquad(4)$$ An FFN layer is used as the stop gate Gi*stop*. When the reasoning process is sufficient, the following-up process will be softly stopped by Gi*stop*. ## 3.3 Pre-Training And Adaptation The unified model enables multi-task learning for both pre-training and downstream tasks. The major difference between pre-training and adaptation is that only in the pre-training stage we have the supervision for the activated skills. Pre-training. Before reasoning pre-training, the model weights of ReasonFormer are initialized with pre-trained weights from T5 (Raffel et al., 2020). The details of model initialization and pretraining corpus collection are introduced in § 4.3.1 and § 4.2, respectively. Since model acknowledges which skill it is learning, we add **skill routing loss** Lr in addition to the teacher-forcing loss, to guide the routers in activating skills. For example, if the current instance focuses on logic ability, it should activate {*logic ability, general*} skills. Lr can be set as the cross-entropy loss for the multi-skill classification, where the activated skill has label 1 and 0 otherwise. During pre-training, all the reasoning steps activate the same skill for one instance. Adaptation. During downstream adaptation, we have no prior knowledge about the required skills for different tasks, so we expect the model can automatically learn which skills are essential for each specific task. Therefore, we adopt standard teacher-forcing loss for generative training. ## 4 Experiment Setup 4.1 Datasets To verify the effectiveness of ReasonFormer, we extensively conduct experiments on 11 datasets emphasizing different reasoning types and complexity. Specifically, ReClor (Yu et al., 2020) emphasizes on logical reasoning. Commonsense QA (CSQA) (Talmor et al., 2018), ARC (Clark et al., 2018), PIQA (Bisk et al., 2020) and HellaSwag (Zellers et al., 2019) stress commonsense knowledge. Abductive NLI (aNLI) (Bhagavatula et al., 2019) is a natural language inference dataset. HotpotQA (Yang et al., 2018a) and WikiHop (Welbl et al., 2018) focus on multi-hop question answering. MuTual (Cui et al., 2020), DREAM (Sun et al., 2019) focus on reasoning over dialogue. RACE (Lai et al., 2017) is a general QA dataset. These datasets are related to the fundamental reasoning skills (§ 2) and fit nicely for analyzing the compositional reasoning process modeled by ReasonFormer. During **Evaluation**, the Hotpot QA adopts *Exact Match (EM)* as the metric, while the rest tasks use *accuracy* as the metric. The answer for multichoice QA and classification tasks are selected by the highest log-likelihood scores of options. 
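Since answers for multi-choice QA and classification are selected by the highest log-likelihood among the candidate options, that scoring step can be sketched as below with a Hugging Face seq2seq checkpoint. The checkpoint name and the lack of length normalization are illustrative assumptions, not the exact setup used in the paper; the prompt template follows the hard-prompt example of § 3, and the question and options come from the Commonsense QA case study discussed later.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-base")            # placeholder checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base").eval()

def option_loglikelihood(question: str, option: str) -> float:
    """Summed token log-probability of `option` given the prompted question."""
    enc = tok(f"The question is {question} Please give the answer:",
              return_tensors="pt")
    labels = tok(option, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    # `out.loss` is the mean token-level cross-entropy over the labels, so negate it
    # and rescale by the option length to obtain a summed log-likelihood.
    return -out.loss.item() * labels.size(1)

options = ["Doctor", "Bookstore", "Market", "Train station", "Mortuary"]
question = "Where would you find magazines alongside other works?"
scores = {o: option_loglikelihood(question, o) for o in options}
print(max(scores, key=scores.get))   # option with the highest log-likelihood
```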
## 4.2 Pre-Training Corpus To reduce the manual efforts in data collection, we mainly select self(semi)-supervised pre-training corpus construction methods. To improve LMs' **logic** ability, we adopt the self-supervised logical pre-training corpus built by LogiGAN (Pi et al., 2022), which uses logical indicators to identify logical phenomena in a general text corpus. For **QA-centric** pre-training, we adopt the semi-supervised pre-training corpus construction method from ProQA (Lewis et al., 2021; Zhong et al., 2022a), which adopts a generationfiltering pipeline to build QA-centric corpus. To help the model in **identifying entities** from text, we use the self-supervised NER corpus (Chen et al., 2022) built from Wikidata and Wikipedia anchor link. To learn **factual knowledge**, we use Wikidata as a commonsense knowledge base to construct self-supervised pre-training corpus. Specifically, we sample 1 million fact triples from Wikidata and construct the KG completion task (Moiseev et al., 2022) by recovering the masked tailed entities with the head entities and relations given as inputs. Furthermore, since **natural language inference** task already have rich supervised data, we directly use MNLI (Williams et al., 2018) and SNLI (Bowman et al., 2015) datasets as the pre-training corpus. Finally, 1 million instances are collected for each reasoning skill, and there are 5 millions pre-training instances in total for 5 reasoning skills. The examples and prompts for constructing inputs/outputs of the pre-training corpus are given in Appendix A. ## 4.3 Models 4.3.1 Model Initialization We adopt encoder-decoder framework. In the encoder, the representation module has 9 Transformer layers, each shared reasoning module has 3 Transformer layers and the maximum reasoning depths is 3. We initialize the major model parameters from pre-trained T5base (Raffel et al., 2020). Thus, the representation module is initialized by the 1 th → 9 th layers of T5 encoder, and the reasoning module is initialized by 9 th → 12th layers of T5 encoder. The decoder is the same with T5. ## 4.3.2 Compared Methods The major focuses of the experiment are to explore the effectiveness of ReasonFormer, and verify our hypotheses that complex problems can be disentangled and solved by compositional reasoning modules, and the decoupling of representation module and reasoning modules. We compare ReasonFormer with two series of methods. T5 series. (1) **Vanilla T5** is the released T5 model (Raffel et al., 2020) (*google/t5-v1_1-base*) pre-trained with C4 corpus excluding other supervised data; (2) **Reasoning Pre-Trained T5 (RPTT5)** is the T5 model continually pre-trained with our reasoning-centric pre-training corpus (§ 4.2). MoRM series. Inspired by Mixture-of-Experts (MoE) methods (Shazeer et al., 2017; Lepikhin et al., 2020a), we develop Mixture-of-Reasoning Modules (MoRM) methods for comparison. Unlike MoE that builds parallel experts in the FFN layer of Transformer Layers, MoRM builds parallel reasoning modules (RMs) on the top of the representation module, and sparsely activate these RMs. 
Specifically, after initialized with T5, the last | T5 Series | MoRM | ReasonFormer | | | | | | | |-------------------------|-------------|----------------|--------|-------------|---------|---------|------|------| | Datasets | Reasoning | T5 | RPT-T5 | w/o RPT (S) | RPT (S) | RPT (F) | S | F | | Activated Paramters (M) | 248 | 248 | 272 | 272 | 357 | 294 | 407 | | | ReClor | Logic | 35.2 | 36.8 | 35.4 | 36.8 | 35.4 | 39 | 39.4 | | ARC | Commonsense | 31.4 | 32.7 | 25.4 | 34.1 | 31.1 | 35.1 | 34.1 | | CSQA | Commonsense | 56.5 | 65.1 | 57.2 | 63 | 64.7 | 66.9 | 68.2 | | RACE | General | 63.8 | 67.4 | 66.4 | 68.8 | 70.9 | 72.5 | 73.5 | | DREAM | General | 59.3 | 64.5 | 56.6 | 61.8 | 67.7 | 70.5 | 70.5 | | aNLI | NLI | 66.9 | 66.3 | 68.2 | 68.8 | 69.8 | 69.6 | 69.5 | | MuTual | Dialog | 67.3 | 70.2 | 66.8 | 69.5 | 70.5 | 72.2 | 72.5 | | WikiHop | MultiHop | 63.6 | 66.1 | 63.5 | 66.1 | 66.9 | 67.1 | 67.4 | | HotpotQA | MultiHop | 61.1 | 63.3 | 63.1 | 63.3 | 63.8 | 65.2 | 65.5 | | Hellaswag | Commonsense | 31.5 | 33.7 | 34.2 | 37.9 | 43 | 53.9 | 54.9 | | PIQA | Commonsense | 61.4 | 63.3 | 64.6 | 65.4 | 67.6 | 67.5 | 67.9 | | Avg. | 54.3 | 57.2 | 54.6 | 57.7 | 59.2 | 61.8 | 62.1 | | 3 Transformer layers in the encoder are duplicated parallelly for Ns (numbers of skills) times, and the outputs of them are weighted average by the routing scores of the activated RMs. It increases the model size in the similar way with ReasonFormer, so it can verify whether the improvements are brought by the increased parameters. Besides, the major differences between ReasonFormer and MoRM are (1) MoRM involves no cascaded reasoning steps (depth=1); (2) Like MoE, RMs in MoRM are jointly trained for all instances without skill routing loss (§ 3.3), emphasizing no expertise of RMs. We also report the results of MoRM after reasoning-centric pre-training (RPT-MoRM). ## 5 Experiment Analysis 5.1 Main Results As presented in Table 1, ReasonFormer outperform T5 series and MoRM series across all tasks emphasizing the wide scope of different reasoning skills. Thus, we have the following findings: ReasonFormer **> MoRM & T5:** ReasonFormer surpasses other methods (even with more activated parameters) by a large margin, giving evidence to our primary hypothesis that the expertise of reasoning modules and the cascaded compositional reasoning process essentially help the model in solving complex reasoning problems. RPT-T5 > T5: The substantial performance boosts brought by RPT demonstrate that reasoningtargeted pre-training is essential in injecting various reasoning abilities into LMs. Sparse v.s. Full: Sparse activation of RMs leads to slightly reduced but comparable performance compared with full activation. It suggests that although activating more skills is beneficial, the most essential RM still plays the key role in problemsolving. The modularity of RMs can reduce the computation burden while keeping performance. These positive findings manifest that ReasonFormer can model compositional reasoning and verify our primary hypothesis that the complex problem can be decomposed and well solved with pre-trained basic skills, and the representation module can be decoupled with the reasoning modules. ## 5.2 Ablation Study We explore the truly functional components of ReasonFormer through ablation studies on 7 datasets. We evaluate the effectiveness of the following components: (1) reasoning pre-training; (2) cascaded reasoning mechanism; (3) expertise of reasoning skills (skill gating loss) and (4) reasoning adapter. 
Reasoning Pre-training. We assume that the first factor contributing to the improvements is the multi-task reasoning-centric pre-training. Since vanilla LMs mainly focus on learning contextual semantics, and don't emphasize higher-level reasoning ability (Pi et al., 2022; Helwe et al., 2021), it is intuitive that reasoning-driven pre-training can enhance the model in solving complex problems. Results in Table 2 suggest that the ablation of pre-training from all models leads to a substantial performance drop, showing the importance of reasoning-centric pre-training in helping the reasoning modules to learn fundamental skills. Cascaded Reasoning Mechanism. The second hypothesis is that the cascaded reasoning mechanism facilitates problem-solving with different | Modules | Models/Dataset | ARC | CSQA | DREAM | WikiHop | HotpotQA | Hellaswag | PIQA | Avg. | |----------------|------------------|-------|--------|---------|-----------|------------|-------------|--------|--------| | ReasonFormer | 35.1 | 66.9 | 70.5 | 67.1 | 65.2 | 53.9 | 67.5 | 60.9 | | | Cascaded (S) | w/o RPT | 24.1 | 56.9 | 59.5 | 64.2 | 64.4 | 34.7 | 65.6 | 52.8 | | w/o adapter | 33.1 | 64.2 | 68.4 | 66.8 | 64.8 | 47.1 | 67.4 | 58.8 | | | ReasonFormer | 34.8 | 63.7 | 65.9 | 66.6 | 63.9 | 39.7 | 66.3 | 57.3 | | | Single (S) | w/o RPT | 25.4 | 57.3 | 56.7 | 63.6 | 63.1 | 34.3 | 64.6 | 52.1 | | w/o modularity | 34.1 | 63.1 | 61.8 | 66.1 | 63.4 | 37.9 | 65.4 | 55.9 | | | Models | Freezed | #Tuned | ReClor | CSQA | RACE | ARC | MuTual | WikiHop | Avg. | |--------------|-----------|----------|----------|--------|--------|-------|----------|-----------|--------| | Modules | Para. (M) | Acc | Acc | Acc | Acc | Acc | Acc | | | | T5 | no | 248 | 28.2 | 23.2 | 26.2 | 25.1 | 32.6 | 18.3 | 25.6 | | no | 294 | 29.0 | 39.2 | 29.2 | 30.1 | 40.3 | 26.4 | 32.4 | | | RM | 251 | 29.4 | 39.1 | 29.2 | 31.1 | 38.7 | 26.9 | 32.4 | | | ReasonFormer | rep. | 230 | 29.2 | 38.9 | 29.2 | 30.1 | 38.0 | 26.5 | 32.0 | | RM+rep. | 188 | 29.2 | 37.8 | 28.4 | 28.8 | 31.6 | 25.4 | 30.2 | | Table 3: Few-shot experiments after freezing different modules. The representation module is abbreviated as rep. complexity and composition orders. **Single** is an ablated version of ReasonFormer in which the reasoning modules are not cascaded horizontally (depth=1) and the adapter is also eliminated. Comparison between performances of Cascaded and Single version of ReasonFormer (Line 1 v.s. Line 4) demonstrates that the cascaded reasoning mechanism brings notable improvements and reveals the effectiveness of multi-step reasoning process. Expertise of RMs. We assume that modularity and expertise of reasoning modules enables them to be flexibly composed. We ablate it from the Single version of ReasonFormer (Line 6) by pretraining all the RMs jointly without skill routing loss (§ 3.3) using the whole pre-training corpus. The apparent performance drop suggests that the expertise of RMs enables the model to discriminate the functionality of various skills and selectively compose them to form a whole reasoning process. Reasoning Adapter. The reasoning adapters adapt the shared RMs to different reasoning steps. It is intuitively important as different levels of cognition focus on the information at different granularity. From Table 2, eliminating the reasoning adapter (Line 3) from ReasonFormer (Cascaded) harms the overall performance, testifying the distinct mechanisms at different levels of reasoning and the importance of reasoning-centric adaptation. 
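To tie the ablated components together, the sketch below shows one reasoning step that combines shared skill modules, a top-k skill router (Eqs. 2-3), step-specific bottleneck adapters, and the residual stop gate (Eq. 4). It is a simplified schematic with toy layer choices and sizes, not the released implementation; in particular, it computes every module and masks the non-top-k outputs to zero, whereas a truly sparse implementation would skip the inactive modules to save computation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Step-specific bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, d_model=768, bottleneck=256):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, h):
        return h + self.up(F.relu(self.down(h)))

class ReasoningStep(nn.Module):
    """One cascaded step: route the previous hidden states through k skill modules."""
    def __init__(self, skill_modules, d_model=768, top_k=2):
        super().__init__()
        self.skills = nn.ModuleList(skill_modules)                    # shared across steps
        self.adapters = nn.ModuleList([Adapter(d_model) for _ in skill_modules])
        self.router_enc = nn.TransformerEncoderLayer(d_model, 12, batch_first=True)
        self.router_ffn = nn.Linear(d_model, len(skill_modules))
        self.stop_gate = nn.Linear(d_model, d_model)
        self.top_k = top_k

    def forward(self, h_prev):                                        # (batch, seq, d_model)
        # Instance-level routing from the [CLS] position (Eq. 3).
        scores = F.softmax(self.router_ffn(self.router_enc(h_prev)[:, 0]), dim=-1)
        # Sparse activation: keep only the top-k skills per instance (here via masking).
        mask = torch.zeros_like(scores)
        mask.scatter_(-1, scores.topk(self.top_k, dim=-1).indices, 1.0)
        scores = scores * mask                                        # (batch, n_skills)
        # Router-weighted sum of adapted reasoning-module outputs (Eq. 2).
        outs = torch.stack([adapt(skill(h_prev))
                            for skill, adapt in zip(self.skills, self.adapters)], dim=1)
        h_step = (scores[:, :, None, None] * outs).sum(dim=1)
        # Residual stop gate (Eq. 4) softly controls how much this step contributes.
        return h_prev + self.stop_gate(h_step)

# Shared skill modules, reused at every step with step-specific adapters and routers.
skills = [nn.TransformerEncoderLayer(768, 12, batch_first=True) for _ in range(6)]
steps = nn.ModuleList([ReasoningStep(skills) for _ in range(3)])      # reasoning depth 3
h = torch.randn(2, 17, 768)                                           # e.g., H^0 from System 1
for step in steps:
    h = step(h)
print(h.shape)                                                        # torch.Size([2, 17, 768])
```

Under this scheme, the set of weighted skills and the effective contribution of each step are decided per instance by the routers and the stop gate, which is exactly what the ablations above isolate.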
## 5.3 Low-Resource Experiments

It is interesting to know whether the fundamental skills of RMs can be easily composed to solve new tasks with limited training data, and whether the representation module and RMs can be decoupled during adaptation. If so, the model's generalization ability will be greatly enhanced through easy composition of pre-trained skills. Under these motivations, we conduct few-shot experiments. We first examine the generalization of ReasonFormer, and then examine the decoupling of modules in ReasonFormer by freezing different modules during learning. We **freeze the RMs** (Line 3) to test whether the skills can be directly reused without further fine-tuning. Then we **freeze the representation module** (Line 4) to verify the decoupling of representation and RMs.

From Table 3, we highlight the following findings. (1) ReasonFormer outperforms T5, showing that the generalization ability of ReasonFormer is enhanced by reasoning pre-training and explicit modeling of the compositional reasoning process. (2) Freezing the RMs (Line 3) achieves comparable and even slightly better performance than the fully tuned version, demonstrating that the learned skills can be composed with limited training data without further tuning the RMs. (3) Freezing the representation module (Line 4) also leads to comparable performance, proving that the representation module and RMs can be decoupled during adaptation. This suggests that it is feasible to reduce the computation burden of few-shot adaptation by freezing the well-trained representation module and tuning only the RMs, which is especially efficient when the representation module (e.g., a gigantic LM) is too large to tune. (4) Freezing both modules (Line 5) hurts performance, showing that model adaptation to the data distribution of specific tasks is still essential.

[Figure 3: Case study showing example questions and the top-2 activated reasoning skills at each reasoning step for Commonsense QA, aNLI, and Hotpot QA.]
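In code, the freezing configurations compared in Table 3 reduce to toggling `requires_grad` on the two parameter groups before few-shot fine-tuning. The sketch below uses a toy stand-in model; the attribute names `representation_module` and `reasoning_modules` are hypothetical and would need to match the actual model definition.

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Stand-in exposing the two parameter groups compared in Table 3 (hypothetical names)."""
    def __init__(self):
        super().__init__()
        self.representation_module = nn.Linear(768, 768)
        self.reasoning_modules = nn.ModuleList([nn.Linear(768, 768) for _ in range(6)])

def configure_freezing(model, freeze_representation=False, freeze_reasoning=False):
    for p in model.representation_module.parameters():
        p.requires_grad = not freeze_representation
    for p in model.reasoning_modules.parameters():
        p.requires_grad = not freeze_reasoning
    # Only parameters that still require gradients are handed to the optimizer.
    return [p for p in model.parameters() if p.requires_grad]

model = ToyModel()
trainable = configure_freezing(model, freeze_representation=True)   # tune only the RMs
optimizer = torch.optim.AdamW(trainable, lr=1e-5)
print(sum(p.numel() for p in trainable), "trainable parameters")
```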
## 5.4 Reasoning Skills Analysis Qualitative analyses are conducted to explore how the pre-trained skills are composed to solve different reasoning tasks, and how the skills changed at different reasoning depths. Therefore, we calculate the skill routing weights at every reasoning step (up to 3) for three tasks (i.e., {Commonsense QA, aNLI, Hotpot QA}. The case study provides examples and corresponding (top 2) activated skills at each step. As shown in Fig. 3, the activated skills are varied for different tasks, and are dynamically composed to form a series of reasoning steps. For commonsense reasoning, it emphasizes {*fact, QA*}. For NLI task, it emphasize {*NER, NLI*}. For multi-hop QA task, it executes the QA module for multiple steps. The statistical analysis of averaged routing scores on the whole evaluation set also demonstrate the same trend. These observations show improved interpretability of decision-making and give evidence to the hypothesis that the compositional cognitive process of humans can be transferred to AI model. ## 6 Related Works Multi-step Reasoning. Multi-step reasoning is a characteristic of human thinking. Multi-hop reasoning (Yang et al., 2018b; Yu et al., 2021) asks the system to logically switch attention to different contexts (Zhong et al., 2022b) or make a multistep deduction for a new conclusion (Dalvi et al., 2019; Zhong et al., 2021). Recently, *chain-ofthought* prompting (Wei et al., 2022) provides the model with manual prompts about the intermediate reasoning steps. Creswell and Shanahan (2022) use LMs to iteratively select evidence and generate inferences. However, they always require discrete manual-written reasoning traces. Dohan et al. (2022) is a position paper raising interest in modeling these cascaded inference processes of LMs with a probabilistic program. LM Modularity. Since human brains have various functional areas, it is inspiring to explore the modularity of LMs. Mixture-of-Experts (MoE) (Shazeer et al., 2017; Lepikhin et al., 2020b) use experts in FFN layers for sparse learning. However, their major motivation is to increase the model capacity while keeping efficiency, without emphasis on the speciality of expert. Recent works begin to explore domain-specific experts (Gururangan et al., 2021) and modality-specific experts (Wang et al., 2021). SkillNet proposes skill-specific experts (Zhang et al., 2022). However, the activated skills need to be manually specified, and do not explicitly model the cascaded reasoning process and disentangling of perception and cognition. Considering these directions in the whole picture, this paper targets to explore the modeling of modular and the compositional multi-step reasoning process of AI models in an end-to-end manner. ## 7 Conclusion This paper stimulates the compositional reasoning process of humans in decision-making, and makes the following hypotheses: (1) the intuitive perception system (System 1) and cognitive reasoning system (System 2) can be decoupled and (2) the complex decision-making can be disentangled into multi-step execution of fundamental reasoning skills. Correspondingly, we propose ReasonFormer, a compositional general-purpose reasoning framework. ReasonFormer decouples the representation module and reasoning modules, which are pre-trained to expert in fundamental reasoning skills. The reasoning modules are dynamically composed in parallel and cascaded manner to form a whole reasoning process. ReasonFormer is endto-end and unified in solving multiple tasks with one model. 
Extensive experiments on 11 tasks reveal the compositional reasoning ability of ReasonFormer and disentangling of representation and reasoning modules. ## Limitations As mentioned in Sec. 2, the current selection of fundamental reasoning skills for language models is limited by the availability of well-defined tasks and clear definitions of those tasks, as well as the availability of sufficient training data. As a result, some skills may overlap or may not be fundamental enough. For example, simple QA skill may overlap with NER skill to some extent. In the future, it would be worthwhile to explore self-supervised training tasks that can inject more fundamental abilities into language models. Additionally, the selection and combination of fundamental reasoning skills can be further explored. For example, the inclusion of numerical reasoning ability to solve mathematical problems. Additionally, methods for skill-centric pre-training corpus construction can also be explored to improve the effectiveness of these skills. ## Ethics Statement The present study was conducted in accordance with ethical principles. This study involved the analysis using publicly available data and knowledge sources (e.g., Wikipedia). Thus, this work did not involve any human participants and potential risks regarding credentials or privacy. Therefore, no ethical clearance was required and there were no potential risks associated with the conduct of this research. ## Acknowledgments Jian Yin is the corresponding author. Wanjun Zhong and Jian Yin are supported by the National Natural Science Foundation of China (U1911203, U2001211, U22B2060),Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key-Area Research and Development Program of Guangdong Province (2020B0101100001) ## References Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. *arXiv preprint arXiv:1908.05739*. Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 2020. Piqa: Reasoning about physical commonsense in natural language. In *Proceedings of the* AAAI conference on artificial intelligence, volume 34, pages 7432–7439. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical* Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Jiawei Chen, Qing Liu, Hongyu Lin, Xianpei Han, and Le Sun. 2022. Few-shot named entity recognition with self-describing networks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5711–5722, Dublin, Ireland. Association for Computational Linguistics. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. *ArXiv*, abs/1803.05457. Antonia Creswell and Murray Shanahan. 2022. Faithful reasoning using large language models. arXiv preprint arXiv:2208.14271. 
Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang, and Ming Zhou. 2020. Mutual: A dataset for multi-turn dialogue reasoning. *CoRR*, abs/2004.04494. Bhavana Dalvi, Niket Tandon, Antoine Bosselut, Wen tau Yih, and Peter Clark. 2019. Everything happens for a reason: Discovering the purpose of actions in procedural text. *ArXiv*, abs/1909.04745. Kahneman Daniel. 2017. Thinking, fast and slow. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A Saurous, Jascha Sohl-dickstein, et al. 2022. Language model cascades. *arXiv preprint arXiv:2207.10342*. Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A Smith, and Luke Zettlemoyer. 2021. Demix layers: Disentangling domains for modular language modeling. *arXiv preprint arXiv:2108.05036*. Chadi Helwe, Chloé Clavel, and Fabian M Suchanek. 2021. Reasoning with transformer-based models: Deep learning, but shallow reasoning. In *3rd Conference on Automated Knowledge Base Construction*. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pages 2790–2799. PMLR. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. *arXiv* preprint arXiv:1704.04683. Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020a. Gshard: Scaling giant models with conditional computation and automatic sharding. *Learning*. Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020b. Gshard: Scaling giant models with conditional computation and automatic sharding. *Learning*. Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. PAQ: 65 million probably-asked questions and what you can do with them. *Transactions of the Association for Computational Linguistics*, 9:1098–1115. Fedor Moiseev, Zhe Dong, Enrique Alfonseca, and Martin Jaggi. 2022. SKILL: Structured knowledge infusion for large language models. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1581–1588, Seattle, United States. Association for Computational Linguistics. Xinyu Pi, Wanjun Zhong, Yan Gao, Nan Duan, and Jian-Guang Lou. 2022. Logigan: Learning logical reasoning via adversarial pre-training. arXiv preprint arXiv:2205.08794. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207. 
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. *arXiv* preprint arXiv:1701.06538. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge dataset and models for dialogue-based reading comprehension. *Transactions of the Association for Computational Linguistics*. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. *arXiv preprint arXiv:1811.00937*. Wenhui Wang, Hangbo Bao, Li Dong, and Furu Wei. 2021. Vlmo: Unified vision-language pre-training with mixture-of-modality-experts. *arXiv preprint* arXiv:2111.02358. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. *Transactions of* the Association for Computational Linguistics, 6:287– 302. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018a. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. *CoRR*, abs/1809.09600. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018b. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Jianxing Yu, Qinliang Su, Xiaojun Quan, and Jian Yin. 2021. Multi-hop reasoning question generation and its application. IEEE Transactions on Knowledge and Data Engineering. Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng. 2020. Reclor: A reading comprehension dataset requiring logical reasoning. In *International Conference on Learning Representations (ICLR)*. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830. Fan Zhang, Duyu Tang, Yong Dai, Cong Zhou, Shuangzhi Wu, and Shuming Shi. 2022. Skillnet-nlu: A sparsely activated model for general-purpose natural language understanding. *arXiv e-prints*, pages arXiv–2203. Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, and Nan Duan. 2022a. 
Proqa: Structural prompt-based pre-training for unified question answering. arXiv preprint arXiv:2205.04040. Wanjun Zhong, Junjie Huang, Qian Liu, Ming Zhou, Jiahai Wang, Jian Yin, and Nan Duan. 2022b. Reasoning over hybrid chain for table-and-text open domain qa. *arXiv preprint arXiv:2201.05880*. Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Jiahai Wang, Jian Yin, Ming Zhou, and Nan Duan. 2021. Ar-lsat: Investigating analytical reasoning of text. *arXiv preprint arXiv:2104.06598*. ## A Example Of Pre-Training Tasks For the basic question answering skill, QA-centric pre-training uses a generation-filtering pipeline to build semi-supervised large-scale corpus (Lewis et al., 2021; Zhong et al., 2022a): (1) use annotated QA data to train a passage-to-question-answer generator (2) taking the wikipedia passages as inputs, and generates corresponding pseudo questions and answers (3) filtering passage, question, answer pairs with a QA model. For logic skill, we use the automatically constructed data from LogicGAN (Pi et al., 2022).It uses logical indicators (e.g., Therefore, as a result) to automatically identify logical inference phenomenon presented via natural language, and mask corresponding causes/results of events, and ask the pre-trained model to recover them to learn logical reasoning ability. For the natural language inference, we the public annotated corpus SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018). Given a sentence as premise, the model is expected to predict whether the premise sentence entails the hypothesis sentence. For the named entity recognition skill, we use weakly-annotated data (Chen et al., 2022) obtained from Wikipedia and Wikidata. The mentions are the text with anchorlink and the types are obtained from Wikidata "instance of" or "subclass of" properties. We design three pretrain tasks similar to Chen et al. (2022): 1) given the sentence, identify all mentions in the sentence 2) given the sentence and interested types, output all mentions with these types in the sentence 3) given the sentence and mentions, predict all types of the mentions. For the fact skill, we use fact triples from Wikidata, and design a task that predict the tail entity given the head entity and relation as Moiseev et al. (2022). A summary of the examples for each tasks is presented in Fig 4. ## B Implementation Details Pretraining Details We use "google/t5-v1_1base" from HuggingFace (Wolf et al., 2020) implementation as base model for all our experiments. We use a learning rate of 5e-5 and train all models with 5 epochs. The warmup ratio is set to 0.1. The total batch size is set to 72 for shared model and 64 for private model. The down projection hidden size of adapter is set to 256. We use 8 V100 GPUs for model training. Downstream Adaptation Details For all the full data experiments, we use a learning rate of 1e-4 and training epoch of 10 with a batch size of 48 for our models. The model is validated at the end of each epoch. For all the few-shot experiments, we use a learning rate of 1e-5 and training epochs of 200 with a batch size of 8. The model is validated per 200 steps. 
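As a convenience for reproduction, the hyperparameters listed above map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows. This is an illustrative configuration sketch under the assumption of a standard `transformers` Trainer setup, not the authors' training script; the per-device batch sizes and gradient accumulation values are placeholders that must be combined with the number of GPUs to reach the reported total batch sizes.

```python
from transformers import Seq2SeqTrainingArguments

# Reasoning-centric pre-training (Appendix B): lr 5e-5, 5 epochs, warmup ratio 0.1.
pretrain_args = Seq2SeqTrainingArguments(
    output_dir="reasonformer-pretrain",
    learning_rate=5e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    per_device_train_batch_size=8,       # placeholder; 8 GPUs -> total batch size ~64
    gradient_accumulation_steps=1,
    predict_with_generate=True,
)

# Full-data downstream adaptation: lr 1e-4, 10 epochs, total batch size 48.
finetune_args = Seq2SeqTrainingArguments(
    output_dir="reasonformer-finetune",
    learning_rate=1e-4,
    num_train_epochs=10,
    per_device_train_batch_size=6,       # placeholder; 8 GPUs -> total batch size 48
    predict_with_generate=True,
)
```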
| Skill | Corpus | Example | Prompt | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|--------------------------------------------| | QA | Wikipedia | Context: …appeared to Saint Bernadette Soubirous in 1858. At the end… | | | | Question:To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France? Answer: Saint Bernadette Soubirous Input: {context} [SEP] give me the answer of {question} Output: {answer} | | | | | | Logic | BookCorpus Context: All men are mortal, and Socrates is a man. Therefore, [MASK], …. | Input: {context} [SEP] give me the missing | | | | MASK: Socrates is mortal | statement Output: {mask} | | | | | NLI | SNLI, MNLI | Premise:Conceptually cream skimming has two basic dimensions - product and geography. Hypothesis:Product and geography are what make cream skimming work. Label:Neutral Input: {premise} [SEP] {hypothesis} give me the relation between the first and second sentences Output: {label} Context: …whose family had originally migrated from the state of Mysore. Input: {context} [SEP] give me the mentions Type_str:city with types {type_str} Mention_str: Mysore is city. Output: {mention_str} Context:… nationalist, communist and anarchist who was among the Input: {context} [SEP] give me all mentions founding members of the Communist Party of India (Tashkent group). Output: {mention_str} Mention_str:India, nationalist, Tashkent | | | | Wikipedia, | | | | | | NER | Wikidata | Context: …whose family had originally migrated from the state of Mysore. | Input: {context} [SEP] give me the types of | | | Mention_str: Mysore | mentions {mention_str} | | | | | Type_str: Mysore is city. | Output: {type_str} | | | | | Fact | Wikidata | Head: | Knut Wijkmark | Input: {head} {relation} [SEP] give me the | | Relation: child | missing entity: | | | | | Tail: Nils Wijkmark | Output: {tail} | | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Last section after the conclusion section. ✓ A2. Did you discuss any potential risks of your work? I don't think of any risk. It is shown in the ethical statement section. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** The Applied Datasets Are Mentioned In Section 4. ✓ B1. Did you cite the creators of artifacts you used? Section 4. B2. 
Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. These datasets are suitable for academic research purpose. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. These datasets are suitable for academic research purpose. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.1 ✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We cover a wide scope of public evaluation datasets. The complete details can't be introduced in the main pages due to the page limits. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix B and Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? The evaluation metrics are introduced in Section 4.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. 
Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
elaraby-etal-2023-towards
Towards Argument-Aware Abstractive Summarization of Long Legal Opinions with Summary Reranking
https://aclanthology.org/2023.findings-acl.481
We propose a simple approach for the abstractive summarization of long legal opinions that takes into account the argument structure of the document. Legal opinions often contain complex and nuanced argumentation, making it challenging to generate a concise summary that accurately captures the main points of the legal opinion. Our approach involves using argument role information to generate multiple candidate summaries, then reranking these candidates based on alignment with the document's argument structure. We demonstrate the effectiveness of our approach on a dataset of long legal opinions and show that it outperforms several strong baselines.
# Towards Argument-Aware Abstractive Summarization Of Long Legal Opinions With Summary Reranking Mohamed Elaraby, Yang Zhong, Diane Litman University of Pittsburgh Pittsburgh, PA, USA {mse30,yaz118,dlitman}@pitt.edu ## Abstract We propose a simple approach for the abstractive summarization of long legal opinions that considers the argument structure of the document. Legal opinions often contain complex and nuanced argumentation, making it challenging to generate a concise summary that accurately captures the main points of the legal opinion. Our approach involves using argument role information to generate multiple candidate summaries, then reranking these candidates based on alignment with the document's argument structure. We demonstrate the effectiveness of our approach on a dataset of long legal opinions and show that it outperforms several strong baselines. ## 1 Introduction Legal opinions contain implicit argument structure spreading across long texts. Existing summarization models often struggle to accurately capture the main arguments of such documents, leading to summaries that are suboptimal (Xu et al., 2021; Elaraby and Litman, 2022). We propose an approach for the abstractive summarization of long legal opinions that leverages argument structure. Legal opinions often follow a specific argumentative structure, with the main points of the argument being presented clearly and logically (Xu et al., 2021; Habernal et al., 2022; Xu and Ashley, 2022). Prior work has shown that by considering this structure during summarization, it is possible to generate extractive and abstractive summaries that more accurately reflect the original argumentation in the document (Elaraby and Litman, 2022; Zhong and Litman, 2022; Agarwal et al., 2022). In this paper, we present a framework for abstractive summarization of long legal opinions that extends this literature by *leveraging argument structure* during summary reranking to both generate and score candidates. Our method involves utilizing the Longformer-Encoder-Decoder (LED) (Beltagy et al., 2020) model to generate multiple candidate summaries by training it on various input formats. This allows for the consideration of different argument representations in the summary generation process. Additionally, we use beam search to further diversify the output. Finally, we rank the candidate summaries by measuring their lexical similarity to the input's main arguments. We evaluate our approach on a dataset of long legal opinions obtained from the Canadian Legal Information Institute (CanLII)1and demonstrate that our method outperforms competitive baselines. Our results with ROUGE and BERTScore (Lin, 2004; Zhang et al., 2019) suggest that considering the argumentative coverage of the original opinions can lead to a more effective selection of summaries. Our contributions are: (1) We propose a simple reranking approach that takes into account the argumentative structure of legal opinions to improve over the standard finetuning of generation models. (2) We demonstrate through empirical results and ablation analysis reasons for the effectiveness of our approach for summarizing long legal opinions. Our code can be accessed through this repository: https://github.com/ EngSalem/legalSummReranking ## 2 Related Work Long Legal Document Summarization Legal documents have a distinct format, with a hierarchical structure and specialized vocabulary that differs from that of other domains (Kanapala et al., 2019). 
They also tend to be longer (Kan et al., 2021; Huang et al., 2020; Moro and Ragazzi, 2022), which has led to the use of transformer models with sparse attention mechanisms (Michalopoulos et al., 2022; Guo et al., 2022; Beltagy et al., 2020) to reduce the complexity of encoding lengthy text. Legal *opinions*, in particular, have a complex argumentative structure that spans across the text, making it crucial to address in summaries (Xu et al., 2021; Xu and Ashley, 2022; Elaraby and Litman, 2022). *We use prior legal opinion summarization methods as evaluation baselines.*

1 Data was obtained through an agreement with CanLII (https://www.canlii.org/en/).

Summarization and Argument Mining Using a dialogue summarization dataset with argument information, Fabbri et al. (2021b) converted an argument graph into a textual format to train a summarizer. For legal documents, Agarwal et al. (2022) used argument role labeling to improve extractive summarization using multitask learning. Elaraby and Litman (2022) blended argument role labeling and abstractive summarization using special markers, generating summaries that better aligned with legal argumentation. We incorporate the models of Elaraby and Litman (2022) into summary reranking and further improve performance.

Second Stage Reranking Generating multiple outputs and reranking them according to certain criteria has been successfully applied in NLP downstream applications, including abstractive summarization. Some methods use different input formats to generate multiple outputs. Oved and Levy (2021) perturbed input multi-opinion reviews to generate multiple candidate summaries, then ranked them using coherency. Ravaut et al. (2022) used a multitask mixture of experts to directly model the probability that a summary candidate is the best one. Liu and Liu (2021) ranked candidate summaries generated from 16 diverse beam searches to improve news summarization in terms of ROUGE score. Liu et al. (2022) presented a novel technique for summary reranking that involves a non-deterministic training objective. Their approach enables the model to directly rank the summaries that are probable from beam-search decoding according to their quality. We rely on distinct argument-aware input formats in addition to diverse beam decoding to develop our argument-aware reranking method.

## 3 Annotated Dataset

We employ the annotated subset (Xu et al., 2021; Elaraby and Litman, 2022) of the **CanLII** dataset (Zhong and Litman, 2022) used in prior summarization research of legal opinions. This subset contains 1049 opinion/summary pairs annotated with sentence-level argument role labels for both input documents and reference summaries. The input opinions have mean/max lengths of 4375/62786 words, motivating us to use models for long text.

Recent work has proposed argument role taxonomies aligned with structures commonly found in legal text (Habernal et al., 2022; Xu et al., 2021). The CanLII data was annotated for argument roles using the **IRC scheme** for legal opinions (Xu et al., 2021), which divides argument roles into **Issues** (legal questions which a court addressed in the document), **Reasons** (pieces of text which indicate why the court reached the specific conclusions), and **Conclusions** (court's decisions for the corresponding issues). We use these 3 fine-grained IRC labels, as well as collapse them into a single argumentative label, to incorporate argument structure into our models.
An IRC-annotated opinion and summary pair can be found in Appendix A.

## 4 Model And Methods

Our proposed method follows the generate-and-rank paradigm and can be split into two parts. First, we explore techniques to utilize an argumentation-augmented LED model to generate multiple candidate summaries S. Second, we propose a function µ that scores a summary S ∈ S based on its argumentative alignment with the input document. The best candidate S* is selected such that

$$S^{*}=\arg\max_{S_{i}\in\mathcal{S}}\{\mu(S_{1}),\mu(S_{2}),\ldots,\mu(S_{n})\}$$

Figure 1 shows an overview of our approach.

## 4.1 Generating Candidates: Argument-Aware Training + Diverse Decoding

Diverse decoding techniques such as beam search can help diversify the summary output; however, this diversity is limited to the underlying language model used in the decoder and is completely isolated from the input format. Alternatively, we propose to complement the beam search via finetuning LED on three *different input formats*. We refer to this model as M_arg-augmented, whose model parameters θ*_arg-augmented are selected such that

$$\theta^{*}_{arg\text{-}augmented}=\arg\max_{\theta}P(S|\mathbb{X})$$

During finetuning, S is the reference summary, θ represents the trainable model parameters, and X is a set of inputs X = {X_raw, X_arg_binary, X_arg_finegrained}, where X_raw is the input without the argument markers, X_arg_binary is the input document with binary argument markers added to highlight argument role sentences, and X_arg_finegrained is the input document with the fine-grained argumentative markers added to also delineate the roles (i.e., Issue, Reason, Conclusion). These three representations of the input share the same reference summary, meaning that we augmented the training data three times. Table 1 shows an example of the distinct representations of our new training data.

| Input format | Example |
|---|---|
| X_raw | S1 \| S2 \| ... \| Issue Sentence \| Reason Sentence \| ... |
| X_arg_binary | S1 \| S2 \| ... \| <IRC> Issue Sentence </IRC> \| <IRC> Reason Sentence </IRC> \| ... |
| X_arg_finegrained | S1 \| S2 \| ... \| <Issue> Issue Sentence </Issue> \| <Reason> Reason Sentence </Reason> \| ... |

At inference time, we use the predicted markers obtained by adopting the argument mining code2 from Elaraby and Litman (2022), instead of the manually labeled ones, to construct X̂_arg_binary and X̂_arg_finegrained of X̂, where X̂ = {X_raw, X̂_arg_binary, X̂_arg_finegrained}. Our motivation is that different formats of the input would yield different generated summaries that take into account different representations of the argumentative structure in the input.

## 4.2 Scoring And Reranking Summaries

We propose a scoring method to rank the candidate summaries based on their capability to capture the main argument points in the input. First, we employ a sentence-level argument role classifier to extract sentences with argument roles X̂_args. The predicted sentences are used to construct an extractive summary. Then, we measure the lexical overlap between a generated candidate summary Ŝ and the constructed extractive one using the ROUGE-1 F1-score3, to compute a score for each candidate summary that represents its alignment with the legal opinion argument content. Our scoring function µ can be written as µ = ROUGE1(X̂_args, Ŝ).
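To make the two stages concrete, the following is a minimal Python sketch of the generate-and-rank procedure, assuming a HuggingFace LED checkpoint and the `rouge_score` package. It is an illustration rather than the authors' released implementation: in practice the model would be the finetuned arg-augmented LED, and helper names such as `add_markers`, `generate_candidates`, `rerank`, and the "Non-IRC" label string are assumptions introduced here for clarity.

```python
# Sketch of Section 4: generate candidates from three input formats, then
# rerank by ROUGE-1 F1 against the predicted argument sentences.
from rouge_score import rouge_scorer
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384")
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)


def add_markers(sentences, roles, fine_grained=False):
    """Wrap predicted argument sentences with <IRC>...</IRC> (binary) or
    <Issue>/<Reason>/<Conclusion> markers (fine-grained), as in Table 1."""
    out = []
    for sent, role in zip(sentences, roles):
        if role == "Non-IRC":
            out.append(sent)
        elif fine_grained:
            out.append(f"<{role}> {sent} </{role}>")
        else:
            out.append(f"<IRC> {sent} </IRC>")
    return " ".join(out)


def generate_candidates(text, num_beams=4, num_return_sequences=4):
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=16384)
    ids = model.generate(
        **enc,
        num_beams=num_beams,
        num_return_sequences=num_return_sequences,
        max_length=512,
    )
    return tokenizer.batch_decode(ids, skip_special_tokens=True)


def rerank(sentences, roles):
    # The three input formats of Table 1: raw, binary markers, fine-grained markers.
    inputs = [
        " ".join(sentences),
        add_markers(sentences, roles, fine_grained=False),
        add_markers(sentences, roles, fine_grained=True),
    ]
    candidates = [c for x in inputs for c in generate_candidates(x)]
    # Extractive "argument summary" X_args: the predicted IRC sentences themselves.
    x_args = " ".join(s for s, r in zip(sentences, roles) if r != "Non-IRC")
    # mu(S) = ROUGE-1 F1 between the candidate and the argument sentences.
    return max(candidates, key=lambda c: scorer.score(x_args, c)["rouge1"].fmeasure)
```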
## 5 Experiments

All models use the *LED-base* checkpoint as a base model. *LED-base* encodes up to 16k tokens, which fits our long inputs. All experiments use 5-fold cross-validation, with the 4-fold documents split into 90% training and 10% validation; the validation split is used to select the best checkpoint.4

We compare all rank-based methods (baseline and proposed) to **abstractive baselines** previously explored in legal opinion summarization: *finetune LED-base* (which refers to vanilla model finetuning using our dataset), and *arg-LED-base* (Elaraby and Litman, 2022) (which finetunes LED on the dataset blended with argument markers that mark the start and the end of each argument role in the input).5

We also compare our proposed rank-based approach from Section 4 with **ranking baselines** that use different input formats or diverse decoding alone. Specifically, we have employed ranking on top of the output of the three LED models outlined in Elaraby and Litman (2022), which are trained on distinct argument-aware input formats (we refer to this model as "baseline ranking"). Additionally, for diverse decoding, we have employed different beam widths within the range of 1 to 5 on top of the model trained on the input with fine-grained markers (arg-LED-fine-grained), which achieved the best abstractive baseline ROUGE results. All models utilizing argument markers employed both *oracle* and *predicted* conditions during inference time, using human annotations or argument mining respectively, to produce the markers.

4 Full experimental details can be found in Appendix B.

| Experiments | ID | Model | R-1 | R-2 | R-L | BS | src. marker |
|---|---|---|---|---|---|---|---|
| Abstractive baselines | 1 | finetune LED-base | 47.33 | 22.80 | 44.12 | 86.43 | - |
| | 2 | arg-LED-base (binary markers) | 48.85 | 24.74 | 45.82 | 86.79 | predicted |
| | 3 | arg-LED-base (fine-grained markers) | 49.02 | 24.92 | 45.92 | 86.86 | predicted |
| | 4 | arg-LED-base (binary markers) | 50.64 | 26.62 | 47.48 | 86.90 | oracle |
| | 5 | arg-LED-base (fine-grained markers) | 51.07 | 27.06 | 48.01 | 86.92 | oracle |
| Ranking baselines | 6 | baseline ranking | 49.79 | 25.13 | 46.63 | 86.87 | predicted |
| | 7 | arg-LED-base (fgrain) + diverse beams | 50.92 | 26.06 | 47.74 | 86.87 | predicted |
| | 8 | baseline ranking | 51.85 | 27.31 | 48.61 | 87.26 | oracle |
| | 9 | arg-LED-base (fgrain) + diverse beams | 52.74 | **27.93** | 49.50 | **87.46** | oracle |
| Our framework | 10 | arg-augmented-LED | 50.52 | 24.82 | 47.19 | 86.85 | predicted |
| | 11* | **arg-augmented-LED + diverse beams** | *54.13* | *27.02* | *50.14* | *87.38* | predicted |
| | 12 | arg-augmented-LED | 51.96 | 25.69 | 48.56 | 87.03 | oracle |
| | 13 | **arg-augmented-LED + diverse beams** | **54.30** | 27.00 | **50.80** | 87.35 | oracle |

Table 2: Summarization ROUGE (R1, R2, RL) and BertScore (BS) cross-validation results. Best results in each column are **bolded** when obtained with the oracle markers and *italicized* with predicted markers. For the full framework (rows 11/13), * indicates results are statistically significant in all scores over the best argument-aware baseline (row 3).

## 6 Results And Discussion

Table 2 shows our results in terms of *ROUGE score* (Lin, 2004) and *BERTScore* (Zhang et al., 2019), computed using *SummEval* (Fabbri et al., 2021a).7

Utility of any Ranking The ranking-based methods (rows 6-13) consistently outperform the abstractive baselines8 (rows 1-5) in both predicted and oracle conditions. Also, abstractive baseline results (rows 1-5) align with those of Elaraby and Litman (2022), where leveraging fine-grained markers in the input yields the highest scores.
Utility of Proposed Ranking Framework and its Components In the predicted case, our proposed arg-augmented-LED (row 10) improves over the abstractive baselines (rows 1-3) by 1.5 to 3.19 points in ROUGE-1 and 1.27 to 3.07 points in ROUGE-L, while maintaining a limited drop of 0.1 and 0.01 in terms of ROUGE-2 and BS respectively. Similarly, compared to our ranking baselines, our proposed model improves the ROUGE-1 and ROUGE-L scores obtained by baseline ranking by 0.56 to 0.73 points, while dropping in ROUGE-2 and BS by 0.31 and 0.02 points respectively. This indicates that incorporating argument information into the source inputs can lead to the generation of effective summary candidates. Our best predicted results were achieved by combining our proposed model with diverse beam decoding (row 11), which combines the strengths of various input formats and multiple beam decoding, resulting in statistically significant improvements over the previously proposed argument-aware abstractive baseline (row 3).

Inference with Predicted versus Oracle Argument Roles For the same model, predicted markers can impact the summarization results. In prior baselines (rows 3 and 5), we observe a drop in ROUGE score of 2.05 to 2.14 points, and of 0.06 in terms of BS, when switching from oracle to predicted markers. This observation also holds between rows 6 and 8 and between rows 10 and 12. With our proposed arg-augmented-LED and diverse beam decoding, this performance gap is mitigated and reduced to a range of -0.02 to 0.66 in ROUGE and -0.03 in BS (rows 11 and 13). We believe this is due to the combination of distinct argumentative formats and diverse decoding, allowing more diverse candidates to be considered in the ranking and enhancing robustness to noisy predictions during inference.

## 7 Conclusion And Future Work

We proposed a framework for improving the summarization of long legal opinions by combining distinct argument formats of the input with diverse decoding to generate candidate summaries. Our framework selects the summary with the highest lexical overlap with the input's argumentative content. Our results indicate that ranking alone can improve over abstractive baselines. Moreover, combining ranking with our proposed candidate generation method improves results while maintaining robustness to noisy predictions. In future research, we plan to incorporate human expert evaluations to compare automatic metrics with human ratings. We also aim to explore the impact of using noisier argument roles during training on a larger corpus by using the predicted markers obtained from our smaller dataset to experiment with the remaining unannotated portion of the CanLII dataset.

## Limitations

The primary constraints encountered in our research result from our dependence on a single dataset for experimentation and from computing resource limitations. Despite these, we postulate that our ranking-based methodology can be utilized for any summarization task that necessitates robust correspondence with a specific structure within the input. To validate this hypothesis, further experimentation is required to assess the generalizability of our technique to alternative datasets and domains. In addition, our limited computational resources prevented us from experimenting with other long-document encoder-decoder models such as BigBird and LongT5 (Michalopoulos et al., 2022; Guo et al., 2022), as well as from using higher beam widths during decoding.
Furthermore, the cost and complexity of procuring expert evaluators within the legal domain resulted in using automatic metrics alone. ## Ethical Considerations The usage of the generated summary results from legal opinions remains important. Abstractive summarization models have been found to contain hallucinated artifacts that do not come from the source texts (Kryscinski et al., 2019; Zhao et al., 2020; Kryscinski et al., 2020). While our model incorporated the argument structure of the source article, the generation results may still carry certain levels of non-factual information and need to be utilized with extra care. Similarly, as mentioned in the prior line of works using CanLII (Elaraby and Litman, 2022; Zhong and Litman, 2022), CanLII has taken measures to limit the disclosure of defendants' identities (such as blocking search indexing). Abstractive approaches may cause user information leakage. Thus using the dataset needs to be cautious to avoid impacting those efforts. ## Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. 2040490 and by Amazon. We would like to thank the members of both the Pitt AI Fairness and Law Project and the Pitt PETAL group, as well as the anonymous reviewers, for valuable comments in improving this work. ## References Abhishek Agarwal, Shanshan Xu, and Matthias Grabmair. 2022. Extractive summarization of legal decisions using multi-task learning and maximal marginal relevance. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 1857– 1872, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. *arXiv* preprint arXiv:2004.05150. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Jay DeYoung, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl, and Lucy Wang. 2021. Ms^2: Multidocument summarization of medical studies. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7494– 7513. Yue Dong, Andrei Mircea, and Jackie Chi Kit Cheung. 2021. Discourse-aware unsupervised summarization for long scientific documents. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1089–1102. Mohamed Elaraby and Diane Litman. 2022. ArgLegalSumm: Improving abstractive summarization of legal documents with argument mining. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6187–6194, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´ Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021a. SummEval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391–409. Alexander Richard Fabbri, Faiaz Rahman, Imad Rizvi, Borui Wang, Haoran Li, Yashar Mehdad, and Dragomir Radev. 2021b. Convosumm: Conversation summarization benchmark and improved abstractive summarization with argument mining. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6866–6880. Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2022. LongT5: Efficient text-to-text transformer for long sequences. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 724– 736, Seattle, United States. Association for Computational Linguistics. Ivan Habernal, Daniel Faber, Nicola Recchia, Sebastian Bretthauer, Iryna Gurevych, Christoph Burchard, et al. 2022. Mining legal arguments in court decisions. *arXiv preprint arXiv:2208.06178*. Yuxin Huang, Zhengtao Yu, Junjun Guo, Zhiqiang Yu, and Yantuan Xian. 2020. Legal public opinion news abstractive summarization by incorporating topic information. *International Journal of Machine Learning and Cybernetics*, 11(9):2039–2050. Tai-Jung Kan, Chia-Hui Chang, and Hsiu-Min Chuang. 2021. Home appliance review research via adversarial reptile. In *Proceedings of the 33rd Conference* on Computational Linguistics and Speech Processing (ROCLING 2021), pages 183–191, Taoyuan, Taiwan. The Association for Computational Linguistics and Chinese Language Processing (ACLCLP). Ambedkar Kanapala, Sukomal Pal, and Rajendra Pamula. 2019. Text summarization from legal documents: a survey. *Artificial Intelligence Review*, 51(3):371–402. Muhammad Khalifa, Miguel Ballesteros, and Kathleen Mckeown. 2021. A bag of tricks for dialogue summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8014–8022. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540–551, Hong Kong, China. Association for Computational Linguistics. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–9346, Online. Association for Computational Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Yixin Liu and Pengfei Liu. 2021. Simcls: A simple framework for contrastive learning of abstractive summarization. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 1065–1072. Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. Brio: Bringing order to abstractive summarization. 
In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890–2903. George Michalopoulos, Michal Malyska, Nicola Sahar, Alexander Wong, and Helen Chen. 2022. ICDBigBird: A contextual embedding model for ICD code classification. In *Proceedings of the 21st Workshop* on Biomedical Language Processing, pages 330–336, Dublin, Ireland. Association for Computational Linguistics. Gianluca Moro and Luca Ragazzi. 2022. Semantic self-segmentation for abstractive summarization of long legal documents in low-resource regimes. In Proceedings of the Thirty-Six AAAI Conference on Artificial Intelligence, Virtual, volume 22. Nadav Oved and Ran Levy. 2021. Pass: Perturb-andselect summarizer for product reviews. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 351–365. Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022. Summareranker: A multi-task mixture-of-experts reranking framework for abstractive summarization. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 4504–4524. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Huihui Xu and Kevin D. Ashley. 2022. Multigranularity argument mining in legal texts. In *International Conference on Legal Knowledge and Information Systems*. Huihui Xu, Jaromir Savelka, and Kevin D Ashley. 2021. Toward summarizing case decisions via extracting argument issues, reasons, and conclusions. In *Proceedings of the eighteenth international conference* on artificial intelligence and law, pages 250–254. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. In *International Conference on Learning Representations*. Zheng Zhao, Shay B. Cohen, and Bonnie Webber. 2020. Reducing quantity hallucinations in abstractive summarization. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2237– 2249, Online. Association for Computational Linguistics. Hao Zheng and Mirella Lapata. 2019. Sentence centrality revisited for unsupervised summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6236– 6247. Lucia Zheng, Neel Guha, Brandon R. Anderson, Peter Henderson, and Daniel E. Ho. 2021. When does pretraining help? assessing self-supervised learning for law and the casehold dataset. In *Proceedings* of the 18th International Conference on Artificial Intelligence and Law. Association for Computing Machinery. Yang Zhong and Diane Litman. 2022. Computing and exploiting document structure to improve unsupervised extractive summarization of legal case decisions. In Proceedings of the Natural Legal Language Processing Workshop 2022, pages 322–337, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. 
## A Argument Role Labeling In CanLII Cases

The concept of argument roles, specifically issues, reasons, and conclusions, is of paramount importance in legal case summarization. An illustration, presented in Figure 2, demonstrates the annotation of these roles in the input text of a legal opinion and its associated summary. This example shows that the issues, reasons, and conclusions can effectively encapsulate the critical points of discussion within the court, the ultimate decision reached, and the rationale for said decision.

## B Experimental Setup And Hyper-Parameters

LED experiments For all of our LED-base experiments, we use the LED-base implementation by the *HuggingFace Library* (Wolf et al., 2020). We finetune the LED-base model for 10 epochs. We select our best model based on the ROUGE-2 score on the validation set. We rely on the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 2e-5 to update the LED-base weights. We also employ early stopping with a patience of 3 epochs to avoid overfitting during training.

Argument Role Classification Our argument role classifier leverages a finetuned *legalBERT* (Zheng et al., 2021) model due to its superiority to other contextualized embeddings-based models like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), as shown in Elaraby and Litman (2022); Xu et al. (2021). We utilized the same training setting and hyperparameters described in Elaraby and Litman (2022) to train the *5-fold* cross-validation sentence-level argument classifiers used in our experiments.9

9 Classifier code is available at https://github.com/EngSalem/arglegalsumm/tree/master/src/argument_classification

## C Argumentative Markers

In abstractive summarization, special markers can indicate the most important parts of a text that ground the summary (Khalifa et al., 2021; DeYoung et al., 2021). These markers can be added to the text by a human annotator, or they can be generated automatically by a model. These markers can take many forms, such as highlighting certain words or phrases or adding special tags to certain sentences. A summarization model can use them to identify the key parts of the text that should be included in the summary while also considering the overall structure and coherence of the text. This can help to improve the accuracy and effectiveness of the summarization process, especially when the text is long or complex. In this work, we use marker sets proposed by Elaraby and Litman (2022) to distinguish between argumentative and non-argumentative sentences.

Binary markers The binary markers aim to distinguish argumentative and non-argumentative sentences regardless of the type of the argument role (i.e., issues, reasons, or conclusions). In our work, we used the markers <IRC>, </IRC> to highlight the start and end of each argumentative sentence.

Fine-grained markers We also used markers designated to distinguish between each argument role type, namely <Issue>, </Issue>, <Reason>, </Reason>, <Conclusion>, </Conclusion>. Table 3 shows an example of using different argumentative markers to highlight the start and end of a "Reason" sentence.

| Example of using argument markers |
|---|
| The plaintiff should have taken more appropriate measures to avoid the accident. |
| <IRC> The plaintiff should have taken more appropriate measures to avoid the accident. </IRC> |
| <Reason> The plaintiff should have taken more appropriate measures to avoid the accident. </Reason> |
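For reference, the hyper-parameters of the LED experiments in Appendix B map naturally onto a HuggingFace training configuration. The sketch below is a hedged illustration, not the authors' released training script; the dataset objects, the ROUGE metric function, and the output path are placeholders supplied by the caller.

```python
# A sketch of the Appendix B setup: 10 epochs, Adam with initial lr 2e-5,
# early stopping with a patience of 3, best checkpoint chosen by ROUGE-2.
from transformers import (EarlyStoppingCallback, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)


def build_trainer(model, tokenizer, train_dataset, dev_dataset, compute_rouge):
    args = Seq2SeqTrainingArguments(
        output_dir="led-canlii",            # hypothetical output path
        num_train_epochs=10,
        learning_rate=2e-5,                 # Adam optimizer, initial lr 2e-5
        per_device_train_batch_size=1,      # assumption: long inputs, small batch
        evaluation_strategy="epoch",
        save_strategy="epoch",
        load_best_model_at_end=True,
        metric_for_best_model="rouge2",     # select best checkpoint by ROUGE-2
        predict_with_generate=True,
    )
    return Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=dev_dataset,
        tokenizer=tokenizer,
        compute_metrics=compute_rouge,      # must return a dict containing "rouge2"
        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
    )
```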
## D Extractive Baselines In addition to the abstractive baselines, we compare our methods to graph-based unsupervised extractive baselines built on top of *HipoRank* (Dong et al., 2021) and extractive baselines based on *ExtractiveBERT* (Zheng and Lapata, 2019), which were leveraged before on the same dataset (Zhong and Litman, 2022). Table 4 shows our abstractive summarization results compared to the extractive baselines in cross-validation settings. Our ranking-based methods show consistent improvement over both the extractive and the abstractive baselines. ## E Rouge Based Ranking Results Table 5 shows a comparison between the usage of ROUGE-1, ROUGE-2, and ROUGE-L as potential ranking criteria to select the summary that aligns with the predicted argumentative content outlined in the input legal opinion. While there is no substantial differences between results with each ROUGE metric, ROUGE-L seems to have marginally lower scores compared to ROUGE-1, and ROUGE-2. | Experiments | ID | Model | R-1 | R-2 | R-L | BS | src. marker | |-----------------------|--------------------------------------|-------------------|-------|-------|-------|-----------|---------------| | 1 | sentence-level legalBERT | 49.66 | 28.42 | 46.72 | 86.54 | - | | | Extractive baselines | 2 | HipoRank | 41.24 | 17.19 | 38.54 | 81.67 | - | | 3 | HipoRank rewighted | 42.88 | 18.03 | 39.99 | 84.11 | - | | | 4 | Extractive BERT | 43.053 | 17.75 | 39.99 | 84.15 | - | | | 5 | finetune LED-base | 47.33 | 22.80 | 44.12 | 86.43 | - | | | 6 | arg-LED-base (binary markers) | 48.85 | 24.74 | 45.82 | 86.79 | predicted | | | 7 | arg-LED-base (fine-grained markers) | 49.02 | 24.92 | 45.92 | 86.86 | | | | 8 | arg-LED-base (binary markers) | 50.64 | 26.62 | 47.48 | 86.90 | oracle | | | 9 | arg-LED-base ( fine-grained markers) | 51.07 | 27.06 | 48.01 | 86.92 | | | | Abstractive baselines | 10 | baseline ranking | 49.79 | 25.13 | 46.63 | 86.87 | predicted | | 11 | arg-LED-base + diverse beams | 50.92 | 26.06 | 47.74 | 86.87 | | | | Ranking baselines | 12 | baseline ranking | 51.85 | 27.31 | 48.61 | 87.26 | oracle | | 13 | arg-LED-base + diverse beams | 52.74 | 27.93 | 49.50 | 87.46 | | | | 14 | arg-augmented-LED | 50.52 | 24.82 | 47.19 | 86.85 | predicted | | | 15 | arg-augmented-LED + diverse beams | 54.13 | 27.02 | 50.14 | 87.38 | | | | Our framework | 16 | arg-augmented-LED | 51.96 | 25.69 | 48.56 | 87.03 | oracle | | 17 | arg-augmented-LED + diverse | 54.30 | 27.00 | 50.80 | 87.35 | | | | beams | | | | | | | | Table 4: Full Extractive and Abstractive Results | Ranking metric | Model | R-1 | R-2 | R-L | BS | |-------------------------|---------|-------|-------|-------|------| | ROUGE-1 ROUGE-2 ROUGE-L | | | | | | Table 5: R1, R2, RL ranking scores with predicted argumentative markers ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? see section 8 (Limitations) after conclusion ✓ A2. Did you discuss any potential risks of your work? see section 9, Ethical Consideration after limitation section. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Yes, see Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Grammarly is used to help in checking grammar and writing style. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Yes, Please See Section 4 Models. ✓ B1. Did you cite the creators of artifacts you used? Please see section 3 and 4 dataset and models. ✓ B2. 
Did you discuss the license or terms for use and / or distribution of any artifacts? Please see footnote in section 1 on the license of the dataset used ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Yes, please refer to sections 1, 3, 4, and 5 discussing previous artifacts and how we use them. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Doesn't apply to our dataset owe used ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? section 3 and the appendix A. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Please refer to section 3 datasets for details ## C ✓ **Did You Run Computational Experiments?** Section 4 And 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We briefly discusses the computational infrastructure in the limitation and appendices. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Experimental details in the appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? results can be found in section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4, and 5 and appendix B D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. 
Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
wu-tu-2023-probabilistic
Probabilistic Transformer: A Probabilistic Dependency Model for Contextual Word Representation
https://aclanthology.org/2023.findings-acl.482
Syntactic structures used to play a vital role in natural language processing (NLP), but since the deep learning revolution, NLP has been gradually dominated by neural models that do not consider syntactic structures in their design. One vastly successful class of neural models is transformers. When used as an encoder, a transformer produces contextual representation of words in the input sentence. In this work, we propose a new model of contextual word representation, not from a neural perspective, but from a purely syntactic and probabilistic perspective. Specifically, we design a conditional random field that models discrete latent representations of all words in a sentence as well as dependency arcs between them; and we use mean field variational inference for approximate inference. Strikingly, we find that the computation graph of our model resembles transformers, with correspondences between dependencies and self-attention and between distributions over latent representations and contextual embeddings of words. Experiments show that our model performs competitively to transformers on small to medium sized datasets. We hope that our work could help bridge the gap between traditional syntactic and probabilistic approaches and cutting-edge neural approaches to NLP, and inspire more linguistically-principled neural approaches in the future.
# Probabilistic Transformer: A Probabilistic Dependency Model For Contextual Word Representation Haoyi Wu and **Kewei Tu**∗ School of Information Science and Technology, ShanghaiTech University Shanghai Engineering Research Center of Intelligent Vision and Imaging {wuhy1, tukw}@shanghaitech.edu.cn ## Abstract Syntactic structures used to play a vital role in natural language processing (NLP), but since the deep learning revolution, NLP has been gradually dominated by neural models that do not consider syntactic structures in their design. One vastly successful class of neural models is transformers. When used as an encoder, a transformer produces contextual representation of words in the input sentence. In this work, we propose a new model of contextual word representation, not from a neural perspective, but from a purely syntactic and probabilistic perspective. Specifically, we design a conditional random field that models discrete latent representations of all words in a sentence as well as dependency arcs between them; and we use mean field variational inference for approximate inference. Strikingly, we find that the computation graph of our model resembles transformers, with correspondences between dependencies and self-attention and between distributions over latent representations and contextual embeddings of words. Experiments show that our model performs competitively to transformers on small to medium sized datasets. We hope that our work could help bridge the gap between traditional syntactic and probabilistic approaches and cutting-edge neural approaches to NLP, and inspire more linguistically-principled neural approaches in the future.1 ## 1 Introduction Once upon a time, syntactic structures were deemed essential in natural language processing (NLP). Modeling and inference about syntactic structures was an indispensable component in many NLP systems. That has all changed since the deep learning revolution started a decade ago. Modern NLP predominantly employs various neural models, most of which do not consider syntactic structures in their design. One type of neural models that are particularly successful is transformers (Vaswani et al., 2017). Given an input text, a transformer produces a vector representation for each word that captures the meaning as well as other properties of the word in its context. Such contextual word representations can then be served into downstream neural networks for solving various NLP tasks. The power of transformers in producing high-quality contextual word representations is further unleashed with large-scale pretraining (Devlin et al., 2019; Liu et al., 2020). Nowadays, a vast majority of NLP models and systems are built on top of contextual word representations produced by some variants of pretrained transformers. Like most other neural models, transformers were developed based on human insight and trial and error, without explicit design for incorporating syntactic structures. Nevertheless, there is evidence that contextual word representations produced by pretrained transformers encode certain syntactic structures (Hewitt and Manning, 2019; Tenney et al., 2019) and attention heads in pretrained transformers may reflect syntactic dependencies (Clark et al., 2019; Htut et al., 2019; Ravishankar et al., 2021). Because of the heuristic nature of the transformer model design, exactly how transformers acquire such syntactic capability remains unclear. 
In this paper, we propose *probabilistic transformers*, a very different approach to deriving contextual word representations that is based on classic nonneural probabilistic modeling with innate syntactic components. Specifically, we design a conditional random field that models discrete latent representations of all words as well as a syntactic dependency structure of the input sentence, and we define a potential function which evaluates the compatibility of the latent representations of any pair of words connected by a dependency arc. We use mean field variational inference for approximate inference, producing a marginal distribution for each latent word representation, the probability vector of which can then be used as a contextual vector representation of the word. While we propose our model from a purely syntactic and probabilistic perspective that is unrelated to transformers, we show that there is a striking resemblance between the computation graph of the inference procedure of our model and that of a transformer, with our intermediate distributions over dependency heads corresponding to self-attention scores and our intermediate distributions over latent word representations corresponding to intermediate word embeddings in a transformer. In short, we start with a probabilistic syntactic model but reach the transformer! We empirically compare our model with transformers when trained with either masked language modeling or downstream tasks. Our experimental results show that our model performs competitively to transformers on small to medium sized datasets. We hope that probabilistic transformers, instead of being a replacement of transformers, could benefit the analysis of the syntactic capability of transformers and at the same time inspire novel extensions of transformers. Furthermore, we hope our work would promote future research of neural models that are linguistically more principled, theoretically more well-founded, and empirically no less powerful than existing models.

## 2 Probabilistic Transformers

We will first introduce the basic model, a conditional random field (CRF) as illustrated in Figure 1, then show the inference procedure, and finally introduce some variants to the basic model.

## 2.1 The CRF Model

Given a sentence (a sequence of words), denote n as the sequence length. For the i-th word, we define Zi as a discrete latent label that represents the syntactic (and possibly semantic) property of the word in the sentence (i.e., it is a contextual representation) with a label set of size d. Such a discrete representation deviates from the common practice of representing a word with a continuous vector, but it is sufficient at least for syntactic processing (Kitaev et al., 2022) and it greatly simplifies our probabilistic model. For the i-th word, we also define Hi ∈ {1, 2, · · · , n} representing the syntactic dependency head of the word. So the set of variables {Hi}^n_{i=1} specifies a dependency structure. We may also allow Hi to point to a dummy root node, which will be discussed in Section 2.3.5. We follow the head-selection paradigm of dependency parsing and do not enforce the tree constraint, which again simplifies our model design. Next, we define two types of potential functions.
For the i-th word wi, we define a unary potential function (corresponding to the unary factors in Figure 1) evaluating the compatibility of the word and its label Zi:

$$\phi_{u}(Z_{i})=\exp\left(\mathbf{S}_{w_{i},Z_{i}}\right)\tag{1}$$

where S ∈ R^{|V|×d} is a score matrix and |V| is the size of the vocabulary. For simplicity, we do not exploit any morphological or contextual features for computing the scores. For every pair of words wi and wj (i ≠ j), we define a ternary potential function (corresponding to the ternary factors in Figure 1) over Zi, Zj and Hi, which evaluates the compatibility between the labels of the two words if wj is the dependency head of wi:

$$\phi_{t}(H_{i},Z_{i},Z_{j})=\begin{cases}\exp\left(\mathbf{T}_{Z_{i},Z_{j}}\right)&H_{i}=j\\1&\text{otherwise}\end{cases}\tag{2}$$

where T ∈ R^{d×d} is a score matrix. Inspired by the multi-head structure in transformers, we allow multiple dependency structures for the same sentence, which may represent different flavors of dependencies. Each dependency structure resides in a different *channel* with its own dependency head variables and ternary potential functions. For the c-th channel, we denote the set of dependency head variables by {H^(c)_i}^n_{i=1} and the score matrix of the ternary potential function by T^(c). Let h denote the total number of channels. We may stack all the score matrices T^(c) for c = 1, · · · , h to form a score tensor T ∈ R^{d×d×h}. Note that all the channels share the same set of latent label variables {Zi}^n_{i=1}.

## 2.2 Inference

Following Wang and Tu (2020), we use Mean Field Variational Inference (MFVI) to perform approximate inference. Different from the previous work, however, we need to run inference over latent labels in addition to dependency heads. MFVI iteratively passes messages between random variables and computes an approximate posterior marginal distribution for each random variable (denoted by Q(·)). Let F^(t)_{ic} denote the message received by variable H^(c)_i at time step t from ternary factors, and G^(t)_i denote the message received by variable Zi at time step t from ternary factors. We have

$$\mathcal{F}_{ic}^{(t)}(j)=\sum_{a}\sum_{b}Q_{i}^{(t)}(a)\,Q_{j}^{(t)}(b)\,\mathbf{T}_{a,b}^{(c)}\tag{3}$$

$$\mathcal{G}_{i}^{(t)}(a)=\sum_{c}\sum_{j\neq i}\sum_{b}\left(Q_{ic}^{(t)}(j)\,Q_{j}^{(t)}(b)\,\mathbf{T}_{a,b}^{(c)}+Q_{jc}^{(t)}(i)\,Q_{j}^{(t)}(b)\,\mathbf{T}_{b,a}^{(c)}\right)\tag{4}$$

where

$$Q_{i}^{(t)}(a)\propto\exp\left(\mathbf{S}_{w_{i},a}+\mathcal{G}_{i}^{(t-1)}(a)\right)\tag{5}$$

$$Q_{ic}^{(t)}(j)\propto\exp\left(\mathcal{F}_{ic}^{(t-1)}(j)\right)\tag{6}$$

are the approximate marginal distributions at time step t, with Q^(t)_i(·) over Zi and Q^(t)_{ic}(·) over H^(c)_i. We initialize these distributions by

$$Q_{i}^{(0)}(a)\propto\exp\left(\mathbf{S}_{w_{i},a}\right)\tag{7}$$

$$Q_{ic}^{(0)}(j)\propto1\tag{8}$$

After a fixed number of T > 0 iterations, we obtain the final posterior marginal distribution Q^(T)_i(Zi) for i = 1, · · · , n. Resulting from interactions with all the words of the sentence, the distribution Q^(T)_i(Zi) incorporates information of not only the i-th word, but also its context. Therefore, we can treat the probability vector of this distribution as a contextual vector representation for the i-th word.
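The update equations above can be written compactly with einsum. The following is a minimal NumPy sketch of Equations 3-8 for the synchronous basic model; it is an illustration under that simplification, not the authors' implementation, and it omits the asynchronous update and the message weights introduced later in Section 2.3.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mfvi(word_ids, S, T, n_iters=3):
    """Mean field updates of Equations 3-8 (synchronous basic model).

    word_ids: (n,) int array of vocabulary indices
    S: (|V|, d) unary score matrix
    T: (d, d, h) ternary score tensor
    Returns the final marginals Q_z of shape (n, d).
    """
    n = len(word_ids)
    d, _, h = T.shape
    unary = S[word_ids]                        # (n, d), rows S_{w_i, .}
    Qz = softmax(unary)                        # Eq. 7
    mask = 1.0 - np.eye(n)                     # a word cannot head itself
    Qh = np.tile(mask / (n - 1), (h, 1, 1))    # Eq. 8: uniform over j != i

    for _ in range(n_iters):
        # Eq. 3: F[c, i, j] = sum_{a,b} Qz[i,a] T[a,b,c] Qz[j,b]
        F = np.einsum("ia,abc,jb->cij", Qz, T, Qz)
        # Eq. 4: messages to Z_i from both directions of each dependency
        G = (np.einsum("cij,jb,abc->ia", Qh, Qz, T)
             + np.einsum("cji,jb,bac->ia", Qh, Qz, T))
        # Eqs. 5 and 6 (synchronous version of the updates)
        Qz = softmax(unary + G)
        Qh = softmax(np.where(mask[None] > 0, F, -np.inf))
    return Qz

# Toy usage: vocabulary of 10 words, d = 4 labels, h = 2 channels.
rng = np.random.default_rng(0)
S, T = rng.normal(size=(10, 4)), rng.normal(size=(4, 4, 2))
print(mfvi(np.array([1, 5, 2, 7]), S, T).shape)  # (4, 4)
```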
In practice, we find that using unnormalized scores in log space as contextual word representations produces better results, i.e., we skip exponentiation and normalization when computing Q^(T)_i(Zi) using Equation 5 during the final iteration. Since all the computation during MFVI is fully differentiable, we can regard the corresponding computation graph as a recurrent or graph neural network parameterized with score matrix S and tensor T. We can use the contextual word representations for downstream tasks by connecting the network to any downstream task-specific network, and we can update the model parameters using any task-specific learning objective through gradient descent. This is exactly the same as how transformers are used.

## 2.3 Extensions And Variants

We introduce a few extensions and variants to the basic model that are empirically beneficial. Additional variants are discussed in Appendix B.

## 2.3.1 Distance

Similar to the case of transformers, our probabilistic model is insensitive to the word order of the input sentence. In order to capture the order information, we apply relative positional encoding to our model by using distance-sensitive ternary potential functions. Specifically, we use different ternary scores for different distances between words denoted by the two Z variables of the potential function. The ternary potential function in Equation 2 becomes:

$$\phi_{t}(H_{i}^{(c)},Z_{i},Z_{j})=\begin{cases}\exp\left(\mathbf{T}[f(i-j)]_{Z_{i},Z_{j}}^{(c)}\right)&H_{i}^{(c)}=j\\1&\text{otherwise}\end{cases}\tag{9}$$

where f is a clip function with threshold γ:

$$f(x)=\begin{cases}0&x<-\gamma\\x+\gamma+1&-\gamma\leq x<0\\x+\gamma&0<x\leq\gamma\\2\gamma+1&x>\gamma\end{cases}\tag{10}$$

Notice that x cannot be zero since the head of a word cannot be itself. We set γ = 3 by default.

## 2.3.2 Asynchronous Update

During inference of the basic model, we iteratively update all variables in a synchronous manner. This can be problematic. Consider the first iteration. The messages passed to Z variables from H variables do not contain meaningful information because the initial distributions over H are uniform. Consequently, after one iteration, distributions over all Z variables become almost identical. To fix this problem, we use the asynchronous update strategy by default in this work. For each iteration, we first update distributions over H variables, and then update distributions over Z variables based on the updated distributions over H variables. Formally, we rewrite Formula 6 as

$$Q_{ic}^{(t)}(j)\propto\exp\left(\mathcal{F}_{ic}^{(t)}(j)\right)$$

and eliminate Formula 8 because distributions over H variables no longer need initialization.

## 2.3.3 Message Weight

During inference, H variables have much fewer message sources than Z variables. This often pushes H variables towards being uniformly distributed. To balance the magnitude of the messages, we follow the Entropic Frank-Wolfe algorithm (Lê-Huu and Alahari, 2021), a generalization of MFVI, and introduce weights λ_Z > 0 and λ_H > 0 to Equation 5 and 6:

$$Q_{i}^{(t)}(a)\propto\exp\left(\frac{1}{\lambda_{Z}}\left(\mathbf{S}_{w_{i},a}+\mathcal{G}_{i}^{(t-1)}(a)\right)\right)\tag{11}$$

$$Q_{ic}^{(t)}(j)\propto\exp\left(\frac{1}{\lambda_{H}}\mathcal{F}_{ic}^{(t-1)}(j)\right)\tag{12}$$

We set λ_Z = 1 and λ_H = 1/d by default.2

## 2.3.4 Tensor Decomposition

Ternary score T is a tensor of shape d × d × h. Since d is usually set to several hundred, such a tensor leads to a huge number of parameters. To reduce the number of parameters, we apply the Kruskal form (which is closely related to tensor rank decomposition) to build the ternary score from smaller tensors.

$$\mathbf{T}_{a,b}^{(c)}=\sum_{l=1}^{r}\mathbf{U}_{a,l}\cdot\mathbf{V}_{b,l}\cdot\mathbf{W}_{c,l}\tag{13}$$

where U, V ∈ R^{d×r} and W ∈ R^{h×r}. Since the number of channels h is relatively small, we may also choose only to decompose the first two dimensions.

$$\mathbf{T}_{a,b}^{(c)}=\sum_{l=1}^{r}\mathbf{U}_{a,c,l}\cdot\mathbf{V}_{b,c,l}\tag{14}$$

where U, V ∈ R^{d×h×r}.

2 We choose these weights in a similar way to choosing the scaling factor in scaled dot-product attention of transformers. See more details in Appendix A.5.

## 2.3.5 Root Node

Dependency parsing assumes a dummy root node, which we may add to the CRF model. The root node is not associated with any word and instead can be seen as representing the entire sentence. Therefore, we assume that it has a different (and possibly larger) label set from words and hence requires a different ternary potential function. Specifically, we define Z_ROOT as a discrete latent label of the root node with a label set of size d_root. For i ∈ {1, 2, · · · , n}, c ∈ {1, 2, · · · , h}, we add a ternary potential function over Zi, H^(c)_i and Z_ROOT:

$$\phi_{t}(H_{i}^{(c)},Z_{i},Z_{ROOT})=\begin{cases}\exp\left(\mathbf{T}^{\prime(c)}_{Z_{i},Z_{ROOT}}\right)&H_{i}^{(c)}=ROOT\\1&\text{otherwise}\end{cases}$$

where T′ ∈ R^{d×d_root×h} is the root score tensor. During inference, we initialize Q^(0)(Z_ROOT) with a uniform distribution. After inference, we can regard the posterior marginal distribution of Z_ROOT as a sentence representation.
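Two of the extensions above are straightforward to transcribe directly. The sketch below checks the distance clip function of Equation 10 and the parameter savings of the decompositions in Equations 13 and 14; the shapes are toy values chosen only for illustration and this is not the authors' code.

```python
import numpy as np

# Equation 10: relative-distance clip function with threshold gamma = 3.
# x = i - j is never 0 because a word cannot be its own head.
def clip_distance(x: int, gamma: int = 3) -> int:
    if x < -gamma:
        return 0
    if x < 0:
        return x + gamma + 1
    if x <= gamma:
        return x + gamma
    return 2 * gamma + 1

assert [clip_distance(x) for x in (-5, -1, 1, 5)] == [0, 3, 4, 7]

# Equations 13 and 14: building the ternary score tensor T (d x d x h)
# from low-rank factors instead of storing it densely.
d, h, r = 384, 8, 64
rng = np.random.default_rng(0)

# Eq. 13 (Kruskal form): T[a, b, c] = sum_l U[a, l] * V[b, l] * W[c, l]
U, V, W = (rng.normal(size=(d, r)), rng.normal(size=(d, r)),
           rng.normal(size=(h, r)))
T_kruskal = np.einsum("al,bl,cl->abc", U, V, W)

# Eq. 14 (decompose only the first two dims): T[a, b, c] = sum_l U2[a, c, l] * V2[b, c, l]
U2, V2 = rng.normal(size=(d, h, r)), rng.normal(size=(d, h, r))
T_channel = np.einsum("acl,bcl->abc", U2, V2)

# Parameter counts: dense tensor vs. the two factorizations.
print(d * d * h, U.size + V.size + W.size, U2.size + V2.size)
# 1179648, 49664, 393216: both factorizations are far smaller than the dense T
```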
We can rewrite Equation 3 and 4 as

$$\mathcal{F}_{c}^{(t)}=Q_{z}^{(t)}\mathbf{T}^{(c)}Q_{z}^{(t)T}\tag{15}$$

$$\mathcal{G}^{(t)}=\sum_{c}\left(Q_{h,c}^{(t)}Q_{z}^{(t)}\mathbf{T}^{(c)T}+Q_{h,c}^{(t)T}Q_{z}^{(t)}\mathbf{T}^{(c)}\right)\tag{16}$$

and rewrite Formula 5 and 6 as

$$Q_{z}^{(t)}=\sigma\left(\mathbf{S}+\mathcal{G}^{(t-1)}\right)\tag{17}$$

$$Q_{h,c}^{(t)}=\sigma\left(\mathcal{F}_{c}^{(t-1)}\right)\tag{18}$$

where σ is the softmax function. We still set λZ to its default value 1 but regard λH as a hyperparameter. With asynchronous update, Equation 18 becomes:

$$Q_{h,c}^{(t)}=\sigma\left(\frac{\mathcal{F}_{c}^{(t)}}{\lambda_{H}}\right)\tag{19}$$

We assume that $\mathbf{T}^{(c)}$ is symmetric for c = 1, · · · , h. This is the only assumption that we make in this section beyond the original definition from the previous section. Symmetric score matrices indicate that the ternary factors are insensitive to the head-child order, which is related to undirected dependency parsing (Sleator and Temperley, 1993). If $\mathbf{T}^{(c)}$ is symmetric, then $Q_{h,c}^{(t)}$ is also symmetric based on Formula 15 and 19. Thus, we can simplify Equation 16 to

$$\mathcal{G}^{(t)}=2\sum_{c}Q_{h,c}^{(t)}Q_{z}^{(t)}\mathbf{T}^{(c)T}\tag{20}$$

Suppose we decompose the ternary score tensor into two tensors $\mathbf{U},\mathbf{V}\in\mathbb{R}^{d\times h\times r}$ according to Equation 14, which can be rewritten as

$$\mathbf{T}^{(c)}=\mathbf{U}^{(c)}\mathbf{V}^{(c)T}\tag{21}$$

where $\mathbf{U}^{(c)},\mathbf{V}^{(c)}\in\mathbb{R}^{d\times r}$ are the c-th channel of tensor U and V respectively. Substituting 21 into 15 and 20, we have

$$\mathcal{F}_{c}^{(t)}=Q_{z}^{(t)}\mathbf{U}^{(c)}\mathbf{V}^{(c)T}Q_{z}^{(t)T}\tag{22}$$

$$\mathcal{G}^{(t)}=2\sum_{c}Q_{h,c}^{(t)}Q_{z}^{(t)}\mathbf{V}^{(c)}\mathbf{U}^{(c)T}\tag{23}$$

We define

$$Q_{c}=Q_{z}^{(t-1)}\mathbf{U}^{(c)}\tag{24}$$

$$K_{c}=V_{c}=Q_{z}^{(t-1)}\mathbf{V}^{(c)}\tag{25}$$

For time step t − 1, we could rewrite Formula 22 and 23 as

$$\mathcal{F}_{c}^{(t-1)}=Q_{c}K_{c}^{T}\tag{26}$$

$$\mathcal{G}^{(t-1)}=2\sum_{c}Q_{h,c}^{(t-1)}V_{c}\mathbf{U}^{(c)T}\tag{27}$$

Applying Equation 27, 19 and 26 to 17, we have

$$Q_{z}^{(t)}=\sigma\left(\mathbf{S}+2\sum_{c}\mathrm{channel}_{c}\,\mathbf{U}^{(c)T}\right)\tag{28}$$

where

$$\mathrm{channel}_{c}=\sigma\left(\frac{Q_{c}K_{c}^{T}}{\lambda_{H}}\right)V_{c}\tag{29}$$

We call the computation of $\mathrm{channel}_c$ a *single-channel update* for channel c. Now we have a tensorized formulation of the computation in probabilistic transformers and we are ready for its comparison with transformers at three different levels.

## 3.2 Single-Channel Update Vs. Scaled Dot-Product Attention

Scaled dot-product attention in transformers is formulated as:

$$\mathrm{Attention}(Q,K,V)=\sigma\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V$$

As we can see, our single-channel update in Equation 29 is almost identical to scaled dot-product attention in transformers. The only difference is that the diagonal of the tensor $Q_{c}K_{c}^{T}$ is zero in our model because the head of a word cannot be itself.
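The correspondence can be seen directly in code. Below is a short NumPy sketch, with function names of our own choosing rather than anything from the authors' code, that places a standard scaled dot-product attention next to the single-channel update of Equations 24 to 29; apart from the masked diagonal and the 1/λH scaling, the two are line for line the same.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Standard transformer attention: softmax(Q K^T / sqrt(d_k)) V
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

def single_channel_update(Qz_prev, U_c, V_c, lam_H):
    # Equations 24-29: queries come from U^(c); keys and values share V^(c).
    q = Qz_prev @ U_c                   # Eq. 24
    k = v = Qz_prev @ V_c               # Eq. 25 (tied key/value projection)
    scores = q @ k.T / lam_H            # Eq. 26, scaled by 1/lambda_H instead of 1/sqrt(d_k)
    np.fill_diagonal(scores, -np.inf)   # the one structural difference: no self-heading
    return softmax(scores) @ v          # Eq. 29: channel_c

# The Z update of Eq. 28 then mixes each channel back with U^(c) and adds the unary scores:
#   Qz_new = softmax(S + 2 * sum_c single_channel_update(Qz, U[c], V[c], lam_H) @ U[c].T)
```

The tied projections in this sketch are exactly the parameter sharing discussed in the next subsection.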
## 3.3 Multi-Channel Update vs. Multi-Head Attention

Multi-head attention in transformers is formulated as:

$$\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}\left(\mathrm{head}_{1},\ldots,\mathrm{head}_{h}\right)W^{O}$$

where

$$\mathrm{head}_{i}=\mathrm{Attention}\left(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}\right)$$

It is equivalent to

$$\mathrm{MultiHead}(Q,K,V)=\sum_{i}\mathrm{head}_{i}(W_{i}^{O})^{T}$$

where $W^{O}\equiv\mathrm{Concat}(W_{1}^{O},\ldots,W_{h}^{O})$ and $W_{i}^{Q},W_{i}^{K},W_{i}^{V},W_{i}^{O}\in\mathbb{R}^{d\times r}$. Our multi-channel update formula (the second term within the softmax function in Equation 28) is similar to the multi-head attention in transformers, as shown in Figure 2.

![5_image_0.png](5_image_0.png)

![5_image_1.png](5_image_1.png)

The main difference is that probabilistic transformers use the same parameters for WK and WV (both are V, shown in green color in Figure 2b) and for WQ and WO (both are U, shown in orange color in Figure 2b). Recall that U and V are obtained from matrix decomposition (Equation 14). Therefore, the correspondence between U, V and WQ, WK, WO, WV in transformers suggests that the latter can also be seen as derived from tensor decomposition. Previous work on transformers has the same findings (Elhage et al., 2021).

## 3.4 Full Model Comparison

Figure 3 compares the full computation graphs of the two models, which have a similar overall structure that repeats a module recurrently until outputting contextual word representations. Within the module, we have also established the correspondence between multi-channel update and multi-head attention. On the other hand, there are a few interesting differences. First, our model does not have a feed-forward structure as in a transformer. However, we do propose a variant of our model that contains global variables representing topics (Appendix B.3), which may have similar functionality to the feed-forward structure. Second, our model does not have residual connections or layer norms. Instead, it adds the initial distributions (unary scores) to the updated message at each iteration. This may replace the functionality of residual connections and may even make more sense when the downstream task strongly depends on the original word information. Third, we have an additional softmax in each iteration. Note that we do softmax before the first iteration (Equation 7) and also at the end of each iteration (Equation 28), but bypass it in the last iteration when producing the output word representations, so our model could be equivalently formulated as doing softmax before each iteration, which we show in Figure 3c. Doing softmax in this way is similar to the layer norm in pre-LN transformers (Xiong et al., 2020) (Figure 3b). Finally, our model shares parameters in all iterations. This is similar to some variants of transformers that share parameters between layers, such as Universal Transformer (Dehghani et al., 2019) and ALBERT (Lan et al., 2019).

One consequence of these differences is that probabilistic transformers have much fewer parameters than transformers with the same number of layers, heads and embedding dimensions, because of shared parameters between iterations, absence of a feed-forward structure, and tied parameter matrices in multi-channel updates.

## 4 Experiments

We empirically compare probabilistic transformers with transformers on three tasks: masked language modeling, sequence labeling, and text classification. For each task, we use two different datasets. We also perform a syntactic test to evaluate the compositional generalization ability of our model.

![6_image_0.png](6_image_0.png)

## 4.1 Tasks And Datasets

Here we briefly introduce our tasks and datasets. A detailed description is presented in Appendix D.

Masked Language Modeling (MLM).
We perform MLM tasks on two corpora: the Penn TreeBank (PTB) (Marcus et al., 1993) and Brown Laboratory for Linguistic Information Processing (BLLIP) (Charniak et al., 2000). Following Shen et al. (2022), we randomly replace words with a mask token <mask> at a rate of 30%. The performance of MLM is evaluated by measuring perplexity (lower is better) on masked words. We project the final word representation of each mask token to the vocabulary. For transformers, we tie the projection parameters to the initial word embeddings. We find that this trick improves the performance of transformers. Sequence Labeling. For sequence labeling tasks, we perform part-of-speech (POS) tagging on two datasets: the Penn TreeBank (PTB) (Marcus et al., 1993) and the Universal Dependencies (UD) (De Marneffe et al., 2021). We also perform named entity recognition (NER) on CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003). We directly project the final word representation of each word to the target tag set. For POS tagging, we evaluate the results by the accuracy of wordlevel predictions. For NER, we evaluate the results by measuring the F1 score of named entities. Text Classification. We use the Stanford Sentiment Treebank (SST) (Socher et al., 2013) as the dataset. It has two variants: binary classification (SST-2) and fine-grained classification (SST-5). For transformers, we add a <CLS> token at the front of the sentence and then project its representation to the tag set. For our model, we use the variant with a root node introduced in Section 2.3.5 and project the representation of the root node to the tag set. Syntactic Test. To evaluate the compositional generalization abilities of our model, we perform a syntactic test on the COGS dataset (Kim and Linzen, 2020). We follow the settings in Ontanón et al. (2021), who cast the task as a sequence labeling task. As in sequence labeling, we project word representations to tag sets. If all words in a sentence are correctly predicted, the sentence prediction will be counted as correct. We evaluate the results by the sentence-level accuracy of the predictions. ## 4.2 Settings We tune transformers and our model separately for each task except the syntactic test. For the syntactic test, we find that both transformers and our model easily reach 100% accuracy on the validation set. This observation is consistent with Ontanón et al. (2021). Therefore, instead of tuning, we use the best-performed setting of transformers in Ontanón et al. (2021) for our experiments. The hyperparameters of our model are determined by their counter- | Task | Dataset | Metric | Transformer | Probabilistic Transformer | |----------------|---------------|-------------------------|---------------|-----------------------------| | MLM | PTB | Perplexity | 58.43 ± 0.58 | 62.86 ± 0.40 | | BLLIP | 101.91 ± 1.40 | 123.18 ± 1.50 | | | | POS | PTB | Accuracy | 96.44 ± 0.04 | 96.29 ± 0.03 | | UD | 91.17 ± 0.11 | 90.96 ± 0.10 | | | | NER | CoNLL-2003 | F1 | 74.02 ± 1.11 | 75.47 ± 0.35 | | CLS | SST-2 | Accuracy | 82.51 ± 0.26 | 82.04 ± 0.88 | | SST-5 | 40.13 ± 1.09 | 42.77 ± 1.18 | | | | Syntactic Test | COGS | Sentence-level Accuracy | 82.05 ± 2.18 | 84.60 ± 2.06 | parts of transformers based on the correspondence discussed in Section 3. For our model, we integrate all the variants mentioned in Section 2.3 except the root node variant, which we only use for text classification tasks. We tune the tensor decomposition strategy on different tasks. 
For MLM tasks, we add a small L2 regularization term to the ternary scores in our model, which we experimentally find beneficial. We optimize both models using the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.999. ## 4.3 Results We report the average and standard deviation results of 5 random runs in Table 1. It shows that our model has a competitive performance compared with transformers. In most tasks, probabilistic transformers perform competitively to transformers. It is worth noting that in these experiments, probabilistic transformers have much fewer parameters than transformers. For most tasks, the number of parameters of our best model is about one-fifth to one-half of that of the best transformer. We also conduct case studies of the dependency structures inferred by our model after training on downstream tasks. Similar to the case of selfattentions in transformers, the inferred dependency structures are only partially consistent with human intuition. See Appendix F for details. ## 5 Related Work There have been several studies trying to incorporate syntactic structures to transformers. Strubell et al. (2018) force one attention head to attend to predicted syntactic governors of input tokens. Wang et al. (2019); Ahmad et al. (2021) try to integrate constituency or dependency structures into transformers. Shen et al. (2021) propose a dependency-constrained self-attention mechanism to induce dependency and constituency structures. Our work deviates from all these previous studies in that we start from scratch with probabilistic modeling of word representations and dependencies, but obtain a model that is strikingly similar to transformers. ## 6 Discussion It is worth noting that in this work, our primary goal is not to propose and promote a new model to compete with transformers. Instead, it is our hope that our work could benefit the analysis and extension of transformers, as well as inspire future research of transformer-style models that are linguistically more principled, theoretically more well-founded, and empirically no less powerful than existing models. In the long run, we aim to bridge the gap between traditional statistical NLP and modern neural NLP, so that valuable ideas, techniques and insights developed over the past three decades in statistical NLP could find their place in modern NLP research and engineering. The datasets used in our experiments have small to medium sizes (around 10k to 60k training sentences). Our preliminary experiments with MLM on larger data show that our models significantly underperform transformers, which suggests that our model may not be as scalable as transformers. One possible cause is the absence of a feed-forward structure in our model. Recent researches show that the feed-forward layers might serve as an important part of transformers (Dong et al., 2021). Further research is needed to analyze this problem. Our model can be extended in a few directions. Instead of discrete labels, we may assume Z variables representing discrete vectors or even continuous vectors, which may lead to more complicated inference. We may model dependency labels by pairing every H variable with a dependency label variable. While we focus on contextual word representation (i.e., encoding) in this paper, we may extend our probabilistic model to include a decoder. Considering the similarity between our model and transformers, we speculate that some of these extensions may be used to inspire extensions of transformers as well. 
## 7 Conclusion We present probabilistic transformers, a type of syntactic-aware probabilistic models for contextual word representation. A probabilistic transformer acquires discrete latent representations of all words in the input sentence by modeling a syntactic dependency structure of the input sentence. We use MFVI for approximate inference and find a striking resemblance between the computation graph of the inference procedure of our model and that of a transformer. Our experimental results demonstrate that our model performs competitively to transformers on small to medium sized datasets. ## Limitations Though we have found a tight connection between probabilistic transformers and transformers in Section 3, this does not mean that our model can be directly used to interpret or modify transformers. For instance, in Section 3.3, we find that WK and WVin transformers both correspond to U in probabilistic transformers. However, if we tie WK and WVin transformers, then we may observe a performance drop on some downstream tasks. The performance of probabilistic transformers lags behind transformers on large datasets (>100k), which suggests that our model may not be as scalable as transformers. We have discussed this in Section 6. The way of positional encoding for probabilistic transformers leads to slower training and inference speed. On masked language modeling tasks, our model is about 3 times slower than transformers with either absolute or relative positional encoding, though it has much fewer parameters than transformers. ## Acknowledgements This work was supported by the National Natural Science Foundation of China (61976139). ## References Wasi Uddin Ahmad, Nanyun Peng, and Kai-Wei Chang. 2021. Gate: graph attention transformer encoder for cross-lingual relation and event extraction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12462–12470. Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-theart NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54–59, Minneapolis, Minnesota. Association for Computational Linguistics. Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, and Mark Johnson. 2000. Bllip 1987–89 wsj corpus release 1, ldc no. LDC2000T43. Linguistic Data Consortium. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In *Proceedings of the 2019 ACL Workshop BlackboxNLP:* Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics. Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. *arXiv preprint arXiv:1803.05449*. Marie-Catherine De Marneffe, Christopher D Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal dependencies. *Computational linguistics*, 47(2):255– 308. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. 2019. Universal transformers. In *International Conference on Learning Representations*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Marco Dinarelli and Loïc Grobol. 2019. Seq2biseq: Bidirectional output-wise recurrent neural networks for sequence modelling. *arXiv preprint* arXiv:1904.04733. Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. 2021. Attention is not all you need: pure attention loses rank doubly exponentially with depth. In *Proceedings of the 38th International Conference* on Machine Learning, volume 139 of *Proceedings* of Machine Learning Research, pages 2793–2803. PMLR. N Elhage, N Nanda, C Olsson, T Henighan, N Joseph, B Mann, A Askell, Y Bai, A Chen, T Conerly, et al. 2021. A mathematical framework for transformer circuits. *Transformer Circuits Thread*. Mor Geva, Avi Caciularu, Kevin Ro Wang, and Yoav Goldberg. 2022. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. *arXiv preprint arXiv:2203.14680*. Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are keyvalue memories. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5484–5495, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ruining He, Anirudh Ravula, Bhargav Kanagal, and Joshua Ainslie. 2021. RealFormer: Transformer likes residual attention. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 929–943, Online. Association for Computational Linguistics. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics. Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R Bowman. 2019. Do attention heads in bert track syntactic dependencies? arXiv preprint arXiv:1911.12246. Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 1725–1744, Online. Association for Computational Linguistics. Najoung Kim and Tal Linzen. 2020. COGS: A compositional generalization challenge based on semantic interpretation. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 9087–9105, Online. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Nikita Kitaev, Thomas Lu, and Dan Klein. 2022. Learned incremental representations for parsing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3086–3095, Dublin, Ireland. Association for Computational Linguistics. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. In *International Conference on Learning Representations*, pages 1–16. 
Ð Khuê Lê-Huu and Karteek Alahari. 2021. Regularized frank-wolfe for dense crfs: Generalizing mean field and beyond. *Advances in Neural Information* Processing Systems, 34:1453–1467. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Roberta: A robustly optimized bert pretraining approach. In *International Conference on Learning* Representations, pages 1–15. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. *Comput. Linguist.*, 19(2):313–330. Tomáš Mikolov et al. 2012. Statistical language models based on neural networks. Presentation at Google, Mountain View, 2nd April, 80:26. Santiago Ontanón, Joshua Ainslie, Vaclav Cvicek, and Zachary Fisher. 2021. Making transformers solve compositional tasks. *arXiv preprint* arXiv:2108.04378. Vinit Ravishankar, Artur Kulmizev, Mostafa Abdou, Anders Søgaard, and Joakim Nivre. 2021. Attention can reflect syntactic structure (if you let it). In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3031–3045, Online. Association for Computational Linguistics. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In *Proceedings of the 2018 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, New Orleans, Louisiana. Association for Computational Linguistics. Yikang Shen, Shawn Tan, Alessandro Sordoni, Peng Li, Jie Zhou, and Aaron Courville. 2022. Unsupervised dependency graph network. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4767–4784, Dublin, Ireland. Association for Computational Linguistics. Yikang Shen, Yi Tay, Che Zheng, Dara Bahri, Donald Metzler, and Aaron Courville. 2021. StructFormer: Joint unsupervised induction of dependency and constituency structure from masked language modeling. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7196–7209, Online. Association for Computational Linguistics. Solomon Eyal Shimony. 1994. Finding maps for belief networks is np-hard. *Artificial intelligence*, 68(2):399–410. Daniel D. Sleator and Davy Temperley. 1993. Parsing English with a link grammar. In *Proceedings of the* Third International Workshop on Parsing Technologies, pages 277–292, Tilburg, Netherlands and Durbuy, Belgium. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguisticallyinformed self-attention for semantic role labeling. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 5027–5038, Brussels, Belgium. Association for Computational Linguistics. 
Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Herve Jegou, and Armand Joulin. 2019. Augmenting self-attention with persistent memory. *arXiv* preprint arXiv:1907.01470. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142– 147. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Xinyu Wang and Kewei Tu. 2020. Second-order neural dependency parsing with message passing and end-to-end training. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 93–99, Suzhou, China. Association for Computational Linguistics. Yaushian Wang, Hung-Yi Lee, and Yun-Nung Chen. 2019. Tree transformer: Integrating tree structures into self-attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 1061–1070, Hong Kong, China. Association for Computational Linguistics. Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. 2020. On layer normalization in the transformer architecture. In *International Conference on Machine Learning*, pages 10524–10533. PMLR. ## A Extended Entropic Frank-Wolfe In Section 2.3.3, we add message weights to the update function of the posterior marginal distributions. It follows an extension of the Entropic Frank-Wolfe algorithm (Lê-Huu and Alahari, 2021), which is a generalization of MFVI. Below we briefly introduce the algorithm and our extension following most of the notations in their paper. ## A.1 Entropic Frank-Wolfe Suppose we want to minimize a continuous differentiable energy function E(·). Vanilla Frank-Wolfe solves the problem minx∈X E(x) by starting from a feasible x (0) ∈ X at time step 0, and iterating the following steps: $$\mathbf{p}^{(t)}\in{\underset{\mathbf{p}\in{\mathcal{X}}}{\operatorname{argmin}}}\left\langle\nabla E\left(\mathbf{x}^{(t)}\right),\mathbf{p}\right\rangle$$ $$\mathbf{x}^{(t+1)}=\mathbf{x}^{(t)}+\alpha_{t}\left(\mathbf{p}^{(t)}-\mathbf{x}^{(t)}\right)$$ where αt ∈ [0, 1] follows some stepsize scheme, X is the value range of x, and here we let x ∈ R n×d be the concatenation of the distributions over the label set of all variables in CRF. Regularized Frank-Wolfe (Lê-Huu and Alahari, 2021) adds a regularization term r(·) to the objective. It solves the new objective E(x) + r(x) by iterating $$\begin{array}{c}{{\mathbf{p}^{(t)}\in\operatorname{argmin}\left\{\left\langle\nabla E\left(\mathbf{x}^{(t)}\right),\mathbf{p}\right\rangle+r(\mathbf{p})\right\}}}\\ {{\mathbf{p}\in\mathcal{X}}}\end{array}$$ It has been proved that regularized Frank-Wolfe achieves a sublinear rate of convergence O(1/ √t) for suitable stepsize schemes. 
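For readers who prefer code, the following is a minimal NumPy sketch of the vanilla Frank-Wolfe iteration just described, specialized to a feasible set that is a product of probability simplices (one row of x per CRF variable); the step-size scheme and the function names are our own illustrative choices. The entropic variant introduced next simply replaces the vertex-finding step with a closed-form softmax (Equation 30).

```python
import numpy as np

def vanilla_frank_wolfe(grad_E, x0, num_steps=50):
    """Vanilla Frank-Wolfe over a product of probability simplices.

    grad_E: callable returning dE/dx with the same (num_vars, num_labels) shape as x.
    x0:     feasible starting point (each row a distribution).
    """
    x = x0.copy()
    for t in range(num_steps):
        g = grad_E(x)
        # Linear minimization over each simplex: a vertex that puts all mass
        # on the label with the smallest gradient entry.
        p = np.zeros_like(x)
        p[np.arange(x.shape[0]), g.argmin(axis=1)] = 1.0
        alpha = 2.0 / (t + 2.0)      # one common step-size scheme
        x = x + alpha * (p - x)
    return x
```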
Entropic Frank-Wolfe is a special case of regularized Frank-Wolfe, which sets the regularization term as an entropy function r(x) = −λH(x), 7623 where H(x) = −Pi∈V Ps∈S xis log xis, S is the label set of the variables, V is the set of indices of the variables. Entropy Frank-Wolfe has a closedform solution for the update process $$\mathbf{p}^{(t)}=\operatorname*{argmin}_{\mathbf{p}\in\mathcal{X}}\left\{\left\langle\nabla E\left(\mathbf{x}^{(t)}\right),\mathbf{p}\right\rangle-\lambda H(\mathbf{p})\right\}$$ $$=\operatorname*{softmax}\left(-\frac{1}{\lambda}\left(\nabla E\left(\mathbf{x}^{(t)}\right)\right)\right)\quad\forall t\geq0\tag{30}$$ When $\lambda=1$ and $\alpha_{t}=1,\forall t\geq0$, it is the same as When λ = 1 and αt = 1, ∀t ≥ 0, it is the same as the mean field algorithm. ## A.2 Extended Entropic Frank-Wolfe We extend the Entropic Frank-Wolfe algorithm by using a more general regularization term $$r(\mathbf{x})=-\sum_{i\in{\mathcal{V}}}\lambda_{i}H(\mathbf{x}_{i})$$ , where λi > 0 is the regularization weight of the i-th variable and H(xi) = −Ps∈S xis log xis is the entropy of xi over the probability simplex ∆ = x ∈ R d: x ≥ 0, 1⊤x = 1 . It allows us to assign different regularization weights for different variables. We claim that the update function could be written as $$\mathbf{p}^{(t)}=\operatorname*{argmin}_{\mathbf{p}\in\mathcal{X}}\left\{\left\langle\nabla E\left(\mathbf{x}^{(t)}\right),\mathbf{p}\right\rangle-\lambda_{i}H(\mathbf{p}_{i})\right\}\tag{31}$$ $$=\operatorname*{softmax}\left(\mathbf{R}\right)\quad\forall t\geq0$$, where $\mathbf{R}\in\mathbb{R}^{nd}$ and , where R ∈ R $$\mathbf{R}_{i}=-{\frac{1}{\lambda_{i}}}\left(\nabla E\left(\mathbf{x}_{i}^{(t)}\right)\right)\quad\forall i\in{\mathcal{V}}$$ This extension is still a special case of the regularized Frank-Wolfe algorithm. As a result, it inherits all the convergence properties from the regularized Frank-Wolfe mentioned in the previous section. On the other hand, it is also an extension of MFVI, which allows adding a message weight to each variable during inference. ## A.3 A Proof For Extended Entropic Frank-Wolfe We give a simple proof to the close-form solution of extended Entropic Frank-Wolfe in Equation 31. Since the optimization could reduce to n independent subproblems over each i ∈ V, We only need to give the closed-form solution to each subproblem: Lemma 1. *For a given vector* c ∈ R d, λ > 0, the optimal solution z∗to $$\operatorname*{min}_{{\bf z}\in\Delta}\left\{\langle{\bf c},{\bf z}\rangle+\lambda\sum_{s=1}^{d}z_{s}\log z_{s}\right\}$$ is z∗ = softmax(− 1 λ c), where ∆ is the probability simplex x ∈ R d: x ≥ 0, 1⊤x = 1 . Proof. We can rewrite the problem as $$\begin{array}{r l}{\operatorname*{min}_{\mathbf{z}}}&{{}\langle\mathbf{c},\mathbf{z}\rangle+\lambda\sum_{s=1}^{d}z_{s}\log z_{s}}\\ {s.t.}&{{}\mathbf{1}^{\top}\mathbf{z}}&{{}=1,}\\ {\mathbf{-z}}&{{}\leq\mathbf{0},}\end{array}$$ The Lagrangian of the above problem is given by $$L(\mathbf{z},\boldsymbol{\mu},\nu)=\langle\mathbf{c},\mathbf{z}\rangle+\lambda\sum_{s=1}^{d}z_{s}\log z_{s}$$ $$+\boldsymbol{\mu}^{\top}(-\mathbf{z})+\nu\left(\mathbf{1}^{\top}\mathbf{z}-1\right)$$ $$=-\nu+\sum_{s=1}^{d}(c_{s}z_{s}+\lambda z_{s}\log z_{s})$$ $$-\mu_{s}z_{s}+\nu z_{s})$$ where µ = (µ1, µ2*, . . . , µ*d) ≥ 0 and ν ∈ R are the Lagrange multipliers. Since the given problem is convex and there exists z ∈ R dsuch that 1⊤z = 1 and z > 0, the Slater's constraint qualification holds. 
Thus, it suffices to solve the following Karush-Kuhn-Tucker (KKT) system to obtain the optimal solution: $c_{s}+\lambda\log z_{s}+1-\mu_{s}+\nu=0\quad\forall1\leq s\leq d$, $\mathbf{1}^{\top}\mathbf{z}=1$, $\mathbf{z}\geq\mathbf{0}$, $\mu\geq\mathbf{0}$, $\mu_{s}z_{s}=0\quad\forall1\leq s\leq d$. The first equation implies ∀1 ≤ s ≤ *d, z*s > 0, and thus in combination with the last, we obtain ∀1 ≤ s ≤ *d, µ*s = 0. Therefore, the first equation becomes $$c_{s}+\lambda\log z_{s}+1+\nu=0$$ ∀1 ≤ s ≤ d. Rewrite the equation as $$z_{s}=\exp\left({\frac{-1-\nu}{\lambda}}\right)\exp\left(-{\frac{1}{\lambda}}c_{s}\right)$$ ∀1 ≤ s ≤ d. Summing up this result for all s, and taking into account the second equation, we have $$\sum_{s=1}^d\exp\left(\frac{-1-\nu}{\lambda}\right)\exp\left(-\frac{1}{\lambda}c_s\right)=1$$ That is, $$\exp\left(\frac{-1-\nu}{\lambda}\right)=\frac{1}{\sum_{s=1}^d\exp\left(-\frac{1}{\lambda}c_s\right)}$$ Combine these two formulas, we have $$z_{s}={\frac{\exp\left(-{\frac{1}{\lambda}}c_{s}\right)}{\sum_{t=1}^{d}\exp\left(-{\frac{1}{\lambda}}c_{t}\right)}}$$ $\sum_{t=1}^{\infty}\exp\left(-\frac{1}{\lambda}\mathbf{c}\right)$ $\forall1\leq s\leq d$. In other words, $\mathbf{z}=\text{softmax}(-\frac{1}{\lambda}\mathbf{c})$. ## A.4 Inference In Crf In this work, we apply the extended Entropic Frank-Wolfe to do inference in the CRF. Let s = (Z1, · · · , Zn, H(1) 1, · · · , H(1) n , H(2) 1, · · · , H(h) n ) denote an assignment to all the random variables. Our CRF encodes the joint distribution $$p(\mathbf{s})={\frac{1}{Z}}\prod_{i}\phi_{u}(Z_{i})\prod_{c}\prod_{i}\prod_{j\neq i}\phi_{t}(H_{i}^{(c)},Z_{i},Z_{j})$$ where Z is a normalization factor. The objective is to find an assignment s that maximizes the joint distribution p(s). To express in the form of an energy function, let p(s) = 1Z exp(−e(s)), we have $$e(\mathbf{s})=-\sum_{i}\mathbf{S}_{w_{i},Z_{i}}-\sum_{c}\sum_{i}\sum_{j\neq i}\mathbb{1}_{H_{i}=j}\mathbf{T}_{Z_{i},Z_{j}}^{(c)}$$ where 1Hi=j is an indicator function, which is equal to 1 if Hi = j and is equal to 0 otherwise. The objective could now be expressed as minimizing the energy function e(s). In general, the problem of CRF inference is NPHard (Shimony, 1994). In MFVI, we solve the continuous relaxation of the CRF problem instead. Let X be the simplex. That is, we allow a marginal distribution for each random variable. As in Section 2.2, let Qi(·) be the approximate marginal distribution over Zi and Qic(·) be the approximate marginal distribution over H (c) i. The energy function is then $$\begin{array}{l}{{E(Q_{*})=-\sum_{i}\sum_{a}Q_{i}(a)\mathbf{S}_{w_{i},a}}}\\ {{-\sum_{c}\sum_{i}\sum_{j\neq i}\sum_{a}\sum_{b}Q_{i}(a)Q_{j}(b)Q_{i c}(j)\mathbf{T}_{a,b}^{(c)}}}\\ {{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\gamma}}\end{array}$$ Then we have $$\begin{array}{c}{{\frac{\partial E}{\partial Q_{i}(a)}=-\mathbf{S}_{w_{i},a}-\sum_{c}\sum_{j\neq i}\sum_{b}}}\\ {{\qquad\left(Q_{j}(b)Q_{i c}(j)\mathbf{T}_{a,b}^{(c)}+Q_{j}(b)Q_{j c}(i)\mathbf{T}_{b,a}^{(c)}\right)}}\\ {{\frac{\partial E}{\partial Q_{i c}(j)}=-\sum_{a}\sum_{b}Q_{i}(a)Q_{j}(b)\mathbf{T}_{a,b}^{(c)}}}\end{array}$$ In MFVI, the update for each distribution is the softmax of the derivative (let λ = 1 and αt = 1, ∀t ≥ 0 in Equation 30). 
That is, $$\begin{array}{c}{{Q_{i}^{(t)}(a)\propto\exp\left(-\frac{\partial E^{(t-1)}}{\partial Q_{i}^{(t-1)}(a)}\right)}}\\ {{Q_{i c}^{(t)}(j)\propto\exp\left(-\frac{\partial E^{(t-1)}}{\partial Q_{i c}^{(t-1)}(j)}\right)}}\end{array}$$ Together with Equation 3 and 4, we have $$\begin{array}{l}{{\frac{\partial E^{(t-1)}}{\partial Q_{i}^{(t-1)}(a)}=-\mathbf{S}_{w_{i},a}-\mathcal{G}_{i}^{(t-1)}(a)}}\\ {{\frac{\partial E^{(t-1)}}{\partial Q_{i c}^{(t-1)}(j)}=-\mathcal{F}_{i c}^{(t-1)}(j)}}\end{array}$$ , which directly leads us to Formula 5 and 6. In the extended Entropic Frank-Wolfe, the update for each distribution is the regularized softmax of the derivative (Equation 31). That is, $$Q_{i}^{(t)}(a)\propto\exp\left(-\frac{1}{\lambda_{i}}\frac{\partial E^{(t-1)}}{\partial Q_{i}^{(t-1)}(a)}\right)$$ $$Q_{i c}^{(t)}(j)\propto\exp\left(-\frac{1}{\lambda_{i c}}\frac{\partial E^{(t-1)}}{\partial Q_{i c}^{(t-1)}(j)}\right)$$ where $\partial$ denotes a $\partial$-$\mu$-$T$-$\mu$-$\lambda$. Let λi = λZ > 0, λic = λH > 0, ∀*i, c*. Then it is equivalent to Formula 11 and 12 with regularization weight λZ > 0 for Z variables and λH > 0 for H variables. ## A.5 The Choice Of Message Weights In Section 2.3.3, we set λZ = 1 and λH = 1 d by default. This choice comes from a theoretical analysis similar to Vaswani et al. (2017), and we empirically find it helpful to improve the performance. Assume that the ternary scores in T are independent random variables with mean 0 and variance σ 2. Then from Equation 3, we know that F (t) ic (j) is a weighted sum of these random variables. Suppose the weights are uniformly distributed, then F (t) ic (j) 7625 has mean 0 and variance d 2 (d 2) 2 σ 2 = 1 d 2 σ 2. Since d is usually set to several hundred, this might result in a small variance in the message received by H variables and thus lead to uniformly distributed H variables. To balance this effect, we set λH = 1 d such that the variance of 1 λHF (t) ic (j) is still σ 2. From Equation 4 we know that the variance of G (t) i(a) is 2(n−1) hd σ 2. Here, since n varies in sentences, it is impossible to set a fixed λZ that always recovers the original variance σ 2. Compared to F (t) ic (j), the variance of G (t) i(a) does not change significantly. For simplicity, we set λZ = 1. ## B More Extensions And Variants We have introduced several extensions and variants that are beneficial to the model performance in Section 2.3. There are some other variants that we find do not bring significant improvement empirically, but might also be meaningful and have interesting correspondences to transformers. ## B.1 Step Size In our model, we can retain information between iterations and do partially update with a proper step size. Let $$\begin{array}{l}{{Q_{i}^{\star(t)}(a)\propto\exp\left(\mathbf{S}_{w_{i},a}+\mathcal{G}_{i}^{(t-1)}(a)\right)}}\\ {{Q_{i c}^{\star(t)}(j)\propto\exp\left(\mathcal{F}_{i c}^{(t-1)}(j)\right)}}\end{array}$$ be the original posterior marginal distributions of the variables at time step t, which is the same as Formula 5 and 6. We have the posterior distributions with step size $$\begin{array}{c}{{Q_{i}^{(t)}(Z_{i})=\alpha_{Z}Q_{i}^{\star(t)}(Z_{i})+(1-\alpha_{Z})Q_{i}^{(t-1)}(Z_{i})}}\\ {{Q_{i c}^{(t)}(H_{i}^{(c)})=\alpha_{H}Q_{i c}^{\star(t)}H_{i}^{(c)}+(1-\alpha_{H})Q_{i c}^{(t-1)}H_{i}^{(c)}}}\end{array}$$ where $\alpha_{Z}$ is the set of all $\alpha_{Z}$. where αZ, αH ∈ (0, 1] are the step sizes of each update. When αZ = αH = 1, it is equivalent to the original model. 
We initialize these distribution by Formula 7 and 8. ## B.2 Damping Similar to step size in Appendix B.1, the damping approach also aims at retaining information between iterations. Instead of partially updating the posterior distribution, the damping approach partially updates the messages. We define messages in time step t as $$M_{i}^{(t)}(a)={\bf S}_{w_{i},a}+{\cal G}_{i}^{(t-1)}(a)\tag{32}$$ $$M_{ic}^{(t)}(j)={\cal F}_{ic}^{(t-1)}(j)\tag{33}$$ (t) $i=1$ where M (t) i(Zi) is the message passed to Zi and M (t) ic (H (c) i) is the message passed to H (c) i. Thus, Formula 5 and 6 can be written as $$\begin{array}{l}{{Q_{i}^{(t)}(a)\propto\exp\left(M_{i}^{(t)}(a)\right)}}\\ {{Q_{i c}^{(t)}(j)\propto\exp\left(M_{i c}^{(t)}(j)\right)}}\end{array}$$ Now, we add damping factors βZ and βH, which restrict the message update between iterations. We change Equation 32 and 33 to $$\begin{array}{c}{{M_{i}^{(t)}(a)=(1-\beta_{Z})\left({\bf S}_{w_{i},a}+{\cal G}_{i}^{(t-1)}(a)\right)}}\\ {{\qquad\qquad+\beta_{Z}M_{i}^{(t-1)}(a)}}\\ {{M_{i c}^{(t)}(j)=(1-\beta_{H})\left({\cal F}_{i c}^{(t-1)}(j)\right)+\beta_{H}M_{i c}^{(t-1)}(j)}}\end{array}$$ We initialize the message by $$\begin{array}{c}{{M_{i}^{(0)}(a)=\mathbf{S}_{w_{i},a}}}\\ {{M_{i c}^{(0)}(j)=0}}\end{array}$$ When βZ = βH = 0, there is no damping in the update process and it is equivalent to the original model. When βZ = 0.5 and βH = 0, it is similar to the residual connection in transformers. When βZ = βH = 0.5, it is similar to the residual attention mechanism proposed in RealFormer (He et al., 2021). ## B.3 Global Variables As we mentioned in Section 3.4, probabilistic transformers do not have a feed-forward structure as in transformers. Feed-forward layers, however, constitute two-thirds of a transformer model's parameters. Recent researches show that the feedforward layers might serve as an important part of transformers (Dong et al., 2021; Geva et al., 2021, 2022). Inspired by Sukhbaatar et al. (2019), who combines the feed-forward layer and the self-attention layer into a unified all-attention layer, we design a similar structure based on dependency relations. Intuitively, we could add some global variables that are similar to the latent word representations (Z variables) but these representations are global features that do not change with input sentences. We will introduce 3 different model designs below. B.3.1 All-dep Based on the intuition above, we add some global variables to the CRF model. Define Fi as the i-th discrete global feature variable with the same label set as Z variables, representing the global features of the corpus. The total number of global feature variables is m. These variables are observed and the distributions on the label set will not change during inference. The head of each word could either be another word or a global feature variable. That is, H (c) i ∈ {1, 2, · · · *, n, n* + 1, · · · , n + m}. Then, for each word wi and global feature Fj in channel c, we define a ternary potential function over Zi, H (c) iand Fj , which evaluates the compatibility between the labels of the word and the global feature of the entire corpus. $$\phi_{t}(H_{i}^{(c)},Z_{i},F_{j})=$$ $$\begin{cases}\exp(\mathbf{T}_{Z_{i},F_{j}}^{\prime\prime(c)}),&H_{i}^{(c)}=n+j\\ &1,\qquad\text{otherwise}\end{cases}$$ where $\mathbf{T}^{\prime\prime}(c)\in\mathbb{R}^{d\times d}$ is a score matrix for channel $c$. An illustration of the CRF model is shown in Figure 4. 
We call this setting *all-dep* since the head of each word could either be another word or a dummy global feature variable. It follows the all-attn setting in Sukhbaatar et al. (2019). Notice that Fj is a variable that does not participate in inference. It could be seen as part of the model. Thus, we could design an equivalent model that does not contain global feature variables but have a binary factor between Zi and H (c) i: $\phi_{b}(H^{(c)}_{i},Z_{i})=$ $$\left\{\begin{array}{ll}\prod_{g}\exp(P(F_{H^{(c)}_{i}-n}=g){\bf T}^{\prime\prime}_{Z_{i},g}),&H^{(c)}_{i}>n\\ &\\ &1,&\mbox{otherwise}\end{array}\right.$$ where $P(F_{i}=g)$ is the probability that the $i$-th global variable has label g. It can be proved that the MFVI inference process for the model with global feature variables and the model with binary factors is the same. Move the product inside the exponential term, we have $\phi_{b}(H_{i}^{(c)},Z_{i})=$ $$\left\{\begin{array}{ll}\exp(\sum_{g}P(F_{H_{i}^{(c)}-n}=g){\bf T}_{Z_{i},g}^{\prime\prime}),&H_{i}^{(c)}>n\\ &1,\quad\mbox{otherwise}\quad\mbox{a}\\ &7627\end{array}\right.$$ The term inside the exponential is a weighted sum of ternary scores. We may re-formulate this potential function with a simplified term: $$\phi_{b}(H_{i}^{(c)},Z_{i})=$$ $$\left\{\begin{array}{c}{{\exp({\bf B}_{H_{i}^{(c)}-n,Z_{i}}^{(c)}),\quad H_{i}^{(c)}>n}}\\ {{1,\quad\mathrm{otherwise}}}\end{array}\right.$$ where B(c) ∈ R m,d is a score matrix for channel c. The weighted sum of ternary scores could be regarded as a neural parameterization of the binary scores B(c). An illustration of the simplified CRF model is shown in Figure 5. Given the model above, we can now derive the following iterative update equations of posterior distribution: F (t) ic (j) = Q (t) i(a)Q (t) j(b)T (c) a,b, j ≤ n X a X b (34) Q (t) i(a)B (c) j,a, j > n X a Q (t) ic (j)Q (t) j(b)T (c) a,b G (t) i(a) =X c X j̸=i,j≤n X b +Q (t) jc (i)Q (t) j(b)T (c) b,a + X c X j>n Q (t) ic (j)B (c) j,a (35) where Q (t) i(a) ∝ exp Swi,a + G (t−1) i(a) (36) Q (t) ic (j) ∝ exp F (t−1) ic (j) (37) The initialization of the posterior marginal distributions Q (t) i(·) and Q (t) ic (·) is the same as Formula 7 and 8. Notice that F (t) ic ∈ R n+m looks like a concatenation of a context vector and a persistent vector in all-attention networks (Sukhbaatar et al., 2019). ## B.3.2 Dep-Split Following the *attn-split* setting in Sukhbaatar et al. (2019), we also design a *dep-split* version of our model. In each channel, we split the head of each word into two heads: one for the head word in the sentence and one for the global feature. We call the heads for global features 'global heads'. Denote G (c) i ∈ {1, ·, m} as the global head variable for i-th word in channel c. H (c) i ∈ {1, ·, n} 1 (1) 1 (ℎ) 2 (1) 2 (ℎ) Unary Factors Ternary Factors Ternary Factors (for ) Dependency Head Variables Global Feature 1 2 Variables Label Variables 1 2 Unary Factors Binary Factors Ternary Factors 1 (1) 1 (ℎ) 2 (1) 2 (ℎ) Dependency Head Variables 1 2 Label Variables Figure 5: An equivalent factor graph for the *all-dep* CRF model in Figure 4. is still the variable representing the syntactic dependency head of the i-th word in the c-th channel. Similar to the approaches in the *all-dep* setting, we define a simplified binary potential function for Zi and G (c) i $$\phi_{b}(G_{i}^{(c)}=k,Z_{i}=a)=\exp\left(\mathbf{B}_{k,a}^{(c)}\right)\quad\quad.$$ k,a(38) Figure 6 illustrates the CRF model of the *dep-split* setting. 
We could derive the following iterative update equations of posterior distribution: $$\mathcal{F}_{ic}^{(t)}(j)=\sum_{a}\sum_{b}\left(Q_{i}^{(t)}(a)Q_{j}^{(t)}(b)\mathbf{T}_{a,b}^{(c)}\right)\tag{39}$$ $$\mathcal{H}_{i,k,c}^{(t)}=\sum_{a}\left(Q_{i}^{(t)}(a)\mathbf{B}_{k,a}^{(c)}\right)$$ (40) $$\mathcal{G}_{i}^{(t)}(a)=\sum_{c}\sum_{j\neq i}\sum_{b}Q_{ic}^{(t)}(j)Q_{j}^{(t)}(b)\mathbf{T}_{a,b}^{(c)}$$ $$+\sum_{c}\sum_{j\neq i}\sum_{b}Q_{jc}^{(t)}(i)Q_{j}^{(t)}(b)\mathbf{T}_{b,a}^{(c)}$$ (41) $$+\sum_{c}\sum_{k}Q_{ic}^{\prime(t)}(k)\mathbf{B}_{k,a}^{(c)}$$ where $$\begin{array}{r}{{}}\\ {{Q_{i}^{(t)}(a)\propto\exp\left(\mathbf{S}_{w_{i},a}+\mathcal{G}_{i}^{(t-1)}(a)\right)}}\\ {{}}\\ {{Q_{i c}^{(t)}(j)\propto\exp\left(\mathcal{F}_{i c}^{(t-1)}(j)\right)}}\\ {{}}\\ {{Q_{i c}^{\prime(t)}(k)\propto\exp\left(\mathcal{H}_{i,k,c}^{(t-1)}\right)}}\end{array}$$ (42) (43) $$(38)$$ are the approximate marginal distributions at time step t, with Q ′(t) ic (·) over G (c) i. We initialize these distributions by Formula 7, 8 and $$Q_{i c}^{'(0)}(k)\propto1$$ $$(45)$$ ic (k) ∝ 1 (45) ## B.3.3 Single-Split Following the *single-split* setting in Sukhbaatar et al. (2019), we design a CRF model that is similar to the *dep-split* model but only allows one global head for each word. We also call this setting *singlesplit*. Denote Gi as the global head variable for i-th word with a label set of size m. We define a binary potential for Zi and Gi $$\phi_{b}(G_{i}=k,Z_{i}=a)=\exp\left({\bf B}_{k,a}\right)\tag{46}$$ where B ∈ R m×dis a score matrix. Figure 7 illustrates the CRF model of the *single-split* setting. ![16_image_0.png](16_image_0.png) $$\blacksquare{\mathrm{~Ternary~Factors}}$$ Figure 6: The factor graph for the *dep-split* CRF model where n = 2. For clarity, binary and ternary factors with channel c > 1 are not shown in the figure. Dependency Head Variables 1 (1) 1 (ℎ) 2 (1) 2 (ℎ) Unary Factors Binary Factors Ternary Factors 1 2Global Head Variables 1 2 Label Variables We could derive the following iterative update equations of posterior distribution: $$\begin{array}{c}{{{\mathcal F}_{i c}^{(t)}(j)=\sum_{a}\sum_{b}\left(Q_{i}^{(t)}(a)Q_{j}^{(t)}(b){\bf T}_{a,b}^{(c)}\right)}}\\ {{{\mathcal H}_{i,k}^{(t)}=\sum_{a}\left(Q_{i}^{(t)}(a){\bf B}_{k,a}\right)}}\\ {{{\mathcal G}_{i}^{(t)}(a)=\sum_{c}\sum_{j\neq i}\sum_{b}Q_{i c}^{(t)}(j)Q_{j}^{(t)}(b){\bf T}_{a,b}^{(c)}}}\\ {{{\qquad\qquad+\sum_{c}\sum_{j\neq i}\sum_{b}Q_{j c}^{(t)}(i)Q_{j}^{(t)}(b){\bf T}_{b,a}^{(c)}}}\\ {{{\qquad\qquad+\sum_{k}Q_{i}^{\prime(t)}(k){\bf B}_{k,a}}}}\end{array}$$ a,b(47) (49) where single-split might be the setting that has the most similar computation process to that of transformers. 
If we consider the tensorized form of *single-split*, then for the posterior distributions of all the G variables Q (t) g ∈ R n×m, we have $$\begin{array}{c}{{{\mathcal{F}}_{c}^{(t)}=Q_{z}^{(t)}{\mathbf{T}}^{(c)}Q_{z}^{(t)T}}}\\ {{{\mathcal{H}}^{(t)}=Q_{z}^{(t)}{\mathbf{B}}^{T}}}\\ {{{\mathcal{G}}^{(t)}=\sum_{c}Q_{h,c}^{(t)}Q_{z}^{(t)}{\mathbf{T}}^{(c)T}}}\\ {{+\sum_{c}Q_{h,c}^{(t)T}Q_{z}^{(t)}{\mathbf{T}}^{(c)}}}\\ {{+Q_{g}^{(t)}{\mathbf{B}}}}\end{array}$$ (56) where $$\begin{array}{l}{{Q_{i}^{(t)}(a)\propto\exp\left(\mathbf{S}_{w_{i},a}+\mathcal{G}_{i}^{(t-1)}(a)\right)}}\\ {{{}}}\\ {{Q_{i c}^{(t)}(j)\propto\exp\left(\mathcal{F}_{i c}^{(t-1)}(j)\right)}}\\ {{{}}}\\ {{Q_{i}^{'(t)}(k)\propto\exp\left(\mathcal{H}_{i,k}^{(t-1)}\right)}}\end{array}$$ (50) (51) $$\begin{array}{l}{{Q_{z}^{(t)}=\sigma\left(\mathbf{S}+\mathcal{G}^{(t-1)}\right)}}\\ {{Q_{h,c}^{(t)}=\sigma\left(\mathcal{F}_{c}^{(t-1)}\right)}}\\ {{Q_{g}^{(t)}=\sigma\left(\mathcal{H}^{(t-1)}\right)}}\end{array}$$ (58) With the similar trick in Section 3, we have $$Q_{z}^{(t)}=\sigma({\bf S}+2\sum_{c}{\rm channel}_{c}\,{\bf U}^{(c)T}\tag{60}$$ $$+\,{\rm GFU}(Q_{z}^{(t-1)}))$$ are the approximate marginal distributions at time step t, with Q ′(t) i(·) over Gi. We initialize these distributions by Formula 7, 8 and $$Q_{i}^{'(0)}(k)\propto1$$ i(k) ∝ 1 (53) $$(\mathbf{53})$$ $$7629$$ ![17_image_0.png](17_image_0.png) ![17_image_1.png](17_image_1.png) where $$\mbox{channel}_{c}=\sigma\left(\frac{Q_{c}K_{c}^{T}}{\lambda_{H}}\right)V_{c}\tag{61}$$ $$\mbox{GFU}(x)=\sigma\left(x\mbox{B}^{T}\right)\mbox{B}\tag{62}$$ where we can regard GFU as an operator that updates the latent word representations from global features. An illustration of the computation process is shown in Figure 8. From Figure 9, we can see that the feed-forward structure in transformers is very similar to the global feature update process in probabilistic transformers with global variables. ## C Distance And Relative Positional Encoding (Rpe) In Section 3.2, we find that the single-channel update (Equation 29) in probabilistic transformers is almost identical to scaled dot-product attention in transformers. This observation is based on the hypothesis that probabilistic transformers and transformers are sharing the same positional encoding method. But this is not the case. In section 2.3.1, we mention that to capture the word order information, we use a clip function to select the ternary potential function based on the distance of two words (Equation 9). This is similar to the relative positional encoding (RPE) in transformers. Shaw et al. (2018) proposes a method to add an additional component to key and value, based on the clipped distance. Specifically, the scaled dot-product attention with RPE could be rewritten as $$e_{ij}=\frac{x_{i}W^{Q}\left(x_{j}W^{K}+a_{ij}^{K}\right)^{T}}{\sqrt{d_{k}}}$$ $$z_{i}=\sum_{j=1}^{n}\alpha_{ij}\left(x_{j}W^{V}+a_{ij}^{V}\right)$$ where $x_{i}$ is the input representation of the $i$-th word, ziis the output representation, αij = P exp eij k exp eik . The additional component is a learnable parameter that based on the clipped distance $$\begin{array}{c}{{a_{i j}^{K}=w_{\mathrm{clip}(j-i,k)}^{K}}}\\ {{a_{i j}^{V}=w_{\mathrm{clip}(j-i,k)}^{V}}}\\ {{\mathrm{clip}(x,k)=\operatorname*{max}(-k,\operatorname*{min}(k,x))}}\end{array}$$ For probabilistic transformers, we directly add the distance information to the ternary potential function. 
Combining Equation 9 and 29, we could rewrite the single-channel update as $e_{ij}=\dfrac{x_i\mathbf{U}_{ij}\left(x_j\mathbf{V}_{ij}\right)^T}{\lambda_H}$ $z_i=\sum\limits_{j=1}^n\alpha_{ij}\left(x_j\mathbf{V}_{ij}\right)$ $i_{j}=\frac{\exp e_{ij}}{\sum_k\exp e_{ik}}.$ The weights are based on . where αij = P k the clip function f in Equation 10 $$\begin{array}{l}{\mathbf{U}_{i j}=\mathbf{U}[f(i-j)]}\\ {\mathbf{V}_{i j}=\mathbf{V}[f(i-j)]}\end{array}$$ Notice that this way of positional encoding is quite parameter inefficient. It also makes our training process much slower than that of transformers. ## D Details For Tasks And Datasets In this section, we will introduce our tasks and datasets in detail. A brief introduction is shown in Section 4.1. ## D.1 Masked Language Modeling Masked Language Modeling (MLM) tasks generally evaluate the expressiveness of contextural word representations. We perform MLM tasks on two corpora: the Penn TreeBank (PTB) and Brown Laboratory for Linguistic Information Processing (BLLIP). We randomly replace words with a mask token <mask> at a rate of 30% and the model is required to predict the original word. Following Shen et al. (2022), we never mask <unk> tokens. The performance of MLM is evaluated by measuring perplexity (lower is better) on masked words. PTB. The Penn Treebank (Marcus et al., 1993), in particular the sections of the corpus corresponding to the articles of Wall Street Journal (WSJ), is a standard dataset for language modeling (Mikolov et al., 2012) and sequence labeling (Dinarelli and Grobol, 2019). Following the setting in Shen et al. (2021), we use the preprocessing method proposed in Mikolov et al. (2012). It removes all punctuation and replaces low-frequency words with <unk>. The processed dataset has a vocabulary size of 10000, including <unk> and <mask>. BLLIP. The Brown Laboratory for Linguistic Information Processing dataset (Charniak et al., 2000) is a large corpus similar to the PTB dataset in style. The entire dataset contains 24 million sentences from Wall Street Journal. In our experiments, we only use a small subset of this corpus. Following the same setting as Shen et al. (2022), we use the BLLIP-XS split proposed in Hu et al. (2020) with around 40k sentences and 1M tokens as the train set. The validation set consists of the first section each year and the test set consists of the second section each year. We remove all punctuation, replace numbers with a single character N and use lower-case letters. The vocabulary contains words that appear more than 27 times in the entire BLLIP dataset, with size 30231 including <unk> and <mask>. ## D.2 Sequence Labeling Sequence labeling tasks require models to predict the tag for each word in the sequence. For sequence labeling tasks, we perform part-of-speech (POS) tagging on two datasets: the Penn TreeBank (PTB) and the Universal Dependencies (UD). We also perform named entity recognition (NER) on CoNLL-2003. PTB. As introduced in Appendix D.1, we also use the PTB dataset for POS tagging but with a different setting. We use the most commons split of this corpus for POS tagging, where sections from 0 to 18 are used as the train set, sections from 19 to 21 are used as the validation set, and sections from 22 to 24 are used as the test set. All words in the train set compose the vocabulary. UD. UD is a project that develops crosslinguistically consistent treebank annotation for many languages (De Marneffe et al., 2021). 
We test our model on the language-specific part-of-speech (XPOS) tags of the English EWT dataset with the standard splits. All words in the train set compose the vocabulary. CoNLL-2003. It is a named entity recognition dataset which is released as part of CoNLL-2003 shared task (Tjong Kim Sang and De Meulder, 2003). We test our model on the English dataset. All words in the train set compose the vocabulary. We only project the final word representation of each word to the tag set with the BIOES scheme without using a CRF decoder. ## D.3 Text Classification Text Classification tasks need to classify sentences into different classes. We use the Stanford Sentiment Treebank (SST) (Socher et al., 2013) as the dataset. It has two variants: binary classification (SST-2) and fine-grained classification (SST-5). The dataset comes from SentEval (Conneau and Kiela, 2018). SST-2. SST-2 classifies each movie review into positive or negative classes. It contains 67k sentences in the train set. SST-5. SST-5 classifies sentences into 5 classes: negative, somewhat negative, neutral, somewhat positive and positive. It contains 8.5k sentences in the train set. In text classification, all words in the train set compose the vocabulary. ## D.4 Syntactic Test To evaluate the compositional generalization abilities of our model, we perform a syntactic test on the COGS (Kim and Linzen, 2020) dataset. COGS is a semantic parsing dataset that measures the compositional generalization abilities of models. We follow the settings in Ontanón et al. (2021), which turns the task from seq2seq into a sequence tagging task. The model needs to predict 5 tags for each input word: a *parent* word, the *role* of the relation between the word and its parent (if applicable), the category, the *noun determiner* (for nouns) and the verb name (for verbs). With these tags, one can reconstruct the original output deterministically. For role, category, *noun determiner* and verb name, we directly project word representations to each tag set. For the *parent* tag, (Ontanón et al., 2021) propose 3 types of prediction heads: - *Absolute* uses a direct projection to predict the absolute index of the parent in the input sequence (-1 for no parent). - *Relative* uses a direct projection to predict the relative offset of the parent token with respect to the current token, or self for no parent. - *Attention* uses the attention weights from a new attention layer with a single head to predict the parent. We empirically find that *relative* performs the best in most settings for both transformers and probabilistic transformers. This is not consistent with the observations in Ontanón et al. (2021) who finds that *attention* outperforms other settings. We still apply the *relative* setting in our experiments. ## E Hyperparameters And Implementation We report our hyperparameters in Table 2 for probabilistic transformers and Table 3 for transformers. We tune the models for each task except the syntactic test through random search. We run experiments on one NVIDIA GeForce RTX 2080 Ti and all the experiments could finish in one day. Our implementation is based on the flair framework (Akbik et al., 2019). ## F Case Studies Of Learned Dependency Structures A probabilistic transformer infers marginal distributions over both Z and H variables, the latter of which can be used to extract a dependency structure. 
Since our model is trained on downstream tasks such as MLM without access to gold parse trees, it can be seen as performing unsupervised dependency parsing. We visualize the dependency structures learned by a probabilistic transformer by looking at the most probable head of each word in the sentence. Figure 10 illustrates the dependency structures extracted from a probabilistic transformer trained on the PTB dataset under the MLM task. The sentence comes from the test set of the PTB dataset. We show the head of each word in all the channels. The numbers on the dependency arcs represent probabilities estimated by the model. The model does not contain a root node, so there is at least one circle in the dependency graph. From the figure, we can see that our model is very confident in its choices of dependency arcs, with all the probabilities close to 1, which indicates strong compatibilities between the latent representations of connected word pairs. The predicted structure somewhat makes sense. For example, it puts 'she said' together. But generally, most of the dependency arcs are not consistent with humandesigned dependency relations. | Probabilistic Transformer | MLM | POS | CLS | SYN | | | | |-----------------------------|--------|--------|--------|--------|--------|--------|--------| | PTB | BLLIP | PTB | UD | SST-2 | SST-5 | COGS | | | Label set size d | 384 | 384 | 128 | 128 | 512 | 256 | 64 | | Root label set size droot | - | - | - | - | 1024 | 512 | - | | # of channels h | 16 | 16 | 12 | 18 | 10 | 18 | 4 | | # of iterations T | 5 | 5 | 3 | 2 | 1 | 4 | 2 | | Distance threshold γ | 3 | 3 | 3 | 3 | 3 | 3 | 8 | | Decomposition | UV | UV | UV | - | UV | UVW | UV | | Decomposition rank r | 64 | 64 | 128 | - | 64 | 64 | 16 | | Dropout | 0.15 | 0.15 | 0.05 | 0.1 | 0.1 | 0.05 | 0.1 | | Asynchronous update | Yes | | | | | | | | Learning rate | 0.001 | 0.001 | 0.0024 | 0.0062 | 0.0001 | 0.0002 | 0.0025 | | Weight decay | 1.4e-6 | 1.4e-6 | 8e-6 | 2.2e-6 | 3e-7 | 3e-7 | 1e-9 | | L2 reg for T | 5e-4 | 5e-4 | 0 | 4e-4 | 0 | 0 | 0 | Table 2: Hyperparameters for probabilistic transformers in our experiments. Table 3: Hyperparameters for transformers in our experiments. | Transformer | MLM | POS | CLS | SYN | | | | |---------------------------|--------|--------|--------|--------|--------|--------|--------| | PTB | BLLIP | PTB | UD | SST-2 | SST-5 | COGS | | | Embedding size dmodel | 384 | 256 | 512 | 384 | 256 | 128 | 64 | | FFN inner layer size df f | 2048 | 2048 | 2048 | 512 | 512 | 1024 | 256 | | # of heads h | 8 | 14 | 14 | 14 | 10 | 14 | 4 | | # of layers N | 5 | 4 | 5 | 4 | 8 | 4 | 2 | | Positional Encoding | abs | abs | abs | abs | abs | abs | rel-8 | | Head dimension dqkv | 256 | 128 | 32 | 16 | 256 | 256 | 16 | | Dropout | 0.15 | 0.15 | 0.15 | 0 | 0.05 | 0 | 0.1 | | Learning rate | 0.0001 | 0.0002 | 0.0004 | 0.0004 | 0.0001 | 0.0002 | 0.0005 | | Weight decay | 1.2e-6 | 3.5e-6 | 3.2e-6 | 1.4e-6 | 1.9e-6 | 2.7e-6 | 1e-9 | this is not a major crash she said 1.001.00 0.50 1.00 0.991.00 1.001.00 this is not a major crash she said 1.001.00 1.00 0.96 1.00 1.00 1.00 1.00 (b) channel 2 ![21_image_0.png](21_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 ✗ A2. Did you discuss any potential risks of your work? We did not find potential risks in this work. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? 
Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 4, Appendix E ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We use commonly-used benchmarks and the license could be easily found on the Internet. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix D ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? For a fair comparison, we use the datasets as is. We follow the preprocessing steps from previous work. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendix D ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix D ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.3, Appendix E The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix E ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.3 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix E D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. 
Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
gheini-etal-2023-joint
Joint Speech Transcription and Translation: Pseudo-Labeling with Out-of-Distribution Data
https://aclanthology.org/2023.findings-acl.483
Self-training has been shown to be helpful in addressing data scarcity for many domains, including vision, speech, and language. Specifically, self-training, or pseudo-labeling, labels unsupervised data and adds that to the training pool. In this work, we investigate and use pseudo-labeling for a recently proposed novel setup: joint transcription and translation of speech, which suffers from an absence of sufficient parallel data resources. We show that under such data-deficient circumstances, the unlabeled data can significantly vary in domain from the supervised data, which results in pseudo-label quality degradation. We investigate two categories of remedies that require no additional supervision and target the domain mismatch: pseudo-label filtering and data augmentation. We show that pseudo-label analysis and processing in this way results in additional gains on top of the vanilla pseudo-labeling setup, providing a total improvement of up to 0.4% absolute WER and 2.1 BLEU points for En–De and 0.6% absolute WER and 2.2 BLEU points for En–Zh.
# Joint Speech Transcription And Translation: Pseudo-Labeling With Out-Of-Distribution Data Mozhdeh Gheini⋄∗, Tatiana Likhomanenko†, Matthias Sperber†**, Hendra Setiawan**† ⋄Information Sciences Institute, University of Southern California †Apple gheini@isi.edu, {antares,sperber,hendra}@apple.com ## Abstract Self-training has been shown to be helpful in addressing data scarcity for many domains, including vision, speech, and language. Specifically, self-training, or pseudo-labeling, labels unsupervised data and adds that to the training pool. In this work, we investigate and use pseudo-labeling for a recently proposed novel setup: joint transcription and translation of speech, which suffers from an absence of sufficient parallel data resources. We show that under such data-deficient circumstances, the unlabeled data can significantly vary in domain from the supervised data, which results in pseudo-label quality degradation. We investigate two categories of remedies that require no additional supervision and target the domain mismatch: pseudo-label filtering and data augmentation. We show that pseudo-label analysis and processing in this way results in additional gains on top of the vanilla pseudolabeling setup providing a total improvement of up to 0.4% absolute WER and 2.1 BLEU points for En–De and 0.6% absolute WER and 2.2 BLEU points for En–Zh. ## 1 Introduction Semi-supervised learning methods have been a cornerstone in addressing annotated data scarcity by taking advantage of and incorporating the relatively larger amounts of *unlabeled*1 data in the training process. Self-training is a relatively early instance of such methods (Scudder, 1965). Conceptually, self-training is simple: first, a base model is trained using limited labeled data. The base model is then used to predict labels for the unlabeled data. The generated labels are termed "*pseudo-labels*" (PLs) to signify their predicted nature, as opposed to gold supervised data. Finally, the pseudo-labels are combined with the initial seed supervised data to train ∗Work done during an internship at Apple. 1We use descriptors "(un)labeled" and "(un)supervised" interchangeably throughout this paper. a new model, and this process is repeated until no further improvement in performance is observed. Self-training, or pseudo-labeling interchangeably, has been shown to be effective to improve upon fully supervised baselines in low-resource settings for several sequence-to-sequence (seq2seq) tasks, such as machine translation (MT) (Zhang et al., 2018; He et al., 2020; Jiao et al., 2021), endto-end speech recognition (ASR) (Xu et al., 2020; Park et al., 2020; Kahn et al., 2020; Likhomanenko et al., 2021), end-to-end speech translation (ST) (Pino et al., 2020), and more recently speech-tospeech translation (Dong et al., 2022). In this work, we study pseudo-labeling for a recently proposed new setup, joint speech transcription and translation (STT) (Anastasopoulos and Chiang, 2018; Sperber et al., 2020): a setup that is of interest in use cases where both the transcript and translation of a speech signal are returned to the user. As we describe in detail later in §2.1, the fully supervised data for modeling end-to-end joint transcription and translation is triples of form (*s, tc, tl*) where s is the speech signal, tc is the transcript, and tl is the translation. As that is especially costly to come by, STT also seems to have the potential to benefit from pseudo-labeling. 
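At its core, the pseudo-labeling recipe sketched above amounts to a small training loop. The following is an illustrative rendering only, not the authors' implementation (which is specified later in Algorithm 1 and Section 3): `train_fn`, `fine_tune_fn`, and `predict_fn` are hypothetical stand-ins for model-specific training and inference code.

```python
from typing import Callable, List, Tuple

def self_training(
    labeled: List[Tuple],    # (speech, transcript, translation) triples
    unlabeled: List,         # speech signals without references
    train_fn: Callable,      # trains a base model on labeled data (assumption)
    fine_tune_fn: Callable,  # continues training a model on a mixed dataset (assumption)
    predict_fn: Callable,    # maps (model, speech) -> (transcript, translation) (assumption)
    rounds: int = 3,
):
    model = train_fn(labeled)  # base model M trained on supervised data only
    for _ in range(rounds):
        # Pseudo-label the unlabeled pool: each prediction is a (transcript, translation) pair.
        pseudo = [(s, *predict_fn(model, s)) for s in unlabeled]
        # Obtain M+ by continuing training on the union of gold and pseudo-labeled data.
        model = fine_tune_fn(model, labeled + pseudo)
    return model
```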
Our investigations show that while pseudolabeling (PL) is indeed helpful, the quality of pseudo-labels that bring about the benefits is subpar. Upon inspecting the supervised and unsupervised sets, that proves to be not surprising: with limited amounts of supervised data, it is likely that the supervised and unsupervised sets differ in domain, impacting the quality of pseudo-labels. Specifically, in our case, we identify two causes leading to domain mismatch with out-of-distribution unlabeled data: difference between the sequence length ranges and vocabulary sets of the supervised and unsupervised sets. In this work, we ask if we can specifically counteract the domain mismatch to reach a set of pseudo-labels of higher quality, 7637 and if that higher quality, in turn, translates into a better overall performance of pseudo-labeling. First, we propose PLs filtering based on simple data-centric criteria inspired by Likhomanenko et al. (2021). While PLs filtering is a common component of PL algorithms, it is usually based on the model prediction scores (Kahn et al., 2020; Park et al., 2020; Zhang et al., 2021, 2022), which may not directly target the identified domain mismatch aspects, e.g., different sequence length ranges, as our proposed filtering does. Second, we propose augmenting the supervised data by concatenating randomly-picked samples to create new ones and adding them to the supervised set. These two are essentially different in nature: while filtering increases the overall quality by removing samples with PLs that are likely to be faulty, augmentation does so by extending the supervised set and generating better labels in the first place. Our results confirm that indeed this distinction in nature gets reflected in different ways filtering and augmentation improve the performance of pseudo-labeling. The outline of this paper is as follows. We provide some background in §2 and detail the experimental setup in §3. Then, in §4, we report and discuss the results from vanilla pseudo-labeling, the observation of domain mismatch, and the gains brought about by filtering and augmentation. Our **contributions** are: 1) We specifically focus on PL in the face of domain mismatch between the supervised and unsupervised sets; 2) We investigate the mitigation of the effect of domain mismatch through two approaches: PLs filtering and augmentation by concatenation and demonstrate how they improve PL in different ways. These approaches can be repurposed wherever PL is considered as a solution; 3) We apply PL modified with those approaches specifically to a novel setup, joint speech transcription and translation, and report gains on top of the vanilla PL for STT. ## 2 Background Our work studies a pseudo-labeling solution for end-to-end joint speech transcription and translation. In this section, we provide the background for these two components involved in the study, namely *speech transcription and translation* and pseudo-labeling. ## 2.1 Speech Transcription And Translation Our task of speech transcription and translation (STT) is closely related to script recognition (ASR) and speech translation (ST). ASR is the task of generating the text equivalent to an audio speech signal. Meanwhile, ST aims to generate the text equivalent to the signal in a target language other than the language of the speaker. In contrast, STT generates both the transcript and the translation jointly in an end-to-end fashion. 
STT is particularly appealing in cases where both the transcript and translation are to be displayed to the user. Formally, STT can be modeled as follows: given a speech signal (s), the model generates the transcript (tc) and translation (tl) concatenated together in the output as one single sequence: s → tc_tl (Sperber et al., 2020). This formulation is simple to implement as it casts STT as an instance of the well-known seq2seq modeling and results in a *single* end-to-end model to be stored on device. Furthermore, as reported by Sperber et al. (2020), this formulation results in a reasonably consistent transcripts and translations as the coupled inference ensures that translations are conditioned on the transcripts. In our experiments, we use this STT formulation as it offers a good trade-off between accuracy, computational efficiency, and consistency. However, the major challenge that such modeling presents is insufficient data resources: threeway parallel samples of form (*s, tc, tl*) are expensive to annotate. Annotation would require multilingual annotators and would be time-consuming. To alleviate this limitation, we study how pseudolabeling can be employed effectively to combat data scarcity in this setting. We provide a background on pseudo-labeling in the next section. ## 2.2 Pseudo-Labeling Pseudo-labeling (PL), often referred to as selftraining in the literature, addresses the data insufficiency issue by taking advantage of much larger amounts of unsupervised data. More precisely, assume a labeled set L = {xi, yi} and an unlabeled set U = {xj}, where |U*| ≥ |*L|, are available (note that in the case of STT, yiis actually a tuple consisting of the transcript and the translation: yi = (tci*, tl*i)). PL starts with training an initial model M in a supervised manner using L. Then, using M, it generates pseudo-labels (predictions) for U. It then incorporates the pseudo-labels (PLs) to create a new model M+, which hopefully super- Algorithm 1 Pseudo-labeling Require: L = {xi, yi} and U = {xj} 1: Train a base model M on L 2: **while** The desired number of rounds or convergence has not been reached do 3: Generate the pseudo-labeled set: P = {xj , M(xj ) | xj ∈ U} 4: Obtain M+ by fine-tuning M on L ∪ P 5: Replace M with M+ 6: **end while** 7: **return** M sedes M in performance. M+ can then replace M to repeat this process for as many rounds as desired, or until no further gains are observed. Although conceptually simple, several key decisions need to be made before PL can be applied. How should M+ *be created?* M+ can be trained from scratch (Park et al., 2020) or alternatively obtained by continuously fine-tuning M (Xu et al., 2020) using the labeled set combined with the pseudo-labeled set. As we later report in §4, in our experiments, fine-tuning consistently outperforms training from scratch. Hence, we opt for fine-tuning in our experiments. Should PL be applied to supervised set? For the PL stage, we consider and experiment with labeling the supervised set in addition to the unsupervised set and monitor for any potential improvements. Similar to the previous item, as we later show in §4, using PLs for the supervised set does not prove to be beneficial in our experiments. Therefore, we generate predictions only for the unlabeled set. ## In What Way Should The Pseudo-Labels Be Used To update existing models? For instance, He et al. 
(2020), at each round, first train a model from scratch on the pseudo-labeled set, and then finetune it on the supervised set to obtain the final model for that round. Alternatively, Xu et al. (2020) combine the two sets and use a hyper-parameter to have control over the relative weight of the supervised portion against the pseudo-labeled portion. To keep our setup simple, we opt for combining the sets and treating them equally. With the key factors outlined above, Algorithm 1 shows how we carry out vanilla pseudo-labeling for our experiments. All results we report in §4.1 follow this algorithm. ## 3 Experimental Setup 3.1 Data In this work, we use two publicly available multilingual speech translation datasets which, thanks to the nature of their creation, include transcripts: CoVoST V2 (Wang et al., 2020) and MuST-C (Cattoni et al., 2021). CoVoST V2 is created by amending the validated audio clips and transcripts from the Common Voice crowd-sourced ASR corpus (Ardila et al., 2020) with professional translations. It covers translations from English into 15 languages and from 21 languages into English. MuSTC is created by automatically aligning the audio segments from TED talks to corresponding manual transcripts and translations (available from the TED website), which are also aligned. It covers translations from English into 14 languages. We conduct our experiments across two language pairs: English–German (En–De) and English–Chinese (En–Zh), which are available in both CoVoST and MuST-C. In all our experiments, we designate CoVoST as the supervised set, and MuST-C as the unsupervised set. Note that this means our objective is to reach the best performance possible on the CoVoST evaluation set. While we also have the gold transcripts and translations (labels in the STT problem) for MuST-C, we do not use them and practically treat MuST-C as an unlabeled set. We only use MuST-C gold labels for analysis and pseudo-label quality assessment. We provide the statistics of our data in Table 1. ## 3.2 Model To extract speech representations, we first use pretrained wav2vec 2.0 BASE (Baevski et al., 2020) 2 which results in 20ms per frame. On top of this extractor, we use a stack of three convolutional lay-2We use a model provided by Hugging Face Transformers (Wolf et al., 2020): facebook/wav2vec2-base-960h. | CoVoST | MuST-C | | | | |----------|----------|-------|------|------| | Train | Eval | Train | Eval | | | En–De | 233k | 15.5k | 251k | 1.4k | | En–Zh | 233k | 15.5k | 359k | 1.3k | ers to downsample the input further, resulting in 160ms per frame: each layer has a kernel of 3 and a stride of 2. Next we attach encoder-decoder Transformer (Vaswani et al., 2017) with pre-layer normalization, a hidden dimension of 1024, dropout of 0.1, and five and three layers of encoder and decoder, respectively, following Sperber et al. (2020). Positional embeddings (absolute sinusoidal) are only added on the decoder side. The whole model is trained in an end-to-end manner, including the wav2vec 2.0 feature extractor. On the output side, as described in §2.1, the decoder generates one sequence consisting of the transcript and the translation concatenated together. In terms of input prepossessing, we remove instances where speech is either shorter than 0.5s or longer than 15s, or either the transcript or the translation is longer than 50 words. After that, we use SentencePiece (Kudo and Richardson, 2018) for subword tokenization. The vocabulary is created using only the supervised set. 
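As a concrete illustration of this preprocessing and tokenization step, the sketch below filters examples by the stated duration and length limits and then trains a SentencePiece model on the supervised text only. It is a minimal sketch rather than the authors' code: the example attributes (`duration`, `transcript`, `translation`), the file and function names are assumptions, and `vocab_size` is a placeholder for the values reported next.

```python
import sentencepiece as spm

MIN_SEC, MAX_SEC, MAX_WORDS = 0.5, 15.0, 50

def keep(example) -> bool:
    # `example` is assumed to expose .duration (seconds), .transcript and .translation strings.
    return (MIN_SEC <= example.duration <= MAX_SEC
            and len(example.transcript.split()) <= MAX_WORDS
            and len(example.translation.split()) <= MAX_WORDS)

def build_vocab(supervised_examples, vocab_size: int, text_file: str = "covost_text.txt"):
    filtered = [ex for ex in supervised_examples if keep(ex)]
    # The shared subword vocabulary is trained on supervised transcripts and translations only.
    with open(text_file, "w", encoding="utf-8") as f:
        for ex in filtered:
            f.write(ex.transcript + "\n")
            f.write(ex.translation + "\n")
    spm.SentencePieceTrainer.train(
        input=text_file, model_prefix="stt_spm", vocab_size=vocab_size
    )
    return filtered
```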
We use a vocabulary size of 1020 and 8188 in the case of En–De and En– Zh, respectively. The transcription and translation vocabulary is shared in both cases. The objective function during optimization is a weighted sum of the CTC loss (Graves et al., 2006) on the encoder side and the cross-entropy loss on the decoder side. For both training a base model and fine-tuning an existing checkpoint on the union of the labeled set and the pseudo-labeled set, we use Adam optimizer (Kingma and Ba, 2015) with peak learning rate of 0.0005 after 500 warmup steps, coupled with inverse square root learning rate scheduling. We train for a total of 100 epochs and use SpecAugment (Park et al., 2019) in the same way and with the same parameters as wav2vec 2.0. After training, pseudo-labels are generated with a beam size of five. For both language pairs, we use the dev sets provided by the corpora as the held-out evaluation set. For scoring (and only for scoring), we remove diacritics and punctuation, and report our performance ![3_image_0.png](3_image_0.png) in terms of word error rate (WER) of transcripts and BLEU of translations using beam size of five with SACREBLEU.3 Our implementation is built upon PyTorch (Paszke et al., 2019), xnmt (Neubig et al., 2018), and Lightning (Falcon and The PyTorch Lightning team, 2019). ## 4 Results And Discussion We present our results in this section in the following order: §4.1 establishes vanilla pseudo-labeling performance, which leads to our analysis of the domain mismatch between the supervised and unsupervised sets. §4.2 and §4.3 then describe the two categories of remedies we devise to mitigate the effect of domain discrepancies on pseudo-labeling. As mentioned in §2.2, this is all using the best setting we were able to establish during our pilot experiments: at each pseudo-labeling round, we 1) label only the unsupervised data, and 2) finetune the existing checkpoint on the combination of supervised and pseudo-labeled data. We conduct our pilot experiments on En–De. We were able to confirm that the aforementioned setting consistently beats the rest over several rounds of pseudolabeling. Figure 1 illustrates the lead of the best setting over others in the last round of our experiments. The same pattern holds across all rounds. ## 4.1 Vanilla Pseudo-Labeling In Table 2, we include the results of vanilla PL, as in Algorithm 1, with no modifications. We report 3Hash: case.lc+numrefs.1+smooth.4.0+tok.{13a,zh} for {En–De,En–Zh}. | En–De | En–Zh | | | | | | | | | | | | |------------|---------|------|------|-------|------------|------|------|------|------|-------|------|------| | Base Model | R1 | R2 | R3 | Bound | Base Model | R1 | R2 | R3 | R4 | Bound | | | | üCoVoST | WER ↓ | 15.4 | 15.4 | 15.0 | 15.0 | 14.4 | 14.8 | 14.6 | 14.8 | 14.7 | 14.6 | 13.7 | | BLEU ↑ | 22.8 | 23.8 | 24.5 | 24.5 | 25.5 | 28.7 | 29.4 | 30.0 | 30.5 | 30.7 | 31.9 | | | MuST-C | WER ↓ | 45.1 | 45.2 | 29.7 | 28.4 | 9.6 | 47.9 | 46.2 | 43.8 | 42.8 | 37.2 | 8.9 | | BLEU ↑ | 7.3 | 9.1 | 9.7 | 9.6 | 22.4 | 9.1 | 9.9 | 9.6 | 9.0 | 8.3 | 18.9 | | WER and BLEU for En–De and En–Zh across both corpora. To reiterate, CoVoST (distinguished by the magnifying glass symbol ü) is our designated supervised set, and hence, what we are trying to boost performance on. MuST-C scores, on the other hand, are reported for the sake of analysis; the metrics are to assess the quality of PLs. 
We report the performance of the initial model (the fully supervised baseline, Model M on line 1 of the Algorithm 1) in the "Base Model" column. Scores from each pseudo-labeling round, thereafter, appear on the corresponding "R" column. To have an upper bound of what is possible with the collective data if pseudo-labels were predicted perfectly, we train a single model using both corpora in a supervised manner. Those numbers are provided in the "Bound" column. Note that this is the only case for which MuST-C gold labels are used. First and foremost, in confirmation with the literature, vanilla pseudo-labeling is effective. On üCoVoST, it is able to improve the base model by 0.4% absolute WER and 1.7 BLEU points on En– De, and 0.2% absolute WER and 2.0 BLEU points on En–Zh. However, with a closer look at the quality of pseudo-labels at each round (i.e., MuST-C scores), it is evident that the generated labels are far from ideal quality. Our investigation into the reasons as to why that is the case points to two root causes that indicate üCoVoST and MuST-C are significantly different in *domain* in the following aspects: Length mismatch between corpora. As shown in Figure 2, MuST-C speech sequences are generally longer, which also results in longer transcripts and translations. Vocabulary mismatch between corpora. We were also able to identify discrepancies between the vocabulary of words between the two corpora. ![4_image_0.png](4_image_0.png) For instance, on the English side, MuST-C and CoVoST each have roughly 64k and 121k unique types, respectively. Of those, only 38k types are in common, with CoVoST having more probability mass on rare (tail-end of the Zipfian distribution) vocabulary types. Specifically, even if we train plain machine translation systems on üCoVoST transcripts and translations (and take the audio out of the picture), the En–De system scores only 12.4 BLEU on MuST-C En–De, and the En–Zh system scores only 9.6 BLEU on MuST-C En–Zh. Following these observations, we next demonstrate that it is possible to counteract the domain mismatch and enhance the quality of labels to boost the effectiveness of pseudo-labeling. ## 4.2 Direction #1: Data-Centric Filtering Per §2.2, in vanilla PL, we use all the generated labels to update the model. Alternatively, PLs can be filtered to remove predictions of less quality. Recent works (Park et al., 2020) rely on confidence scores from the model to filter the pseudo-labels, which require careful and proper normalization. Kahn et al. (2020) use a combination of heuristicbased and confidence-based filtering. In our case, similar to Likhomanenko et al. 
(2021), we propose | En–De | En–Zh | | | | | | | | |---------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | üCoVoST | MuST-C | üCoVoST | MuST-C | | | | | | | WER ↓ | BLEU ↑ | WER ↓ | BLEU ↑ | WER ↓ | BLEU ↑ | WER ↓ | BLEU ↑ | | | Bound | 14.4 | 25.5 | 13.7 | 31.9 | | | | | | Vanilla PL | 15.4/15.0 | 23.8/24.5 | 45.2/28.4 | 9.1/9.7 | 14.6/14.6 | 29.4/30.7 | 46.2/37.2 | 9.9/9.9 | | Ratio to Gold | 15.3/15.0 | 24.1/24.7 | 22.8/15.8 | 9.6/10.4 | 14.5/14.2 | 29.5/30.5 | 23.2/17.4 | 10.0/10.2 | | Ratio KDE | 15.1/15.0 | 24.2/24.5 | 30.5/27.1 | 9.4/10.1 | 14.3/14.2 | 29.8/30.7 | 30.8/21.7 | 10.8/10.8 | | LASER | 15.2/15.0 | 24.1/24.5 | 34.7/27.6 | 9.6/10.0 | 14.6/14.3 | 29.4/30.6 | 40.8/20.3 | 10.7/11.2 | | Augmentation | 15.3/15.3 | 24.9/24.9 | 33.8/22.2 | 11.5/11.8 | 14.6/14.3 | 30.1/30.9 | 48.7/25.4 | 11.9/11.9 | and only rely on data-centric metrics to specifically target domain-mismatch and select a subset of pseudo-labels to use in the next round: transcript length to audio length ratio and transcript and translation LASER embeddings cosine similarity. ## 4.2.1 Length Ratio Distribution A sign of flawed inference and faulty output in seq2seq models has been known to be looping (Chorowski and Jaitly, 2017): the model generates the same n-gram repeatedly. We were also able to identify looping occurring frequently in the PLs and resulting in long transcripts. While the supposed lengths of the correct transcripts are unknown, the length of the input audio can be used as an indicator: heuristically, the shorter the input audio, the shorter the transcript. To take advantage of this signal with no supervision overhead, we estimate the probability density function (PDF) of the joint probability distribution over the input audio lengths and predicted transcripts lengths using kernel density estimation (KDE). At each PL round then, we only keep the top 90% (found empirically) of the most probable transcripts. Figure 3 visualizes the effect of such filtering. Instances that have the highest PDF values, have a similar ratio of transcript length to audio length to that of gold transcripts. Hence, this can be a useful metric that needs no additional supervision. To gauge the maximum potential effectiveness of length ratio-based filtering, we also conduct experiments with filtering based on the ratio of the generated transcript length to the *gold* transcript length, where we only keep those with the length within 0.9× and 1.1× the length of the corresponding gold transcript. Note that this only has discussion purposes, as it uses supervision in the form of access to the length of the gold transcripts. Table 3 (rows "Ratio to Gold" and "Ratio KDE") shows how our length ratio-based filtering methods compare against plain vanilla pseudo-labeling. For each method, we run the same number of rounds as we did for vanilla pseudo-labeling in Table 2. We report the performance of the first round and the best round (first round/best round in table cells) of each method. Results from each separate round are comprehensively provided in Appendix A. On üCoVoST, "Ratio KDE" speeds up gains relative to vanilla pseudo-labeling despite incorporating fewer labels (only 90%): 15.1 vs. 15.4 WER and 24.2 vs. 23.8 BLEU at the first round in the case of En–De. The same pattern holds for En–Zh. 
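To make the "Ratio KDE" filter of §4.2.1 concrete, a minimal sketch using SciPy's Gaussian kernel density estimate is given below. It is illustrative rather than the authors' implementation and assumes each pseudo-labeled example exposes an audio duration and a predicted transcript.

```python
import numpy as np
from scipy.stats import gaussian_kde

def ratio_kde_filter(pseudo_examples, keep_fraction=0.90):
    """Keep the top `keep_fraction` of pseudo-labels under a KDE fitted to the
    joint distribution of (audio length, predicted transcript length)."""
    audio_len = np.array([ex.duration for ex in pseudo_examples])            # assumed attribute
    text_len = np.array([len(ex.pseudo_transcript.split()) for ex in pseudo_examples])  # assumed attribute

    kde = gaussian_kde(np.vstack([audio_len, text_len]))  # fit the joint PDF
    density = kde(np.vstack([audio_len, text_len]))       # evaluate the PDF at each sample

    cutoff = np.quantile(density, 1.0 - keep_fraction)    # drop the lowest-density 10%
    return [ex for ex, d in zip(pseudo_examples, density) if d >= cutoff]
```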
Looking at the scores on MuST-C, it is evident that moderating the quality of pseudo-labels in this way, does indeed translate into better pseudo-labels for future rounds and improved performance on the supervised set. Also, "Ratio to Gold", benefiting from a form of supervision, expectedly results in better quality on the unsupervised set. However, on the supervised set, it performs similarly to "Ratio KDE", demonstrating that "Ratio KDE" is effective enough at removing detrimental pseudo-labels. While "Ratio KDE" performs clearly better at earlier rounds, it saturates at the same performance as vanilla pseudo-labeling, which uses all the labels (with being better only in the case of En–Zh WER by 0.4% absolute WER). So it is especially beneficial when available resources can only cover a small number of pseudo-labeling rounds. ![6_image_0.png](6_image_0.png) ## 4.2.2 Laser Score Our second filtering method relies on the relationship between the generated translations and transcripts (this is in contrast to the previous method, which relied on the relationship between the generated transcripts and audio signals). For this, we use the pretrained LASER model (Artetxe and Schwenk, 2019), a multilingual sentence encoder, to embed the generated transcripts and translations in a multilingual space to rank pairs based on the cosine similarity and hold onto only the top 90%. Given that LASER lies at the center of this, the quality of representations of different languages in its multilingual space can affect the degree of gains it can bring about. Per Table 3, row "LASER", LASER-based filtering improves performance on the unsupervised set (and hence, the quality of the PLs) all across the board. Those improvements translate into better performance on the supervised set for both En– De and En–Zh. Importantly, the improvement pattern is similar to that of length ratio-based filtering: more gains at earlier rounds, saturating at the same performance as the vanilla PL. However, as opposed to ratio-based filtering, which needs no additional supervision, the LASER model is trained using a massive amount of bitext and benefits from supervision in that way. But that does not result in enhanced performance compared to ratio-based filtering. So while LASER scores present a second avenue for pseudo-label filtering, "Ratio KDE" incurs strictly no supervision overhead, is simple, and is the best-performing filtering method. ## 4.3 Direction #2: Data Augmentation Our previous filtering methods remove PLs so that the remaining subset has a higher quality. However, if we can generate better labels, to begin with, we can discard none and retain all the labels. Here, to improve the quality of the labels generated by the base model at no extra supervision cost, we use data augmentation by concatenation to directly target the reported length mismatch between corpora in §4.1. To do so, we create an augmented set from our supervised set by randomly selecting a pair of samples and constructing a new sample by concatenating the audio signals as the input and concatenating corresponding transcripts and translations as output. In our experiments, we build a set of 20k augmented samples as such using the original üCoVoST data. After training the base model, before generating PLs, we first further fine-tune the base model on the union of the original supervised set and the augmented set. We then proceed as in vanilla PL with the union of the original data and the augmented set as our supervised training set. 
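A minimal sketch of this concatenation-based augmentation is given below. It is illustrative only, assuming raw waveforms stored as NumPy arrays in dict-like samples; the helper name is not from the paper.

```python
import random
import numpy as np

def concat_augment(examples, n_new=20000, seed=0):
    """Create new training samples by concatenating random pairs of existing ones.
    Each example is assumed to be a dict with 'speech' (np.ndarray waveform),
    'transcript' and 'translation' strings."""
    rng = random.Random(seed)
    augmented = []
    for _ in range(n_new):
        a, b = rng.sample(examples, 2)
        augmented.append({
            "speech": np.concatenate([a["speech"], b["speech"]]),    # concatenate audio signals
            "transcript": a["transcript"] + " " + b["transcript"],   # concatenate transcripts
            "translation": a["translation"] + " " + b["translation"],
        })
    return augmented

# The augmented set is then used alongside the original supervised data,
# e.g. train_set = supervised + concat_augment(supervised)
```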
As shown in Table 3, row "Augmentation", although no generated labels are thrown away, the quality of PLs is indeed increased in the subsequent round. This is especially pronounced in the case | Ref. Transcription | It means reduce your carbon dioxide emissions with the full range of choices that you make, and then purchase or acquire offsets for the remainder that you have not completely reduced. | |----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Ratio KDE | It means reduce your carbon dioxide emissions, with the full range of choices that you make, and then purchase or purchase or purchase. | | Augmentation | It means reduce your carbon dioxide emissions. With the full range of choices that you make. And then purchase or acquire offsets for the remainder that you have not completely reduced. | of translations. We provide an example evidencing this in Table 4. Here we compare the PLs generated by "Ratio KDE" and "Augmentation" for an utterance in MuST-C against each other. For a longer input, "Ratio KDE" suffers from looping and inadequate generation, and this instance actually gets filtered. However, "Augmentation" gets it right and retains it for training in the subsequent round. The fact that it also generates the output as sentences separated with periods indicates that this is indeed learned as a consequence of augmented samples. With retaining all pseudo-labels, not only does bootstrapping the supervised set using concatenation expedite the gains from pseudo-labeling, but it is also the most effective in terms of the final performance before saturation by improving the score in three cases: it improves the performance of vanilla pseudo-labeling on üCoVoST by 0.4 and 0.2 BLEU points on En–De and En–Zh, respectively, and by 0.3% absolute WER on En–Zh. Therefore, it further closes the gap between pseudolabeling and the upper bounds. To conclude our discussion on how domain mismatch can be addressed, we find filtering methods, which discard labels, to be only effective when due to any resource limitation, only a few rounds of pseudo-labeling can be run. This finding also echoes insights from Bansal et al. (2022) that studies data scaling laws for MT and shows while filtering may benefit computational efficiency, more unfiltered data can replace filtered data. As an alternative to filtering, we show that improving the quality of all generated labels through augmentation so that all can be kept, is the most effective, especially when as many rounds as needed can be run to reach saturation. ## 5 Related Work The two paradigms often considered in lowresource data scenarios are self-training and pretraining. Self-training, or pseudo-labeling, has long been studied for a variety of seq2seq tasks (He et al., 2020; Xu et al., 2020; Park et al., 2020; Kahn et al., 2020; Chen et al., 2020; Likhomanenko et al., 2021; Pino et al., 2020; Dong et al., 2022). Regarding the relationship between pretraining and self-training, Xu et al. (2021) and Wang et al. (2021) show that self-training and unsupervised pretraining are complimentary and can be combined to boost performance on speech recognition and speech translation, respectively. In the case of supervised pretraining, however, Zoph et al. 
(2020) show in the vision domain that as the size of the labeled data available grows, self-training remains helpful, whereas the benefits of supervised pretraining start to diminish. For applying self-training to the unvisited setup of joint speech transcription and translation (Sperber et al., 2020), we focus on domain mismatch, a matter which can get overlooked when gains from vanilla pseudo-labeling are observed. As solutions, we study pseudo-label filtering and augmentation by concatenation. In contrast to conventional filtering, which relies on normalized model confidence scores (Park et al., 2020; Kahn et al., 2020), or recently, the agreement between several forward passes of the model run with dropout (Khurana et al., 2021), we define and use data-centric factors that are attuned to the domain differences we observe and directly target them. Concatenation as an effective augmentation method has been studied in the context of machine translation (Agrawal et al., 2018; Kondo et al., 2021; Nguyen et al., 2021; Gowda et al., 2022) and speech-to-text (Lam et al., 2022). In our case, we use it to expose our base model to sequences of higher length to improve the quality of generated pseudo-labels. ## 6 Conclusion We study pseudo-labeling for joint speech transcription and translation. We show that while vanilla pseudo-labeling is helpful, additional improvements are obtained by addressing the low quality of generated pseudo-labels due to domain mismatch between the supervised and unsupervised sets. We find that our proposed solutions help in two different ways, as they are in distinct nature: pseudo-label filtering, which discards low-quality labels, is mostly helpful by expediting gains in earlier rounds, especially for transcriptions. Augmentation by concatenation, on the other hand, does not discard any of the labels. As a result, it is able to maintain an edge over vanilla pseudo-labeling in the late rounds as well. ## Limitations We would like to acknowledge the following limitations of this work. Our study setup only takes advantage of supervised data in the form of triples of <speech, transcriptions, translations>. This is because we first and foremost want to investigate the effectiveness of pseudo-labeling in the most extreme case. However, the setup can be extended to be able to also rely on ASR-only (<speech, transcription>) and ST-only (<speech, translation>) pairs. We leave incorporating ASR and ST data as a future work as well as incorporating external language and machine translation models. We identified two sources of domain mismatch: input length ranges and vocabulary mismatch. However, the solutions that we investigate directly target the length mismatch, without explicitly addressing the vocabulary mismatch. The latter is indeed more challenging to address, especially without incurring additional supervision. In fact, circling back to the previous item as a future direction, incorporating supervision in the form of ASR or ST can expand the vocabulary set, also addressing vocabulary mismatch. ## Acknowledgements We would like to thank Qin Gao, Amittai Axelrod, Boliang Zhang, Barry Theobald, David Grangier, Jiatao Gu and the rest of machine translation and machine learning research teammates for fruitful discussions and constructive feedback on the manuscript. ## References Ruchit Agrawal, Marco Turchi, and Matteo Negri. 2018. Contextual handling in neural machine translation: Look behind, ahead and on both sides. 
In *Proceedings of the 21st Annual Conference of the European* Association for Machine Translation, pages 11–20. Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. In *Proceedings of the 2018 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 82–91. Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. Common voice: A massivelymultilingual speech corpus. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4218–4222, Marseille, France. European Language Resources Association. Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. *Transactions* of the Association for Computational Linguistics, 7:597–610. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. In *Advances in Neural Information Processing Systems*, volume 33, pages 12449–12460. Curran Associates, Inc. Yamini Bansal, Behrooz Ghorbani, Ankush Garg, Biao Zhang, Colin Cherry, Behnam Neyshabur, and Orhan Firat. 2022. Data scaling laws in NMT: The effect of noise and architecture. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of *Proceedings of Machine Learning Research*, pages 1466–1482. PMLR. Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2021. Mustc: A multilingual corpus for end-to-end speech translation. *Computer Speech & Language*, 66:101155. Yang Chen, Weiran Wang, and Chao Wang. 2020. Semisupervised ASR by end-to-end self-training. In *Interspeech 2020, 21st Annual Conference of the International Speech Communication Association, Virtual* Event, Shanghai, China, 25-29 October 2020, pages 2787–2791. ISCA. Jan Chorowski and Navdeep Jaitly. 2017. Towards better decoding and language model integration in sequence to sequence models. In *Interspeech 2017,* 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017, pages 523–527. ISCA. Qianqian Dong, Fengpeng Yue, Tom Ko, Mingxuan Wang, Qibing Bai, and Yu Zhang. 2022. Leveraging Pseudo-labeled Data to Improve Direct Speech-toSpeech Translation. In *Proc. Interspeech 2022*, pages 1781–1785. William Falcon and The PyTorch Lightning team. 2019. PyTorch Lightning. Thamme Gowda, Mozhdeh Gheini, and Jonathan May. 2022. Checks and strategies for enabling codeswitched machine translation. Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In *Proceedings of* the 23rd International Conference on Machine Learning, ICML '06, page 369–376, New York, NY, USA. Association for Computing Machinery. Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2020. Revisiting self-training for neural sequence generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Wenxiang Jiao, Xing Wang, Zhaopeng Tu, Shuming Shi, Michael Lyu, and Irwin King. 2021. Self-training sampling with monolingual data uncertainty for neural machine translation. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2840–2850. Jacob Kahn, Ann Lee, and Awni Hannun. 2020. Selftraining for end-to-end speech recognition. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7084–7088. Sameer Khurana, Niko Moritz, Takaaki Hori, and Jonathan Le Roux. 2021. Unsupervised domain adaptation for speech recognition via uncertainty driven self-training. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6553–6557. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Seiichiro Kondo, Kengo Hotate, Tosho Hirasawa, Masahiro Kaneko, and Mamoru Komachi. 2021. Sentence concatenation approach to data augmentation for neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 143–149, Online. Association for Computational Linguistics. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Tsz Kin Lam, Shigehiko Schamoni, and Stefan Riezler. 2022. Make more of your data: Minimal effort data augmentation for automatic speech recognition and translation. *CoRR*, abs/2210.15398. Tatiana Likhomanenko, Qiantong Xu, Jacob Kahn, Gabriel Synnaeve, and Ronan Collobert. 2021. slimIPL: Language-Model-Free Iterative PseudoLabeling. In *Proc. Interspeech 2021*, pages 741–745. Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Padmanabhan, Ye Qi, Devendra Sachan, Philip Arthur, Pierre Godard, John Hewitt, Rachid Riad, and Liming Wang. 2018. XNMT: The eXtensible neural machine translation toolkit. In *Proceedings of the 13th Conference* of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 185– 192, Boston, MA. Association for Machine Translation in the Americas. Toan Q. Nguyen, Kenton Murray, and David Chiang. 2021. Data augmentation by concatenation for lowresource translation: A mystery and a solution. In Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT 2021), pages 287–293, Bangkok, Thailand (online). Association for Computational Linguistics. Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le. 2019. Specaugment: A simple data augmentation method for automatic speech recognition. *Proc. Interspeech 2019*, pages 2613–2617. Daniel S. Park, Yu Zhang, Ye Jia, Wei Han, ChungCheng Chiu, Bo Li, Yonghui Wu, and Quoc V. Le. 2020. Improved Noisy Student Training for Automatic Speech Recognition. In Proc. Interspeech 2020, pages 2817–2821. 
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In *Advances in Neural Information Processing Systems 32*, pages 8024–8035. Curran Associates, Inc. Juan Pino, Qiantong Xu, Xutai Ma, Mohammad Javad Dousti, and Yun Tang. 2020. Self-Training for Endto-End Speech Translation. In Proc. Interspeech 2020, pages 1476–1480. H. Scudder. 1965. Probability of error of some adaptive pattern-recognition machines. IEEE Transactions on Information Theory, 11(3):363–371. Matthias Sperber, Hendra Setiawan, Christian Gollan, Udhyakumar Nallasamy, and Matthias Paulik. 2020. Consistent transcription and translation of speech. Transactions of the Association for Computational Linguistics, 8:695–709. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Changhan Wang, Anne Wu, and Juan Pino. 2020. Covost 2: A massively multilingual speech-to-text translation corpus. Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, and Alexis Conneau. 2021. LargeScale Self- and Semi-Supervised Learning for Speech Translation. In *Proc. Interspeech 2021*, pages 2242– 2246. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, Ronan Collobert, Gabriel Synnaeve, and Michael Auli. 2021. Selftraining and pre-training are complementary for speech recognition. In *ICASSP 2021 - 2021 IEEE* International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3030–3034. Qiantong Xu, Tatiana Likhomanenko, Jacob Kahn, Awni Hannun, Gabriel Synnaeve, and Ronan Collobert. 2020. Iterative Pseudo-Labeling for Speech Recognition. In *Proc. Interspeech 2020*, pages 1006– 1010. Bowen Zhang, Songjun Cao, Xiaoming Zhang, Yike Zhang, Long Ma, and Takahiro Shinozaki. 2022. Censer: Curriculum semi-supervised learning for speech recognition based on self-supervised pretraining. *arXiv preprint arXiv:2206.08189*. Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. 2021. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. Advances in Neural Information Processing Systems, 34:18408– 18419. Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2018. Joint training for neural machine translation models with monolingual data. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32. Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus Cubuk, and Quoc Le. 2020. 
Rethinking pre-training and self-training. In *Advances in Neural Information Processing Systems*, volume 33, pages 3833–3845. Curran Associates, Inc. ## A Extended Results | üCoVoST | MuST-C | | | | |---------------|----------|-------|--------|------| | WER ↓ | BLEU ↑ | WER ↓ | BLEU ↑ | | | Bound | 14.4 | 25.5 | | | | Base Model | 15.4 | 22.8 | 45.1 | 7.3 | | 15.4 | 23.8 | 45.2 | 9.1 | | | Vanilla PL | 15.0 | 24.5 | 29.7 | 9.7 | | 15.0 | 24.5 | 28.4 | 9.6 | | | 15.3 | 24.1 | 22.8 | 9.6 | | | Ratio to Gold | 15.0 | 24.5 | 18.5 | 10.2 | | 15.1 | 24.7 | 15.8 | 10.4 | | | 15.1 | 24.2 | 30.5 | 9.4 | | | Ratio KDE | 15.0 | 24.5 | 27.7 | 9.8 | | 15.4 | 24.4 | 27.1 | 10.1 | | | 15.2 | 24.1 | 34.7 | 9.6 | | | LASER | 15.0 | 24.5 | 29.1 | 9.9 | | 15.3 | 24.5 | 27.6 | 10.0 | | | Augmentation | 15.3 | 24.9 | 33.8 | 11.5 | | 15.3 | 24.9 | 22.2 | 11.8 | | Table 5: Extended results on En–De. All run until saturation. Each row represents one round of pseudolabeling with the respective method. | üCoVoST | MuST-C | | | | |---------------|----------|-------|--------|------| | WER ↓ | BLEU ↑ | WER ↓ | BLEU ↑ | | | Bound | 13.7 | 31.9 | | | | Base Model | 14.8 | 28.7 | 47.9 | 9.1 | | 14.6 | 29.4 | 46.2 | 9.9 | | | 14.8 | 30.0 | 43.8 | 9.6 | | | Vanilla PL | 14.7 | 30.5 | 42.8 | 9.0 | | 14.6 | 30.7 | 37.2 | 8.3 | | | 14.5 | 29.5 | 23.2 | 10.0 | | | 14.3 | 30.5 | 18.7 | 10.2 | | | Ratio to Gold | 14.4 | 30.5 | 17.9 | 9.7 | | 14.2 | 30.5 | 17.4 | 9.9 | | | 14.3 | 29.8 | 30.8 | 10.8 | | | 14.3 | 30.2 | 22.0 | 10.8 | | | Ratio KDE | 14.2 | 30.4 | 21.7 | 10.8 | | 14.2 | 30.7 | 21.7 | 10.4 | | | 14.6 | 29.4 | 40.8 | 10.7 | | | 14.4 | 30.4 | 27.5 | 10.4 | | | LASER | 14.4 | 30.5 | 24.4 | 10.4 | | 14.3 | 30.6 | 20.3 | 11.2 | | | 14.6 | 30.1 | 48.7 | 11.9 | | | 14.5 | 30.5 | 35.7 | 11.0 | | | Augmentation | 14.5 | 30.9 | 26.3 | 11.5 | | 14.3 | 30.9 | 25.4 | 11.3 | | Table 6: Extended results on En–Zh. All run until saturation. Each row represents one round of pseudo-labeling with the respective method. ## B Responsible Nlp Research B.1 Computing Infrastructure Our experiments are each run using 32 NVIDIA V100 GPUs (4 8-GPU nodes). ## B.2 Licenses Of Artifacts Used We use the following artifacts in compliance with their terms of use: - CoVoST V2 dataset (Wang et al., 2020) under CC BY-NC 4.0 - MuST-C dataset (Cattoni et al., 2021) under CC BY-NC-ND 4.0 - wav2vec 2.0 under Apache License 2.0 - LASER (Artetxe and Schwenk, 2019) under BSD - Transformers (Wolf et al., 2020) under Apache License 2.0 - xnmt (Neubig et al., 2018) under Apache License 2.0 - Lightning (Falcon and The PyTorch Lightning team, 2019) under Apache License 2.0 ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section "Limitations" after "Conclusion" A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And Appendix B.2 ✓ B1. Did you cite the creators of artifacts you used? Section 3 and Appendix B.2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix B.2 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix B.2 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3.1 ## C ✓ **Did You Run Computational Experiments?** Section 3 And Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 3.2 and Appendix B.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3.2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
li-etal-2023-word
Word-level Prefix/Suffix Sense Detection: A Case Study on Negation Sense with Few-shot Learning
https://aclanthology.org/2023.findings-acl.484
Morphological analysis is an important research issue in the field of natural language processing. In this study, we propose a context-free morphological analysis task, namely word-level prefix/suffix sense detection, which deals with the ambiguity of sense expressed by prefix/suffix. To research this novel task, we first annotate a corpus with prefixes/suffixes expressing negation (e.g., il-, un-, -less) and then propose a novel few-shot learning approach that applies an input-augmentation prompt to a token-replaced detection pre-training model. Empirical studies demonstrate the effectiveness of the proposed approach to word-level prefix/suffix negation sense detection.
# Word-Level Prefix/Suffix Sense Detection: A Case Study On Negation Sense With Few-Shot Learning Yameng Li1,2 Zicheng Li2 Ying Chen1∗ **Shoushan Li**2 1College of Information and Electrical Engineering, China Agricultural University, China 2Natural Language Processing Lab, Soochow University, China {ymli233, 20205227019}@stu.suda.edu.cn chenying@cau.edu.cn lishoushan@suda.edu.cn ## Abstract Morphological analysis is an important research issue in the field of natural language processing. In this study, we propose a context-free morphological analysis task, namely word-level prefix/suffix sense detection, which deals with the ambiguity of sense expressed by prefix/suffix. To research this novel task, we first annotate a corpus with prefixes/suffixes expressing negation (e.g., il- , un-, -*less*) and then propose a novel fewshot learning approach that applies an inputaugmentation prompt to a token-replaced detection pre-training model. Empirical studies demonstrate the effectiveness of the proposed approach to word-level prefix/suffix negation sense detection.1 ## 1 Introduction Morphological analysis mainly refers to processing a word into a lemma (root) and a well-defined morphological tag (Anglin et al., 1993; Haspelmath and Sims, 2013; Morita et al., 2015; Nicolai and Kondrak, 2017; Deacon et al., 2017; Ganesh et al., 2019). For instance, through morphological analysis, the word "*unhappy*" will be divided into a lemma "*happy*" and a negation sense prefix tag "un-". Morphological analysis has played an important role in natural language processing (NLP) and it has been applied to many downstream tasks such as spelling checking (Aduriz et al., 1993; Oflazer, 1995; Sénéchal and Kearnan, 2007; Levesque et al., 2021) and machine translation (Lee, 2004; Habash, 2007; Toutanova et al., 2008; Belinkov et al., 2017). One major challenge in morphological analysis is that prefixes/suffixes are sometimes ambiguous. For instance, in English, the prefix "un- " often means a meaning "not", i.e., a negation ∗*Corresponding author 1https://github.com/mengmeng233/Word-level-PrefixSuffix-Sense-Detection sense. However, not all words with the prefix "un- " have a negation sense, such as "*unanimous*" and "*unpick*". Besides, the substring "un-" sometimes does not appear as a prefix in some words, such as "*universe*" and "*unique*". In this study, we directly address the above challenge by proposing a novel morphological analysis task, namely word-level prefix/suffix negation sense detection, which aims to detect whether a substring in a word is a prefix/suffix and meanwhile takes a specific pre-defined morphological sense. As a preliminary study, we focus on negative prefixes/suffixes. In many languages, one way to make a negative expression is to add a negative prefix/suffix to a word. For instance, in English, il-, im-, un-, and -*less* are some popular negative prefixes/suffixes. One straightforward approach to prefix/suffix negation sense detection is to build a dictionary that covers all words with the prefixes/suffixes expressing such a sense. However, this is unrealistic because there are always many newly-emerging words due to non-standard language usage or incorrect spelling in some informal texts like Twitter. Therefore, we address the task of word-level prefix/suffix negation sense detection in a computational way. 
Specifically, to further reduce the annotation cost, we propose a few-shot learning approach by employing the token-replaced detection model as our basic prompt-learning model due to its excellent performance in few-shot learning (Li et al., 2022). Furthermore, we propose a novel prompt, namely input-augmentation prompt, which relies only on the input word. As illustrated in Fig.1(c), for the input word is "*unhappy*", the prompt, " unhappy It is not happy", is used to predict whether the word "not" is original or *replaced* so as to determine whether the input word is a negation word or not, where the substring "*happy*" is generated by removing the potential prefix (i.e., un-) from the input word. The de- ![1_image_1.png](1_image_1.png) ![1_image_0.png](1_image_0.png) TemplateTemplate ![1_image_2.png](1_image_2.png) sign of our input-augmentation prompt can avoid one major shortcoming of existing few-shot learning approaches, i.e., the selection of labels (e.g., two labels, "*positive*" and "*negative*" in Fig.1a) or the selection of label description words (e.g., "*negative positive*" in Fig.1b) has a big impact on learner performance (Jiang et al., 2020; Gao et al., 2020; Li et al., 2022). Moreover, our empirical studies also demonstrate that our approach achieves much better performances than the existing few-shot learning approaches. ## 2 Related Work Morphological analysis aims to learn about the morphological structure of a given word form, and in general, there are four specific tasks: morphological tagging (i.e., assigning some pre-defined morphological tags to a word in a sentence) (Müller et al., 2013; Labeau et al., 2015; Cotterell and Heigold, 2017; Conforti et al., 2018; Malaviya et al., 2019), *lemmatization* (i.e., converting a word in a sentence into the normalized form) (Plisson et al., 2004; Chrupała, 2006; Jongejan and Dalianis, 2009; Straková et al., 2014; Bergmanis and Goldwater, 2018), *morphological segmentation* (i.e., judging whether the substring in a word could be segmented as a prefix/suffix) (Ruokolainen et al., 2013, 2016; Goldsmith et al., 2017; Cotterell et al., 2019), and *morphological disambiguation* (i.e., assigning a correct morphological segmentation to a word by leveraging the context) (Hakkani-Tür et al., 2002; Yildiz et al., 2016; Cotterell et al., 2018; Wiedemann et al., 2019). Compared to the above tasks, our work has at least three different aspects. First, our task is a combination of *morphological tagging* and morphological segmentation. Second, our task is word-level, i.e., the input contains only a single word without context, which leads to the inapplicability of previous approaches based on contextual information. Third, we propose a novel few-shot learning approach to our task. To the best of our knowledge, this is the first attempt of studying fewshot learning in morphological analysis. ## 3 Corpus Generation We use six prefixes, i.e., un-, im-, in-, il-, irand dis- as negation prefixes and two suffixes, i.e., -*less* and -*free* as negation suffixes to collect words from two resources, i.e., the ninth edition of Oxford Advanced Learner′*s Dictionary* (AS et al., 2005) and 1.6 million English Tweeter data collected by Go et al. (2015). In summary, we obtain 2,717 and 6,671 words with negation prefixes/suffixes from the Oxford dictionary and tweeter data, respectively. Then, we randomly select 3,000 words and annotates such words as our corpus. 
Specifically, we assign two annotators to annotate each word into two categories, i.e., *positive* and *negative*. The *Kappa* consistency check value of the human annotation is 0.87. Moreover, for words with different sense annotations, we assign another annotator to make a final decision. Table 1 shows the data statistics of the corpus. | Neg. | P os. | Neg. | P os. | | | |--------|-----------|------------|---------|-----|-----| | un- | 482 | 186 | il- | 20 | 59 | | in- | 372 | 858 | dis- | 172 | 256 | | im- | 100 | 194 | -less | 163 | 24 | | ir- | 48 | 53 | -free | 9 | 4 | | ALL: | Neg. 1634 | P os. 1366 | | | | Table 1: Statistics of the annotated corpus. ## 4 Methodology Problem statement: The prefix/suffix negation sense detection task can be formulated as follows. Let Dl = {*w, y*} be labeled data, where w is the input word and y is a label in {*positive, negative*}. Our approach aims to provide a few-shot learner for such a detection task. Approach overview: As shown in Figure 1(c), a prompt-based learner, which is based on a pretrained token-replaced detection model and an input-augmentation prompt, is built for the prefix/suffix negation sense detection task. The goal of a pre-trained token-replaced detection model (e.g., ELECTRA) is to predict whether a token in the input string is *replaced* or not. Approach specification: First, an inputaugmentation prompt x*prompt* is constructed for an input word w, as follows. $$x_{p r o m p t}=w\;I t\;i s\;n o t\;\overline{{{w}}},$$ xprompt = *w It is not* w, (1) where "*It is not*" is a template, and w is a substring of the input word w without the prefix/suffix, such as w = "*happy*" for w = "*unhappy*". Second, prompt x = [w1, w2*, ..., w*n] is fed into the encoder in the discriminator of the pre-trained token-replaced detection model to obtain an output sequence y = [y1, y2*, ..., y*n], where wiis the ith word in the prompt, and yiis the prediction label (either original or *replaced*) for word wi, indicating whether the word is *original* or replaced. Finally, we map the label set of the pre-trained token-replaced detection model to the label set of our task, with the following formulas. P("negative"|x*prompt*) = P(y"not" = *original*) (2) and $$\,_{\mathit{a p t}})=P(y)$$ P("positive"|x*prompt*) = P(y"not" = *replaced*), (3) where y"not" denotes the label corresponding to the word "not" in the input-augmentation prompt as shown in formula (1). For instance, suppose that the input word is "*unhappy*", we first obtain the input-augmentation prompt "*unhappy It is not happy*" and then use the pretrained token-replaced model to predict whether the word "not" in the prompt is *original* or replaced. If the prediction result is *original*, we conclude that the input word "*unhappy*" is a negative word. In the training phase of our few-shot learning setting, only a few prompt samples, together with their labels are used to update the parameters in the discriminator of the pre-trained token-replaced detection model. It is important to note that our approach reuses the pre-trained parameters in the pretrained token-replaced detection model and does not use any other new parameters. ## 5 Experiments Data setting: 2,000 samples are randomly selected from the human-annotated corpus. First, 400 samples are selected as test data, including 200 for each class. Then, we follow the evaluation protocol of Li et al. (2022) by running 5 experiments with 5 different training and development splits. 
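As a concrete illustration of the approach specification above, the following is a minimal zero-shot sketch of Eqs. (1)–(3), assuming the HuggingFace `ElectraForPreTraining` discriminator. The checkpoint name, the affix-stripping helper, and the omission of the 16-shot fine-tuning step are simplifications made for illustration rather than the paper's released setup.

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-large-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-large-discriminator").eval()

NEG_AFFIXES = ["il", "im", "in", "ir", "un", "dis", "less", "free"]  # affixes listed in Section 3

def strip_affix(word):
    """Heuristically remove a candidate negation prefix/suffix (e.g., unhappy -> happy)."""
    for affix in NEG_AFFIXES:
        if word.startswith(affix) and len(word) > len(affix) + 2:
            return word[len(affix):]
        if word.endswith(affix) and len(word) > len(affix) + 2:
            return word[:-len(affix)]
    return word

def p_negative(word):
    """Eqs. (1)-(3): P(negative | w) = P(the token 'not' is judged original by the discriminator)."""
    prompt = f"{word} It is not {strip_affix(word)}"               # input-augmentation prompt, Eq. (1)
    enc = tokenizer(prompt, return_tensors="pt")
    not_id = tokenizer.convert_tokens_to_ids("not")
    not_pos = (enc["input_ids"][0] == not_id).nonzero()[0].item()  # position of "not" in the prompt
    with torch.no_grad():
        logits = model(**enc).logits[0]                            # per-token replaced-vs-original logits
    return 1.0 - torch.sigmoid(logits[not_pos]).item()             # P(original) = 1 - P(replaced)

print(p_negative("unhappy"), p_negative("universe"))               # the former should score higher
```

In the paper's setting the discriminator is additionally fine-tuned on the 16 labeled prompts of each split; the zero-shot call above only illustrates how the *original*/*replaced* prediction for the word "not" is mapped to the negative/positive label.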
In each split, 16 training samples (i.e., 8 samples in each class) and 16 development samples (i.e., 8 samples in each class) are selected in few-shot learning. In fully-supervised learning, 1,400 training samples and 200 development samples are used. Evaluation Metrics: Standard Macro-F1, *Accuracy*, F1 for negative samples (1-F1), and F1 for positive samples (0-F1) are used to evaluate the performance. Model Settings: We employ ELECTRA as the pre-trained token-replaced detection model. The weight_decay is 2e-3, the maximum length is set to 64, and the remaining hyper-parameters are obtained by searching. | Approach | Basic model | 0-F1 | 1-F1 | Macro-F1 | Acc. | |---------------------------|---------------|-----------|-----------|------------|-----------| | Finetuning-RoBERTa | RoBERTa-large | 73.3(4.3) | 75.2(3.6) | 74.3(3.9) | 74.3(3.9) | | Finetuning-ELECTRA | ELECTRA-large | 70.8(4.0) | 75.1(4.5) | 73.0(3.4) | 73.4(3.3) | | Prompt-RoBERTa | RoBERTa-large | 75.6(1.8) | 79.0(1.0) | 77.3(1.1) | 77.5(1.1) | | Prompt-ELECTRA | ELECTRA-large | 78.2(1.6) | 79.6(3.1) | 78.8(2.3) | 78.8(2.3) | | Warp | RoBERTa-large | 69.8(3.4) | 73.2(5.7) | 71.5(4.3) | 71.8(4.4) | | DART | RoBERTa-large | 70.6(2.1) | 71.2(7.6) | 70.9(4.8) | 71.2(4.9) | | P-tuning-v2 | RoBERTa-large | 70.7(1.3) | 75.2(2.8) | 73.0(1.5) | 73.2(1.8) | | Our Approach | ELECTRA-large | 87.4(2.9) | 87.4(3.6) | 87.4(3.2) | 87.4(3.2) | | Fully-supervised Learning | ELECTRA-large | 87.1 | 87.9 | 87.5 | 87.5 | Table 2: The performances of different methods for prefix/suffix negation sense detection (k=16). We implement the following approaches for comparison: (1) **Finetuning-RoBERTa** (Liu et al., 2019): Based on the fine-tuning approach and RoBERTalarge model, the prediction label is obtained by mapping the "[CLS]" token to label space. (2) **Finetuning-ELECTRA** (Clark et al., 2020): It is similar to finetuning-RoBERTa except that the ELECTRA-large model is used. (3) **Prompt-RoBERTa** (Gao et al., 2020): It is a discrete prompt learning approach based on RoBERTa-large, as shown in Figure 1(a), where the prompt is " w it is [*mask*] ", and the prediction label is obtained by the filling of "[*mask*]" (either "*negative*" or "*positive*"). (4) **Prompt-ELECTRA** (Li et al., 2022): It is a discrete prompt learning approach based on ELECTRA-large, as shown in Figure 1(b), where the prompt is " w *is a negative positive word* ". (5) **Warp** (Hambardzumyan et al., 2021): It is a continuous prompt learning approach, in which the best prompt template is obtained by searching in the (continuous) embedding space. Moreover, the template is learned using adversarial refactoring. (6) **DART** (Zhang et al., 2021): It is a continuous prompt learning approach, in which the search for the best prompt template is based on backpropagation. (7) **P-tuning-v2** (Liu et al., 2021): It is a continuous prompt learning approach, in which the search for the best prompt is based on a prefixed-tuned multi-layer prompt. (8) **Fully-supervised Learning:** 1,400 training and 200 development samples are used to re-train the ELECTRA-large model. Table 2 shows the performances of different approaches, from which we can see that : (1) Our approach significantly outperforms the fullysupervised learning and fine-tuning approaches, which proves the effectiveness of our few-shot learner. (2) Our approach performs much better than other prompt-based learners, e.g., obtaining 8.6% increase on *Macro-F1* when compared with Prompt-ELECTRA. 
The improvement confirms the effectiveness of our input-augmentation prompt. (3) Our approach, using only 16 training and 16 development samples, almost performs equivalent to the fully-supervised learning approach with 1,400 training and 200 development samples. An error analysis is made for our approach, which shows two main error causes: (1) the input word w or its substring w has multiple meanings, such as "*hapless*" vs. "hap", and "*disembarkation*" vs. "*embarkation*". (2) the meaning of w and w is irrelevant, such as "*dispossession*" vs. "*possession*", and "*ingot*" vs. "got". This indicates that more efforts are needed for our prefix/suffix negation sense detection. ## 6 Conclusion In this study, we propose a novel word-level morphological analysis task, namely prefix/suffix sense detection, and make a case study on negation sense. We provide an annotated corpus for the prefix/suffix negation sense detection, and then propose a novel few-shot learning approach, which uses an input-augmentation prompt and a pretrained token-replaced detection model to effectively make the negation sense detection. Empirical studies show that our approach performs much better than other approaches in the few-shot scenario, such as using only 16 training samples. ## Limitations The limiation of this work is that we only consider one type of prefixes/suffixes, i.e., negative prefixes/suffixes. In our future work, we would like to work on other types of prefix/suffix sense detection tasks, such as prefix/suffix sense detection on occupation. For instance, in English, there are many suffixes such as -or, -er, and -ee, which mean a person with a certain occupation. ## Acknowledgement We thank the reviewers for their insightful comments and suggestions. This work was supported by the NSFC grant (No.62076176), and the General Research Fund (GRF) project sponsored by the Research Grants Council Hong Kong (Project No.15611021). ## References Itziar Aduriz, Eneko Agirre, Iñaki Alegria, Xabier Arregi, Jose Maria Arriola, Xabier Artola, Arantza Díaz de Ilarraza, Nerea Ezeiza, Montse Maritxalar, Kepa Sarasola, et al. 1993. A morphological analysis based method for spelling correction. In *Proceedings of the sixth conference on European chapter of the Association for Computational Linguistics*, pages 463–463. Jeremy M Anglin, George A Miller, and Pamela C Wakefield. 1993. Vocabulary development: A morphological analysis. Monographs of the society for research in child development, pages i–186. Wehmeier Sally Hornby AS, Ashby Michael, Albert Sydney Hornby, Sally Wehmeier, and Michael Ashby. 2005. Oxford Advanced Learner's EnglishChinese Dictionary: AS Hornby, Sally Wehmeier, Michael Ashby. oxford university press. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? arXiv preprint arXiv:1704.03471. Toms Bergmanis and Sharon Goldwater. 2018. Context sensitive neural lemmatization with lematus. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1391–1400. Grzegorz Chrupała. 2006. Simple data-driven contextsensitive lemmatization. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. *arXiv preprint arXiv:2003.10555*. Costanza Conforti, Matthias Huck, and Alexander Fraser. 2018. 
Neural morphological tagging of lemma sequences for machine translation. In *Proceedings of the 13th Conference of the Association* for Machine Translation in the Americas (Volume 1: Research Track), pages 39–53. Ryan Cotterell and Georg Heigold. 2017. Crosslingual, character-level neural morphological tagging. *arXiv preprint arXiv:1708.09157*. Ryan Cotterell, Christo Kirov, Sabrina J Mielke, and Jason Eisner. 2018. Unsupervised disambiguation of syncretism in inflected lexicons. *arXiv preprint* arXiv:1806.03740. Ryan Cotterell, Arun Kumar, and Hinrich Schütze. 2019. Morphological segmentation inside-out. arXiv preprint arXiv:1911.04916. S Hélène Deacon, Xiuli Tong, and Kathryn Francis. 2017. The relationship of morphological analysis and morphological decoding to reading comprehension. *Journal of Research in Reading*, 40(1):1–16. LS Ganesh, Rahul R Marathe, et al. 2019. Dynamic capabilities: A morphological analysis framework and agenda for future research. *European Business Review*. Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. Making pre-trained language models better few-shot learners. *arXiv preprint arXiv:2012.15723*. A Go, R Bhayani, and L Huang. 2015. Sentiment140twitter sentiment analysis tool. *Sentiment140,[Online]. Available: http://help. sentiment140. com/home.[Accessed 30 March 2018]*. John A Goldsmith, Jackson L Lee, and Aris Xanthos. 2017. Computational learning of morphology. *Annual Review of Linguistics*, 3:85–106. Nizar Habash. 2007. Arabic morphological representations for machine translation. In *Arabic computational morphology*, pages 263–285. Springer. Dilek Z Hakkani-Tür, Kemal Oflazer, and Gökhan Tür. 2002. Statistical morphological disambiguation for agglutinative languages. *Computers and the Humanities*, 36(4):381–410. Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. 2021. Warp: Word-level adversarial reprogramming. *arXiv preprint arXiv:2101.00121*. Martin Haspelmath and Andrea Sims. 2013. *Understanding morphology*. Routledge. Zhengbao Jiang, Frank F Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423–438. Bart Jongejan and Hercules Dalianis. 2009. Automatic training of lemmatization rules that handle morphological changes in pre-, in-and suffixes alike. In *Proceedings of the Joint Conference of the 47th Annual* Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 145–153. Matthieu Labeau, Kevin Löser, and Alexandre Allauzen. 2015. Non-lexical neural architecture for fine-grained pos tagging. In *Proceedings of the 2015* Conference on Empirical Methods in Natural Language Processing, pages 232–237. Young-Suk Lee. 2004. Morphological analysis for statistical machine translation. Technical report, IBM THOMAS J WATSON RESEARCH CENTER YORKTOWN HEIGHTS NY. Kyle C Levesque, Helen L Breadmore, and S Hélène Deacon. 2021. How morphology impacts reading and spelling: Advancing the role of morphology in models of literacy development. *Journal of Research in Reading*, 44(1):10–26. Zicheng Li, Shoushan Li, and Guodong Zhou. 2022. Pre-trained token-replaced detection model as fewshot learner. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 3274–3284. Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. *arXiv preprint* arXiv:2110.07602. 
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Chaitanya Malaviya, Shijie Wu, and Ryan Cotterell. 2019. A simple joint model for improved contextual neural lemmatization. arXiv preprint arXiv:1904.02306. Hajime Morita, Daisuke Kawahara, and Sadao Kurohashi. 2015. Morphological analysis for unsegmented languages using recurrent neural network language model. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2292–2297, Lisbon, Portugal. Association for Computational Linguistics. Thomas Müller, Helmut Schmid, and Hinrich Schütze. 2013. Efficient higher-order crfs for morphological tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 322–332. Garrett Nicolai and Grzegorz Kondrak. 2017. Morphological analysis without expert annotation. In *Proceedings of the 15th Conference of the European* Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 211–216. Kemal Oflazer. 1995. Error-tolerant finite state recognition with applications to morphological analysis and spelling correction. *arXiv preprint cmp-lg/9504031*. Joël Plisson, Nada Lavrac, Dunja Mladenic, et al. 2004. A rule based approach to word lemmatization. In Proceedings of IS, volume 3, pages 83–86. Teemu Ruokolainen, Oskar Kohonen, Kairit Sirts, StigArne Grönroos, Mikko Kurimo, and Sami Virpioja. 2016. A comparative study of minimally supervised morphological segmentation. *Computational Linguistics*, 42(1):91–120. Teemu Ruokolainen, Oskar Kohonen, Sami Virpioja, and Mikko Kurimo. 2013. Supervised morphological segmentation in a low-resource learning setting using conditional random fields. In *Proceedings of* the Seventeenth Conference on Computational Natural Language Learning, pages 29–37. Monique Sénéchal and Kyle Kearnan. 2007. The role of morphology in reading and spelling. Jana Straková, Milan Straka, and Jan Hajic. 2014. Open-source tools for morphology, lemmatization, pos tagging and named entity recognition. In *Proceedings of 52nd Annual Meeting of the Association* for Computational Linguistics: System Demonstrations, pages 13–18. Kristina Toutanova, Hisami Suzuki, and Achim Ruopp. 2008. Applying morphology generation models to machine translation. In Proceedings of ACL-08: HLT, pages 514–522. Gregor Wiedemann, Steffen Remus, Avi Chawla, and Chris Biemann. 2019. Does bert make any sense? interpretable word sense disambiguation with contextualized embeddings. arXiv preprint arXiv:1909.10430. Eray Yildiz, Caglar Tirkaz, H Sahin, Mustafa Eren, and Omer Sonmez. 2016. A morphology-aware network for morphological disambiguation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30. Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, and Huajun Chen. 2021. Differentiable prompt makes pre-trained language models better few-shot learners. *arXiv* preprint arXiv:2108.13161. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations. ✗ A2. Did you discuss any potential risks of your work? At present, we haven't found any potential risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. 
## B ✓ **Did You Use Or Create Scientific Artifacts?** 3, 5. ✓ B1. Did you cite the creators of artifacts you used? 3, 5. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 3 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 3 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3, 5. ## C ✓ **Did You Run Computational Experiments?** 5 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We report the model sizes used in all our work, which run on the same GPU. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left Blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhang-feng-2023-end
End-to-End Simultaneous Speech Translation with Differentiable Segmentation
https://aclanthology.org/2023.findings-acl.485
End-to-end simultaneous speech translation (SimulST) outputs translation while receiving the streaming speech inputs (a.k.a. streaming speech translation), and hence needs to segment the speech inputs and then translate based on the current received speech. However, segmenting the speech inputs at unfavorable moments can disrupt the acoustic integrity and adversely affect the performance of the translation model. Therefore, learning to segment the speech inputs at those moments that are beneficial for the translation model to produce high-quality translation is the key to SimulST. Existing SimulST methods, either using the fixed-length segmentation or external segmentation model, always separate segmentation from the underlying translation model, where the gap results in segmentation outcomes that are not necessarily beneficial for the translation process. In this paper, we propose Differentiable Segmentation (DiSeg) for SimulST to directly learn segmentation from the underlying translation model. DiSeg turns hard segmentation into differentiable through the proposed expectation training, enabling it to be jointly trained with the translation model and thereby learn translation-beneficial segmentation. Experimental results demonstrate that DiSeg achieves state-of-the-art performance and exhibits superior segmentation capability.
# End-To-End Simultaneous Speech Translation With Differentiable Segmentation Shaolei Zhang 1,2**, Yang Feng** 1,2∗ 1Key Laboratory of Intelligent Information Processing Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS) 2 University of Chinese Academy of Sciences, Beijing, China {zhangshaolei20z, fengyang}@ict.ac.cn ## Abstract End-to-end simultaneous speech translation (SimulST) outputs translation while receiving the streaming speech inputs (a.k.a. streaming speech translation), and hence needs to segment the speech inputs and then translate based on the current received speech. However, segmenting the speech inputs at unfavorable moments can disrupt the acoustic integrity and adversely affect the performance of the translation model. Therefore, learning to segment the speech inputs at those moments that are beneficial for the translation model to produce highquality translation is the key to SimulST. Existing SimulST methods, either using the fixedlength segmentation or external segmentation model, always separate segmentation from the underlying translation model, where the gap results in segmentation outcomes that are not necessarily beneficial for the translation process. In this paper, we propose Differentiable Segmentation (*DiSeg*) for SimulST to directly learn segmentation from the underlying translation model. DiSeg turns hard segmentation into differentiable through the proposed expectation training, enabling it to be jointly trained with the translation model and thereby learn translation-beneficial segmentation. Experimental results demonstrate that DiSeg achieves state-of-the-art performance and exhibits superior segmentation capability1. ## 1 Introduction End-to-end simultaneous speech translation (SimulST) (Fügen et al., 2007; Oda et al., 2014; Ren et al., 2020; Zeng et al., 2021; Zhang et al., 2022a) outputs translation when receiving the streaming speech inputs, and is widely used in realtime scenarios such as international conferences, live broadcasts and real-time subtitles. Compared with the offline speech translation waiting for ∗Corresponding author: Yang Feng. 1 Code is available at https://github.com/ictnlp/ DiSeg. ![0_image_0.png](0_image_0.png) ![0_image_1.png](0_image_1.png) (b) Adaptive segmentation with external segmentation model. ![0_image_2.png](0_image_2.png) (c) Differentiable segmentation within translation model. Figure 1: Illustration of differentiable segmentation (DiSeg) compared with the previous methods. the complete speech inputs (Weiss et al., 2017; Wang et al., 2020), SimulST needs to segment the streaming speech inputs and synchronously translate based on the current received speech, aiming to achieve high translation quality under low latency (Hamon et al., 2009; Cho and Esipova, 2016; Ma et al., 2020b; Zhang and Feng, 2022c). However, it is non-trivial to segment the streaming speech inputs as the speech always lacks explicit boundary (Zeng et al., 2021), and segmenting at unfavorable moments will break the acoustic integrity and thereby drop the translation performance (Dong et al., 2022). Therefore, the precise segmentation of streaming speech is the core challenge of SimulST task (Zhang et al., 2022a). To ensure that the speech representations derived from the segmentation results can produce highquality translation, SimulST model should learn a translation-beneficial segmentation from the underlying translation model. 
Existing SimulST methods, involving fixed and adaptive, always fail to learn the segmentation directly from the underlying translation model. The fixed method divides the streaming inputs based on the equal length, e.g., 280ms per segment (Ma et al., 2020b, 2021; Nguyen et al., 2021), as shown in Figure 1(a). Such methods completely ignore the translation model and always break the acoustic integrity (Dong et al., 2022). The adaptive method dynamically decides the segmentation and thereby achieves better SimulST performance, as shown in Figure 1(b). However, previous adaptive methods often use the external segmentation model (Ma et al., 2020b; Zhang et al., 2022a) or heuristic detector (Zeng et al., 2021; Chen et al., 2021; Dong et al., 2022) for segmentation, which leave a gap between the segmentation and translation model. This gap hinders learning segmentation directly from the translation model (Arivazhagan et al., 2019), hence making them difficult to get segmentation results that are most beneficial to translation quality. Under these grounds, we aim to integrate the segmentation into translation model, and thereby directly learn segmentation from the underlying translation model, as shown in Figure 1(c). To this end, we propose *Differentiable Segmentation* (*DiSeg*) for SimulST, which can be jointly trained with the underlying translation model. DiSeg employs a Bernoulli variable to indicate whether the streaming speech inputs should be segmented or not. Then, to address the issue that hard segmentation precludes back-propagation (i.e., learning) from the underlying translation model, we propose an expectation training to turn the segmentation into differentiable. Owing to powerful segmentation, DiSeg can handle simultaneous and offline speech translation through a unified model. Experiments show that DiSeg achieves state-of-theart performance on SimulST, while also delivering high-quality offline speech translation. ## 2 Background Offline Speech Translation The corpus of speech translation task is always denoted as the triplet D={(s, x, y)}, where s= s1, · · · , s|s| is source speech, x = x1, · · · , x|x| is source transcription and y = y1, · · · , y|y| is target translation. The mainstream speech translation architecture often consists of an acoustic feature extractor and a translation model following (Nguyen et al., 2020). Acoustic feature extractor extracts speech features a = a1, · · · , a|a| from source speech s, which is often realized by a pre-trained acoustic model (Baevski et al., 2020). Then, the translation model, realized by a Transformer model (Vaswani et al., 2017), generates y based on all speech features a. During training, existing methods always improve speech translation performance through multi-task learning (Anastasopoulos and Chiang, 2018; Tang et al., 2021a,b), including speech translation, automatic speech recognition (ASR) and machine translation (MT) (add a word embedding layer for MT task), where the learning objective Lmtl is: $${\mathcal{L}}_{m t l}={\mathcal{L}}_{s t}+{\mathcal{L}}_{a s r}+{\mathcal{L}}_{m t},$$ where Lst, Lasr and Lmt are the cross-entropy loss of pairs s→y, s→x and x→y, respectively. Simultaneous Translation (SimulST) Unlike offline speech translation, SimulST needs to decide when to segment the inputs and then translate based on the received speech features (Ren et al., 2020; Ma et al., 2020b). 
Since the decisions are often made based on the speech features after downsampling, we use g(t) to denote the number of speech features when the SimulST model translates yt, where speech features a≤g(t) are extracted from the current received speech ˆs. Then, the probability of generating yt is p(yt | ˆs, y<t). How to decide g(t) is the key of SimulST, and it should be beneficial for the translation model to produce high-quality translation.

## 3 Method

We propose differentiable segmentation (DiSeg) to learn segmentation directly from the translation model, aiming to achieve translation-beneficial segmentation. As shown in Figure 2, DiSeg predicts a Bernoulli variable to indicate whether to segment, and then makes the hard segmentation differentiable through the proposed expectation training, thereby jointly training segmentation with the translation model. We introduce the segmentation, training, and inference of DiSeg in the following.

## 3.1 Segmentation

To segment the streaming speech inputs, DiSeg predicts a Bernoulli variable 0/1 for each speech feature, corresponding to waiting or segmenting. Specifically, for the speech feature ai, DiSeg predicts a Bernoulli segmentation probability pi, corresponding to the probability of segmenting the speech at ai. The segmentation probability pi is calculated through a feed-forward network (FFN) followed by a sigmoid activation, and pi is then used to parameterize the Bernoulli variable bi:

$$p_{i}=\mathrm{Sigmoid}(\mathrm{FFN}(a_{i}))\,,\tag{2}$$
$$b_{i}\sim\mathrm{Bernoulli}(p_{i})\,.\tag{3}$$

If bi = 1, DiSeg segments the streaming speech at ai; if bi = 0, DiSeg waits for more inputs. In inference, DiSeg sets bi = 1 if pi ≥ 0.5, and sets bi = 0 if pi < 0.5 (Raffel et al., 2017).

Segmented Attention After segmenting the speech features, we propose segmented attention for the encoder of the translation model, which is an attention mechanism between uni-directional and bi-directional attention. In segmented attention, each speech feature can focus on the features in the same segment and the previous segments (i.e., bi-directional attention within a segment, uni-directional attention between segments), as shown in Figure 3(a). In this way, segmented attention can not only satisfy the requirement of encoding streaming inputs in the SimulST task (i.e., the characteristic of uni-directional attention) (Elbayad et al., 2020; Zeng et al., 2021), but also capture more comprehensive context representations of segments (i.e., the advantage of bi-directional attention).

## 3.2 Expectation Training

Hard segmentation based on the Bernoulli variable bi precludes back-propagation (i.e., learning) from the translation model to the segmentation probability pi during training. To address this, we propose expectation training to make the segmentation differentiable. In expectation training, we first constrain the number of segments, and then learn segmentation from the translation model at both the acoustic and semantic levels, all trained in expectation via the segmentation probability pi.

Learning Segment Number To avoid too many segments breaking the acoustic integrity, or too few segments degenerating the model into offline translation, we need to constrain the total number of segments.
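A minimal PyTorch sketch of the segmentation gate in Eqs. (2)–(3) and of the segmented attention mask described in Section 3.1; the hidden size, the two-layer FFN, and the helper names are illustrative assumptions rather than details of the released implementation.

```python
import torch
import torch.nn as nn

class SegmentationGate(nn.Module):
    """Sketch of Sec. 3.1: p_i = Sigmoid(FFN(a_i)); b_i thresholds p_i at 0.5 during inference."""
    def __init__(self, d_model=512):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, feats):                            # feats: (T, d_model) speech features a_1..a_T
        p = torch.sigmoid(self.ffn(feats).squeeze(-1))   # segmentation probabilities p_i, Eq. (2)
        b = (p >= 0.5).long()                            # hard decisions b_i used at inference, Eq. (3)
        return p, b

def segmented_attention_mask(b):
    """True at (i, j) iff feature i may attend to feature j: j lies in the same or an earlier segment."""
    seg_id = torch.cumsum(b, dim=0) - b                  # segment index of each feature (a boundary closes its segment)
    return seg_id.unsqueeze(1) >= seg_id.unsqueeze(0)    # (T, T) boolean mask for the encoder self-attention
```

At inference the hard decisions b define this mask, while training uses the expected (soft) counterpart introduced in Section 3.2.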
Intuitively, the source speech should be divided into K segments, where K is the number of words in the source transcription, so that each speech segment can correspond to a complete word. To this end, we apply Lnum to constrain the expected segment number to be K. In particular, to prevent excessive segmentation on consecutive silent frames, we also encourage only one segmentation within several consecutive speech frames. Therefore, Lnum is calculated as:

$$\mathcal{L}_{num}=\left\|\sum_{i=1}^{|\mathbf{a}|}p_{i}-K\right\|_{2}+\tag{4}$$
$$\left\|\sum\mathrm{MaxPool}\left(p_{i},\left\lfloor\frac{|\mathbf{a}|}{K}\right\rfloor\right)-K\right\|_{2},\tag{5}$$

where $\sum_{i=1}^{|\mathbf{a}|}p_{i}$ is the expected segment number and MaxPool(·) is the max pooling operation with a kernel size of ⌊|a|/K⌋.

To make the effect of pi in expectation training match bi in inference, we hope that pi ≈ 0 or pi ≈ 1 and thereby make pi ≈ bi. To achieve this, we encourage the discreteness of the segmentation probability pi during training. Following Salakhutdinov and Hinton (2009); Foerster et al. (2016); Raffel et al. (2017), a straightforward and efficient method is adding Gaussian noise before the sigmoid activation in Eq.(2). Formally, in expectation training, Eq.(2) is rewritten as:

$$p_{i}=\mathrm{Sigmoid}(\mathrm{FFN}(a_{i})+{\mathcal{N}}(0,n))\,,\tag{6}$$

where N(0, n) is Gaussian noise with mean 0 and variance n. Noise is only applied in training.

Learning Segmentation at Acoustic Level A good segmentation should avoid breaking acoustic integrity and benefit the underlying translation model. As mentioned in Sec.3.1, the encoder of the translation model applies the segmented attention to model the correlation between speech features and get the source representations. Correspondingly, we propose *expected segmented attention* to make the hard segmentation differentiable during training, thereby directly learning translation-beneficial segmentation from the translation model.

In segmented attention during inference, a speech feature ai can only pay attention to a feature aj that is located in the same segment or in previous segments, and the remaining features are masked out. In expected segmented attention, to enable back-propagation, we introduce the probability that ai can pay attention to aj instead of the hard segmentation, denoted as βi,j. As shown in Figure 3(b), βi,j measures the probability that aj is located in the same segment as ai or in a segment before ai, calculated as:

$$\beta_{i,j}=\begin{cases}\prod_{l=i}^{j-1}(1-p_{l})\,,&\mathrm{if}\;i<j\\ 1\,,&\mathrm{if}\;i\geq j\end{cases}\tag{7}$$

If aj lags behind ai, the premise that ai and aj are in the same segment is that no segmentation occurs between ai and aj−1, i.e., $\prod_{l=i}^{j-1}(1-p_{l})$. If aj is before ai, ai can necessarily focus on aj. Then, βi,j is multiplied with the original soft attention αi,j and normalized to get the final attention γi,j:

$$\tilde{\gamma}_{i,j}=\alpha_{i,j}\times\beta_{i,j}\,,\tag{8}$$
$$\gamma_{i,j}=\tilde{\gamma}_{i,j}\Big/\sum_{l=1}^{|\mathbf{a}|}\tilde{\gamma}_{i,l}\,.\tag{9}$$

Finally, γi,j is used to calculate the context vector. Owing to the expected segmented attention, pi can be jointly trained with the translation model via the cross-entropy loss Lmtl. Specifically, if the underlying translation model prefers to let ai pay attention to a subsequent aj for a better source representation (i.e., a large γi,j), the probability βi,j that they are in the same segment will be increased, which teaches DiSeg to prevent segmenting between ai and aj.
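The expected segmented attention of Eqs. (7)–(9) can be sketched as follows, assuming a row-stochastic attention matrix `alpha` over the speech features; computing the cumulative products in log space and the small clamping constants are implementation choices, not details taken from the paper.

```python
import torch

def expected_attend_prob(p):
    """beta[i, j] = prod_{l=i}^{j-1} (1 - p_l) for j > i, and 1 for j <= i (Eq. 7)."""
    T = p.size(0)
    log1mp = torch.log1p(-p.clamp(max=1.0 - 1e-6))                    # log(1 - p_l)
    csum = torch.cat([p.new_zeros(1), log1mp.cumsum(dim=0)])          # csum[i] = sum_{l < i} log(1 - p_l)
    beta = torch.exp(csum.unsqueeze(0) - csum.unsqueeze(1))[:T, :T]   # exp(csum[j] - csum[i])
    lower = torch.arange(T).unsqueeze(1) >= torch.arange(T).unsqueeze(0)
    return torch.where(lower, torch.ones_like(beta), beta)

def expected_segmented_attention(alpha, p):
    """Reweight the soft attention alpha (T x T) by beta and renormalize each row (Eqs. 8-9)."""
    gamma = alpha * expected_attend_prob(p)
    return gamma / gamma.sum(dim=-1, keepdim=True).clamp(min=1e-9)
```

Because beta is a differentiable function of the segmentation probabilities p, the translation loss can back-propagate into the segmentation decisions, which is exactly the property the expectation training relies on.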
In this way, DiSeg can avoid breaking acoustic integrity and learn the segmentation that is beneficial to the translation model. Learning Segmentation at Semantic Level Besides encouraging the related speech features to locate in the same segment via expected segmented attention, we aim to further learn segmentation at ![3_image_0.png](3_image_0.png) the semantic level. In the multi-task learning framework, the transcription x is monotonically corresponding to the source speech, so transcription is a good choice to provide semantic supervision for segmentation. However, there is a significant gap between transcription representations and speech features in sequence length (Liu et al., 2020; Zeng et al., 2021), so how to align them is key challenge for semantic supervision. Fortunately, the proposed differentiable segmentation divides the speech into K segments, where K is also the word number in transcription. Therefore, both speech and transcription sequences can be mapped to the sequence of length K and accordingly reach an agreement on the corresponding representation to achieve semantic supervision, as shown in Figure 4. For transcription x, since it is a sequence of subwords after tokenization (Kudo and Richardson, 2018), we introduce a *subword-to-word map* to get the representation of the whole word. Given the text embedding e=Emb(x) of subwords, the whole word representation is the average pooling result on the embedding of all subwords it contains. Formally, the representation f t k of the k th word that consists of subwords x [lk : rk] is calculated as: $$f_{k}^{t}={\frac{1}{r_{k}-l_{k}+1}}\sum_{i=l_{k}}^{r_{k}}e_{i}.\qquad\qquad(10)$$ For speech, the segment representations also need to be differentiable to segmentation (i.e., pi), thereby enabling DiSeg to learn segmentation at semantic level. To this end, we propose an *expected* Algorithm 1: Wait-seg Policy for DiSeg Input: streaming speech inputs s, lagging segments k Output: target outputs ˆy Initialization: yˆ0 =⟨BOS⟩, target index t= 1, current received speech ˆs=[ ] 1 **while** yˆt−1 ̸=⟨EOS⟩ do 2 Extract speech features {al} **Cunen received speech $\mathbf{s}-1$** $\mathbf{i}=1\neq\langle\text{EOS}\rangle$**do** act speech features $\{a_{l}\}_{l=1}^{i}$ from $\mathbf{s}$; list segmentation $\{b_{l}\}_{l=1}^{i}$; $\mathbf{i}_{l=1}^{i}$$b_{l}\geq t+k-1$ or $\mathbf{s}=$$\mathbf{s}$ **//** Translate $\hat{y}_{l}$ based on $\mathbf{s}$; $t\gets t+1$; $\mathbf{s}\leftarrow\mathbf{s}+\mathbf{s}.\text{read}()$; $\mathbf{i}$ 3 Predict segmentation {bl} 4 if Pi 8 ˆs ← ˆs + s.read(); 9 **return** ˆy; feature-to-segment map to get the expected segment representations. Expected feature-to-segment map does not forcibly assign speech feature ai to a certain segment but calculates the probability that ai belongs to the k th segment, denoted as p(ai ∈ Segk), which can be calculated via dynamic programming (refer to Appendix A for details): $$p(a_{i}\in\operatorname{Seg}_{k})=p\big{(}a_{i-1}\in\operatorname{Seg}_{k-1}\big{)}\times p_{i-1}\tag{11}$$ $$+\ p(a_{i-1}\in\operatorname{Seg}_{k})\times(1-p_{i-1}).$$ Then, the expected representation f s k of the k th segment is calculated by weighting all speech features: $$f_{k}^{s}=\sum_{i=1}^{|\mathbf{a}|}p(a_{i}\in\operatorname{Seg}_{k})\times a_{i}.\qquad(12)$$ Owing to the proposed two mappings, transcription and speech are mapped to the representations with the same length, i.e., K segments/words, where f s k corresponds to f t k . 
To provide semantic supervision for segmentation, we apply multi-class N-pair contrastive loss Lctr (Sohn, 2016) between f sand f t, where f t k is the positive sample of f s k and the rest are the negative samples, calculated as: $${\mathcal{L}}_{c t r}=-\sum_{{\bf f}^{s},{\bf f}^{t}}\log\frac{\exp\bigl(s i m\bigl(f_{k}^{s},f_{k}^{t}\bigr)/\tau\bigr)}{\sum_{n=1}^{K}\exp\bigl(s i m\bigl(f_{k}^{s},f_{n}^{t}\bigr)/\tau\bigr)}.\tag{13}$$ sim(·) calculates the cosine similarity between segment and word representations. τ is temperature and we set τ = 0.1 following Wang and Liu (2021). Overall, the total loss of expectation training is: $${\mathcal{L}}_{D i S e g}={\mathcal{L}}_{m t l}+{\mathcal{L}}_{n u m}+{\mathcal{L}}_{c t r}.\qquad(14)$$ 3.3 Inference Policy Owing to the proposed differentiable segmentation, the streaming speech inputs are divided into multiple segments, where each segment contains roughly one word. Accordingly, inspired by the wait-k policy (Ma et al., 2019) in simultaneous machine translation, we propose *wait-seg policy* for DiSeg. Specifically, wait-seg policy first waits for k segments, and then translates a target word whenever deciding to segment the streaming speech inputs, where k is a hyperparameter to control the latency. Formally, given lagging segments k, DiSeg translates yt when receiving g(t; k) speech features: $$g(t;k)=\underset{i}{\operatorname{argmin}}\left(\sum_{l=1}^{i}b_{l}\geq t+k-1\right).\tag{15}$$ The specific inference is shown in Algorithm 1. To keep the training and inference matching, we also apply the wait-seg policy during training via the proposed wait-seg decoder. When translating yt, wait-seg decoder will mask out the speech features aithat *i> g*(t; k) (Ma et al., 2019). Accordingly, we introduce multi-task training and multilatency training to enhance DiSeg performance. Multi-task Training Since we adopt the multitask learning framework (refer to Sec.2), ASR and MT tasks should also adapt to DiSeg. Specifically, ASR task applies the same segmentation and policy (i.e., decoder) as the SimulST task, as both their inputs are speech. For the MT task, since the segment in the speech corresponds to the word in the transcription, MT task applies a uni-directional encoder and wait-k policy (i.e., decoder). Note that parameters of encoder and decoder are shared among various tasks. Multi-latency Training To enhance the DiSeg performance under multiple latency, we randomly sample k from [1, K] between batches during training (Elbayad et al., 2020). In inference, DiSeg only needs one model to complete SimulST under multiple arbitrary latency (Zhang and Feng, 2021c), including offline speech translation (the latency is the complete speech duration). In this way, DiSeg develops a unified model that can handle both offline and simultaneous speech translation. ## 4 Experiments 4.1 Datasets We conduct experiments on two end-to-end simultaneous translation benchmarks, MuST-C2 English →German (En→De, 234K pairs) and English→ Spanish (En→Es, 270K pairs) (Di Gangi et al., 2019). We use dev as the validation set (1423 2https://ict.fbk.eu/must-c ![5_image_0.png](5_image_0.png) pairs for En→De, 1316 pairs for En→Es) and use tst-COMMON as the test set (2641 pairs for En→De, 2502 pairs for En→Es), respectively. For speech, we use the raw 16-bit 16kHz mono-channel audio wave. For text, we use SentencePiece (Kudo and Richardson, 2018) to generate a unigram vocabulary of size 10000, sharing between languages. 
## 4.2 System Settings We conduct experiments on the following systems. Offline Offline speech translation, which waits for the complete speech inputs and then translates (bi-directional attention and greedy search). Wait-k Wait-k policy (Ma et al., 2019) with fixed segmentation of speech (Ma et al., 2020b), which translates a word every 280ms. Wait-k-Stride-n A variation of wait-k policy (Zeng et al., 2021), which translates n word every n×280ms. We set n= 2 following their best result. MMA3 Monotonic multihead attention (Ma et al., 2020c), which is adapted to SimulST by dividing speech into segments of equal length, i.e., 120ms, 200ms and 280ms *· · ·* (Ma et al., 2020b). MMA-CMDR MMA with cross-modal decision regularization (Zaidi et al., 2022), which leverages the transcription to improve the decision of MMA. SimulSpeech Segmentation based on word detector (Ren et al., 2020), which also uses two knowledge distillations to improve the performance. SH Synchronized ASR-assisted SimulST (Chen et al., 2021), which uses the shortest hypothesis in 3https://github.com/pytorch/fairseq/tree/ master/examples/simultaneous_translation ASR results to indicate the word number in speech. RealTrans A convolutional weighted-shrinking Transformer (Zeng et al., 2021), which detects the word number in the streaming speech and then decodes via the wait-k-stride-n policy. MoSST4 Monotonic-segmented streaming speech translation (Dong et al., 2022), which uses the integrate-and-firing method (Dong and Xu, 2020) to segment the speech based on the cumulative acoustic information. ITST5Information-transport-based policy for SimulST (Zhang and Feng, 2022b), which quantifies the transported information from source to target, and then decides whether to translate according to the accumulated received information. MU-ST Segmentation based on the meaning unit (Zhang et al., 2022a), which trains an external segmentation model based on the constructed data, and uses it to decide when to translate. DiSeg The proposed method in Sec.3. All implementations are adapted from Fairseq Library (Ott et al., 2019). We use a pre-trained Wav2Vec2.06(Baevski et al., 2020) as the acoustic feature extractor, and use a standard TransformerBase (Vaswani et al., 2017) as the translation model. For evaluation, we apply SimulEval7(Ma et al., 2020a) to report SacreBLEU (Post, 2018) for translation quality and Average Lagging (AL) (Ma et al., 2019) for latency. AL measures the average dura- ![6_image_0.png](6_image_0.png) tion (ms) that target outputs lag behind the speech inputs. The calculation refer to Appendix D. ## 4.3 Main Results We compare DiSeg and previous SimulST methods in Figure 5, where we only train a single DiSeg model and adjusted the lagging number k during the inference process to show the translation quality under different latency. Remarkably, DiSeg outperforms strong baselines under all latency and achieves state-of-the-art performance. Compared with fixed methods, such as Wait-k and MMA (Ma et al., 2020b), DiSeg can dynamically decides segmentation according to the inputs instead of equallength segmentation, which avoids breaking the acoustic integrity and thus achieves notable improvements. Compared with adaptive methods, including the state-of-the-art MU-ST, DiSeg also performs better. 
In previous adaptive methods, regardless of RealTrans, SH and SimulSpeech detecting the word number (Zeng et al., 2021; Chen et al., 2021; Ren et al., 2020), MoSST and ITST comparing the acoustic information with a threshold (Dong et al., 2022; Zhang and Feng, 2022b), or MU-ST training an external segmentation model (Zhang et al., 2022a), the final translation results is always non-differentiable to the segmentation, which hinders learning segmentation directly from the translation model. The proposed DiSeg turns the segmentation into differentiable, hence can learn translation-beneficial segmentation directly from the translation model, thereby achieving better performance. Furthermore, unlike the previous methods using uni-directional (e.g., RealTrans and ITST) or bi-directional attention (e.g., MUST and MoSST), the proposed segmented attention can not only encode streaming inputs but also get comprehensive segment representations (Zhang et al., 2021). In particular, DiSeg achieves comparable performance with the offline model when lagging 2300ms on En→De and 3000ms on En→Es, which is attributed to the improvements on translation quality brought by the segmented attention. ## 5 Analysis We conduct extensive analyses to study the effectiveness and specific improvements of DiSeg. Unless otherwise specified, all the results are reported on MuST-C En→De test set. ## 5.1 Ablation Study Discreteness Of Segmentation Probability To make expectation training more suitable for inference, we encourage the discreteness of segment probability via introducing Gaussian noise N (0, n) in Eq.(6). We compare the effect of discreteness in Figure 6(a), where appropriately encouraging discreteness effectively enhances expectation training, thereby improving DiSeg performance under low latency. However, too much noise will affect translation quality, especially under high latency, which is consistent with Arivazhagan et al. (2019). Number of Segments In DiSeg, we constrain the number of segments to be the word number K in the transcription rather than subword. To verify the effectiveness of segmentation granularity, we compare different segment numbers in Figure 6(b), noting that Lctr is also changed to be computed by subword embedding accordingly. Segmentation on word granularity is significantly better than subword granularity, mainly because many subwords are actually continuous and related in the speech, and segmentation at the word granularity can better preserve the acoustic integrity (Dong et al., 2022). Learning at Acoustic and Semantic Levels DiSeg learns segmentation at the acoustic and semantic levels, so we show the effectiveness of acoustic and semantic learning in Figure 6(c). The results demonstrate that both acoustic and semantic learning play the important role in SimulST per- | Encoder | En→De | En→Es | | | |-------------------|---------|---------|--------|-------| | Type | Greedy | Beam5 | Greedy | Beam5 | | Uni-directional | 22.94 | 24.26 | 28.54 | 28.92 | | Bi-directional | 22.92 | 24.64 | 28.47 | 29.51 | | DiSeg (segmented) | 23.34 | 24.68 | 28.96 | 29.65 | | −Lctr | 23.10 | 24.46 | 28.64 | 29.48 | formance. Specifically, acoustic learning encourages related speech features to be in the same segment through expected segment attention, where ensuring acoustic integrity is more important for SimulST under low latency, thereby achieving an improvement of 2 BLEU (AL≈1500ms). 
Semantic learning supervises the segment representations through the word representations in the transcription, which helps the segment representations to be more conducive to the translation model (Ye et al., 2022), thereby improving the translation quality. Effect of Wait-seg Decoder DiSeg introduces wait-seg decoder to learn wait-seg policy. As shown in Figure 6(d), compared with full decoder that can focus on all speech features, wait-seg decoder enhances DiSeg's ability to translate based on partial speech (Ma et al., 2019) and thus achieves significant improvements during inference. ## 5.2 Segmented Attention On Offline St How to encode streaming inputs is an important concern for SimulST (Zhang et al., 2021), where offline translation uses bi-directional attention to encode the complete source input and existing SimulST methods always apply uni-directional attention (Zeng et al., 2021; Zhang and Feng, 2022b). DiSeg applies segmented attention, which consists of bi-directional attention within a segment and unidirectional attention between segments. To study the modeling capability of segmented attention, we compare the performance of uni-directional attention, bi-directional attention and DiSeg (segmented attention) on offline speech translation in Table 1. DiSeg is a unified model that can handle both simultaneous speech translation and offline speech translation together. Therefore, we employ the same model as SimulST to complete offline speech translation, while only setting k = ∞ and applying beam search during inference. Uni- and bi-directional attention achieve similar performance in greedy search, which is consistent with Wu et al. (2021), while bi-directional Methods P(↑) R(↑) F1(↑) OS(0) **R-val**(↑) ES K-Means 30.7 18.0 22.7 -41.2 39.7 BES GMM 31.7 13.8 19.2 -56.6 37.9 VQ-CPC 18.2 54.1 27.3 196.4 -86.5 VQ-VAE 16.4 56.8 25.5 245.2 -126.5 SCPC 35.0 29.6 32.1 -15.4 44.5 DSegKNN 30.9 32.0 31.5 3.5 40.7 Fixed(280ms) 28.1 16.3 20.7 -42.0 38.4 DiSeg 34.9 32.3 **33.5** -7.4 **44.6** −Lctr 33.9 31.0 32.4 -8.5 43.9 attention performs better in beam search due to more comprehensive encoding. Owing to learning the translation-beneficial segmentation, DiSeg can outperform uni-/bi-directional attention on both greedy and beam search when only applying bidirectional attention within segments. Furthermore, when removing Lctr, segmented attention also achieves comparable performance to bi-directional attention, providing a new attention mechanism for future streaming models. Appendix C.2 gives visualization and more analyses of segmented attention. ## 5.3 Segmentation Quality To explore the segmentation quality of DiSeg, we conduct experiments on speech segmentation task (Demuynck and Laureys, 2002) with the annotated Buckeye corpus8(Pitt et al., 2005). Table 2 shows the segmentation performance of DiSeg and strong baselines (Kamper et al., 2017b,a; Kamper and van Niekerk, 2020; Bhati et al., 2022; Fuchs et al., 2022), and the metrics include precision (P), recall (R), F1 score, over-segmentation (OS) and R-value (refer to Appendix B for the calculation). The results show that DiSeg achieves better segmentation performance and Lctr can improve the segmentation quality by 1% score. More importantly, DiSeg achieves an OS score close to 0, demonstrating that DiSeg can get the appropriate number of segments, thereby avoiding too many or too few segments affecting SimulST performance (Zhang et al., 2022b). We will analyze the number of segments following. 
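For reference, the OS and R-value columns in Table 2 can be reproduced from boundary precision and recall alone. The short sketch below follows the definitions given in Appendix B and assumes the inputs are fractions (the table reports percentages).

```python
import math

def os_and_r_value(precision: float, recall: float):
    """Over-segmentation (OS) and R-value from boundary precision/recall (Appendix B).

    precision, recall: boundary detection scores as fractions in [0, 1].
    OS = 0 means the predicted number of boundaries is exact; R-value rewards high
    recall together with an OS close to 0, so it cannot be inflated by over-segmenting.
    """
    os = recall / precision - 1.0
    r1 = math.sqrt((1.0 - recall) ** 2 + os ** 2)
    r2 = (-os + recall - 1.0) / math.sqrt(2.0)
    r_value = 1.0 - (abs(r1) + abs(r2)) / 2.0
    return os, r_value

# Example: os_and_r_value(0.349, 0.323) gives roughly (-0.074, 0.446),
# i.e. the -7.4 OS and 44.6 R-val reported for DiSeg in Table 2.
```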
## 5.4 Segmentation Quantity In training, we constrain the number of segments to be the word number in the transcription. To verify its effectiveness, we count the difference between the segment number and the word number during inference (i.e., \#Segments−\#Words) in Figure 7. 8https://buckeyecorpus.osu.edu ![8_image_0.png](8_image_0.png) Compared with the previous work considering that 280ms corresponds to a word on average (Ma et al., 2020b; Zaidi et al., 2022), DiSeg can get a more accurate number of segments, where the difference between the segment number and the word number is less than 2 in 70% of cases. Besides, as reported in Table 2, the OS score on the automatic speech segmentation task also demonstrates that DiSeg can achieve an appropriate number of segments. Therefore, constraining the expected segment number in expectation training is effective to control the number of segments. ## 5.5 **Adapting Multi-Task Learning To Simulst** During training, we adjust ASR task (segmented encoder + wait-seg decoder) and MT task (uniencoder + wait-k decoder) in multi-task learning to adapt to DiSeg, so we verify the effectiveness of the adaptation in Table 3. In the proposed adaptation \#1, since a speech segment corresponds to a word, both uni-encoder and wait-k decoder in MT task are fully compatible with the segmented attention and wait-seg decoder in SimulST task, thereby performing best. Both using bi-encoder (i.e., setting \#5-7) or full decoder (i.e., setting \#2-4) in ASR and MT tasks will affect DiSeg performance, and the performance degradation caused by the encoder mismatch is more serious. In general, ASR and MT tasks should be consistent and compatible with the SimulST task when adapting multi-task learning. ## 6 Related Work Early SimulST methods segment speech and then use the cascaded model (ASR+MT) to translate each segment (Fügen et al., 2007; Yarmohammadi et al., 2013; Rangarajan Sridhar et al., 2013; Zhang and Feng, 2023; Guo et al., 2023). Recent end-toend SimulST methods fall into fixed and adaptive. | # | ASR | MT | AL | BLEU | | | |------|-------|----------|------|--------|------|-------| | Enc. | Dec. | Enc. | Dec. | | | | | 1 | seg | wait-seg | uni | wait-k | 1514 | 20.74 | | 2 | seg | wait-seg | uni | full | 1428 | 19.32 | | 3 | seg | full | uni | wait-k | 1404 | 19.58 | | 4 | seg | full | uni | full | 1398 | 19.39 | | 5 | seg | wait-seg | bi | full | 1416 | 18.93 | | 6 | bi | full | uni | wait-k | 1704 | 20.23 | | 7 | bi | full | bi | full | 1374 | 18.82 | For fixed, Ma et al. (2020b) proposed fixed predecision to divide speech into equal-length segments, and migrated simultaneous MT method, such as wait-k (Ma et al., 2019; Zhang and Feng, 2021c,a; Guo et al., 2022) and MMA (Ma et al., 2020c), to SimulST. For adaptive, Ren et al. (2020) proposed SimulSpeech to detect the word in speech. Chen et al. (2021) used ASR results to indicate the word number. Zeng et al. (2021) proposed RealTrans, which detects the source word and further shrinks the speech length. Dong et al. (2022) proposed MoSST to translate after the acoustic information exceeding 1. Zhang and Feng (2022b) proposed ITST to judge whether the received information is sufficient for translation. Zhang et al. (2022a) proposed MU-ST, which constructs the segmentation labels based on meaning unit, and uses it to train a segmentation model. 
In the previous method, whether using an external segmentation model or the detector, the segmentation cannot receive the gradient (i.e., learning) from the underlying translation model as the hard segmentation is not differentiable. Owing to the differentiable property, DiSeg can be jointly trained with the underlying translation model and directly learn translation-beneficial segmentation. ## 7 Conclusion In this study, we propose differentiable segmentation (DiSeg) for simultaneous speech translation to directly learn segmentation from the underlying translation model. Experiments show the superiority of DiSeg in terms of SimulST performance, attention mechanism and segmentation quality. Future researches will delve into the untapped potential of differentiable segmentation in such streaming models and long sequence modeling, thereby reducing feedback latency or computational cost without compromising performance. ## Acknowledgements We thank all the anonymous reviewers for their insightful and valuable comments. This work was supported by National Key R&D Program of China (NO. 2018AAA0102502). ## Limitations In this study, we propose differentiable segmentation to learn how to segment speech from the underlying translation model, and verify its effectiveness on simultaneous speech translation. However, since it can be jointly trained with the underlying task (sequence-to-sequence task), differentiable segmentation is not limited to the SimulST task, but can be generalized to more streaming/online tasks, such as streaming automatic speech recognition (streaming ASR), simultaneous machine translation (SiMT), real-time text-to-speech synthesis (real-time TTS), online tagging and streaming parsing. Given that there may be some task-specific differences between various tasks, this work only focuses on the differentiable segmentation in the SimulST task, and we leave the study of how to apply differentiable segmentation to other streaming tasks into our future work. ## References Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 82–91, New Orleans, Louisiana. Association for Computational Linguistics. Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic Infinite Lookback Attention for Simultaneous Machine Translation. pages 1313–1323. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. In *Advances in Neural Information Processing Systems*, volume 33, pages 12449–12460. Curran Associates, Inc. Saurabhchand Bhati, Jesús Villalba, Piotr Zelasko, Lau- ˙ reano Moro-Velazquez, and Najim Dehak. 2022. Unsupervised speech segmentation and variable rate representation learning using segmental contrastive predictive coding. *IEEE/ACM Transactions on Audio,* Speech, and Language Processing, 30:2002–2014. Junkun Chen, Mingbo Ma, Renjie Zheng, and Liang Huang. 2021. Direct simultaneous speech-to-text translation assisted by synchronized streaming ASR. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4618–4624, Online. Association for Computational Linguistics. Kyunghyun Cho and Masha Esipova. 2016. Can neural machine translation do simultaneous translation? 
Kris Demuynck and Tom Laureys. 2002. A comparison of different approaches to automatic speech segmentation. In *Text, Speech and Dialogue*, pages 277–284, Berlin, Heidelberg. Springer Berlin Heidelberg. Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. MuST-C: a Multilingual Speech Translation Corpus. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2012–2017, Minneapolis, Minnesota. Association for Computational Linguistics. Shuoyang Ding, Hainan Xu, and Philipp Koehn. 2019. Saliency-driven word alignment interpretation for neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 1–12, Florence, Italy. Association for Computational Linguistics. Linhao Dong and Bo Xu. 2020. Cif: Continuous integrate-and-fire for end-to-end speech recognition. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6079–6083. Qian Dong, Yaoming Zhu, Mingxuan Wang, and Lei Li. 2022. Learning when to translate for streaming speech. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 680–694, Dublin, Ireland. Association for Computational Linguistics. Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2020. Efficient Wait-k Models for Simultaneous Machine Translation. In *Proc. Interspeech 2020*, pages 1461–1465. Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. 2016. Learning to communicate with deep multi-agent reinforcement learning. In *Advances in Neural Information Processing Systems*, volume 29. Curran Associates, Inc. Tzeviya Sylvia Fuchs, Yedid Hoshen, and Joseph Keshet. 2022. Unsupervised word segmentation using k nearest neighbors. *arXiv preprint* arXiv:2204.13094. Christian Fügen, Alex Waibel, and Muntsin Kolss. 2007. Simultaneous translation of lectures and speeches. Machine Translation, 21(4):209–252. Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. 2017. Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1053–1062, Valencia, Spain. Association for Computational Linguistics. Shoutao Guo, Shaolei Zhang, and Yang Feng. 2022. Turning fixed to adaptive: Integrating post-evaluation into simultaneous machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2264–2278, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Shoutao Guo, Shaolei Zhang, and Yang Feng. 2023. Learning optimal policy for simultaneous machine translation via binary search. In *Proceedings of the* 61th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Olivier Hamon, Christian Fügen, Djamel Mostefa, Victoria Arranz, Muntsin Kolss, Alex Waibel, and Khalid Choukri. 2009. End-to-end evaluation in simultaneous translation. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 345–353, Athens, Greece. Association for Computational Linguistics. Javier Iranzo Sanchez, Jorge Civera, and Alfons JuanCíscar. 2022. From simultaneous to streaming machine translation by leveraging streaming history. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6972–6985, Dublin, Ireland. Association for Computational Linguistics. Herman Kamper, Aren Jansen, and Sharon Goldwater. 2017a. A segmental framework for fullyunsupervised large-vocabulary speech recognition. Computer Speech & Language, 46:154–174. Herman Kamper, Karen Livescu, and Sharon Goldwater. 2017b. An embedded segmental k-means model for unsupervised segmentation and clustering of speech. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 719–726. Herman Kamper and Benjamin van Niekerk. 2020. Towards unsupervised phone and word segmentation using self-supervised vector-quantized neural networks. arXiv e-prints, page arXiv:2012.07551. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Ludwig Kürzinger, Dominik Winkelbauer, Lujun Li, Tobias Watzel, and Gerhard Rigoll. 2020. Ctcsegmentation of large corpora for german end-to-end speech recognition. In *Speech and Computer*, pages 267–278, Cham. Springer International Publishing. Phuc H. Le-Khac, Graham Healy, and Alan F. Smeaton. 2020. Contrastive representation learning: A framework and review. *IEEE Access*, 8:193907–193934. Chengdong Liang, Menglong Xu, and Xiao-Lei Zhang. 2021. Transformer-based end-to-end speech recognition with residual gaussian-based self-attention. arXiv preprint arXiv:2103.15722. Yuchen Liu, Junnan Zhu, Jiajun Zhang, and Chengqing Zong. 2020. Bridging the modality gap for speechto-text translation. *arXiv preprint arXiv:2010.14920*. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language* Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025–3036, Florence, Italy. Association for Computational Linguistics. Xutai Ma, Mohammad Javad Dousti, Changhan Wang, Jiatao Gu, and Juan Pino. 2020a. SIMULEVAL: An evaluation toolkit for simultaneous translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 144–150, Online. Association for Computational Linguistics. Xutai Ma, Juan Pino, and Philipp Koehn. 2020b. SimulMT to SimulST: Adapting simultaneous text translation to end-to-end simultaneous speech translation. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 582–587, Suzhou, China. Association for Computational Linguistics. Xutai Ma, Juan Miguel Pino, James Cross, Liezl Puzon, and Jiatao Gu. 2020c. Monotonic multihead attention. In *International Conference on Learning* Representations. 
Xutai Ma, Yongqiang Wang, Mohammad Javad Dousti, Philipp Koehn, and Juan Pino. 2021. Streaming simultaneous speech translation with augmented memory transformer. In *ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal* Processing (ICASSP), pages 7523–7527. Ha Nguyen, Fethi Bougares, Natalia Tomashenko, Yannick Estève, and laurent besacier. 2020. Investigating self-supervised pre-training for end-to-end speech translation. In *ICML 2020 Workshop on Selfsupervision in Audio and Speech*. Ha Nguyen, Yannick Estève, and Laurent Besacier. 2021. An empirical study of end-to-end simultaneous speech translation decoding strategies. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7528–7532. Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Optimizing segmentation strategies for simultaneous speech translation. In *Proceedings of the 52nd Annual Meeting of* the Association for Computational Linguistics (Volume 2: Short Papers), pages 551–556, Baltimore, Maryland. Association for Computational Linguistics. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. B. Petek, O. Andersen, and P. Dalsgaard. 1996. On the robust automatic segmentation of spontaneous speech. In *Proceeding of Fourth International Conference on Spoken Language Processing. ICSLP '96*, volume 2, pages 913–916 vol.2. Mark A. Pitt, Keith Johnson, Elizabeth Hume, Scott Kiesling, and William Raymond. 2005. The buckeye corpus of conversational speech: labeling conventions and a test of transcriber reliability. Speech Communication, 45(1):89–95. Maja Popovic. 2015. ´ chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Maja Popovic. 2017. ´ chrF++: words helping character n-grams. In *Proceedings of the Second Conference on Machine Translation*, pages 612–618, Copenhagen, Denmark. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, and Douglas Eck. 2017. Online and lineartime attention by enforcing monotonic alignments. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of *Proceedings* of Machine Learning Research, pages 2837–2846. PMLR. Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Andrej Ljolje, and Rathinavelu Chengalvarayan. 2013. Segmentation strategies for streaming speech translation. In *Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 230–238, Atlanta, Georgia. Association for Computational Linguistics. Okko Räsänen, Unto Laine, and Toomas Altosaar. 2009. An improved speech segmentation quality measure: the r-value. In *10th Interspeech Conference, Brighton, UK, September 6-10, 2009*. 
Yi Ren, Jinglin Liu, Xu Tan, Chen Zhang, Tao Qin, Zhou Zhao, and Tie-Yan Liu. 2020. SimulSpeech: End-to-end simultaneous speech to text translation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 3787– 3796, Online. Association for Computational Linguistics. Alaa Ehab Sakran, Sherif Mahdy Abdou, Salah Eldeen Hamid, and Mohsen Rashwan. 2017. A review: Automatic speech segmentation. *International Journal of Computer Science and Mobile Computing*, 6(4):308–315. Ruslan Salakhutdinov and Geoffrey Hinton. 2009. Semantic hashing. *International Journal of Approximate Reasoning*, 50(7):969–978. Special Section on Graphical Models and Information Retrieval. Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223–231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas. Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In *Advances in* Neural Information Processing Systems, volume 29. Curran Associates, Inc. Yun Tang, Juan Pino, Xian Li, Changhan Wang, and Dmitriy Genzel. 2021a. Improving speech translation by understanding and learning from the auxiliary text translation task. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4252–4261, Online. Association for Computational Linguistics. Yun Tang, Juan Pino, Changhan Wang, Xutai Ma, and Dmitriy Genzel. 2021b. A general multi-task learning framework to leverage text data for speech to text tasks. In *ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal* Processing (ICASSP), pages 6209–6213. Cassia Valentini-Botinhao and Simon King. 2021. Detection and analysis of attention errors in sequenceto-sequence text-to-speech. In Interspeech 2021: The 22nd Annual Conference of the International Speech Communication Association, pages 2746– 2750. ISCA. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, *Advances in Neural Information Processing Systems 30*, pages 5998–6008. Curran Associates, Inc. Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In *Proceedings of the 2019 ACL Workshop* BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63–76, Florence, Italy. Association for Computational Linguistics. Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, and Juan Pino. 2020. Fairseq S2T: Fast speech-to-text modeling with fairseq. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations, pages 33–39, Suzhou, China. Association for Computational Linguistics. Feng Wang and Huaping Liu. 2021. Understanding the behaviour of contrastive loss. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2495–2504. Ron J. 
Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to-sequence models can directly translate foreign speech. In Proc. Interspeech 2017, pages 2625–2629. Xueqing Wu, Lewen Wang, Yingce Xia, Weiqing Liu, Lijun Wu, Shufang Xie, Tao Qin, and Tie-Yan Liu. 2021. Temporally correlated task scheduling for sequence learning. In *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pages 11274–11284. PMLR. Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling localness for self-attention networks. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, pages 4449–4458, Brussels, Belgium. Association for Computational Linguistics. Shan Yang, Heng Lu, Shiyin Kang, Liumeng Xue, Jinba Xiao, Dan Su, Lei Xie, and Dong Yu. 2020. On the localness modeling for the self-attention based endto-end speech synthesis. *Neural Networks*, 125:121– 130. Mahsa Yarmohammadi, Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, and Baskaran Sankaran. 2013. Incremental segmentation and decoding strategies for simultaneous translation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1032–1036, Nagoya, Japan. Asian Federation of Natural Language Processing. Rong Ye, Mingxuan Wang, and Lei Li. 2022. Crossmodal contrastive learning for speech translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5099–5113, Seattle, United States. Association for Computational Linguistics. Mohd Abbas Zaidi, Beomseok Lee, Sangha Kim, and Chanwoo Kim. 2022. Cross-Modal Decision Regularization for Simultaneous Speech Translation. In Proc. Interspeech 2022, pages 116–120. Xingshan Zeng, Liangyou Li, and Qun Liu. 2021. RealTranS: End-to-end simultaneous speech translation with convolutional weighted-shrinking transformer. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2461–2474, Online. Association for Computational Linguistics. Ruiqing Zhang, Zhongjun He, Hua Wu, and Haifeng Wang. 2022a. Learning adaptive segmentation policy for end-to-end simultaneous translation. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 7862–7874, Dublin, Ireland. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2021a. ICT's system for AutoSimTrans 2021: Robust char-level simultaneous translation. In *Proceedings of the Second Workshop* on Automatic Simultaneous Translation, pages 1–11, Online. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2021b. Modeling concentrated cross-attention for neural machine translation with Gaussian mixture model. In *Findings of the* Association for Computational Linguistics: EMNLP 2021, pages 1401–1411, Punta Cana, Dominican Republic. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2021c. Universal simultaneous machine translation with mixture-of-experts wait-k policy. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7306–7317, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2022a. Gaussian multihead attention for simultaneous machine translation. 
In *Findings of the Association for Computational* Linguistics: ACL 2022, pages 3019–3030, Dublin, Ireland. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2022b. Informationtransport-based policy for simultaneous translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 992– 1013, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2022c. Modeling dual read/write paths for simultaneous machine translation. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 2461–2477, Dublin, Ireland. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2022d. Reducing position bias in simultaneous machine translation with length-aware framework. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6775– 6788, Dublin, Ireland. Association for Computational Linguistics. Shaolei Zhang and Yang Feng. 2023. Hidden markov transformer for simultaneous machine translation. In The Eleventh International Conference on Learning Representations. Shaolei Zhang, Yang Feng, and Liangyou Li. 2021. Future-guided incremental transformer for simultaneous translation. *Proceedings of the AAAI Conference* on Artificial Intelligence, 35(16):14428–14436. Shaolei Zhang, Shoutao Guo, and Yang Feng. 2022b. Wait-info policy: Balancing source and target at information level for simultaneous machine translation. In *Findings of the Association for Computational* Linguistics: EMNLP 2022, pages 2249–2263, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Yibin Zheng, Xinhui Li, Fenglong Xie, and Li Lu. 2020. Improving end-to-end speech synthesis with local recurrent neural network enhanced transformer. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6734–6738. ## A Dynamic Programming For Expected Feature-To-Segment Mapping In Sec.3.2, we propose the expected feature-tosegment map to get the expected segment representations while keeping segment representations differentiable to the segmentation. Expected feature-to-segment map calculates the probability p(ai ∈ Segk) that the speech feature ai belongs to the k th segment Segk , and then gets the expected segment representation by weighting all speech features ai with p(ai ∈ Segk). Given segmentation probability pi, we calculate p(ai ∈ Segk) via dynamic programming. Whether ai belongs to the k th segment depends on which segment that speech feature ai−1 is located in, consisting of 3 situations: - ai−1 ∈ Segk−1 : If DiSeg segments at feature ai−1 (with probability pi−1), then ai belongs to Segk ; - ai−1 ∈ Segk : If DiSeg does not segment at feature ai−1 (with probability 1 − pi−1), then ai belongs to Segk ; - Others: ai can not belong to Segk anyway, because the feature-to-segment mapping must be monotonic. By combining these situations, p(ai ∈ Segk) is calculated as: $$p(a_{i}\in\operatorname{Seg}_{k})=p\big{(}a_{i-1}\in\operatorname{Seg}_{k-1}\big{)}\times p_{i-1}\tag{16}$$ $$+\ p(a_{i-1}\in\operatorname{Seg}_{k})\times(1-p_{i-1}).$$ For the initialization, p(a1 ∈ Segk) is calculated as: $$p(a_{1}\in\operatorname{Seg}_{k})={\begin{cases}1&{\mathrm{if}}\;k=1\\ 0&{\mathrm{if}}\;k\neq1\end{cases}},\quad{\mathrm{(17)}}$$ where the first feature inevitably belongs to the first segment. 
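To make the recursion easier to trace, here is a minimal PyTorch-style sketch of the expected feature-to-segment mapping defined by Eqs. (16)-(17). The variable names are ours, and folding the probability mass that would flow past the K-th segment back into Seg_K is only our reading of the truncation noted next, not necessarily the released implementation.

```python
import torch
import torch.nn.functional as F

def expected_feature_to_segment(p: torch.Tensor, K: int) -> torch.Tensor:
    """Expected mapping q[i, k] = p(a_i in Seg_{k+1}), following Eqs. (16)-(17).

    p : (T,) tensor, p[i] = probability of segmenting at speech feature a_i
    K : number of segments, constrained to the word count of the transcription
    """
    first = p.new_zeros(K)
    first[0] = 1.0                       # Eq. (17): a_1 always lies in the first segment
    rows = [first]
    for i in range(1, p.size(0)):
        prev = rows[-1]
        stay = prev * (1 - p[i - 1])                         # no boundary at a_{i-1}: stay in Seg_k
        enter = F.pad(prev[:-1] * p[i - 1], (1, 0))          # boundary at a_{i-1}: Seg_{k-1} -> Seg_k
        overflow = F.pad(prev[-1:] * p[i - 1], (K - 1, 0))   # assumption: mass past Seg_K kept in Seg_K
        rows.append(stay + enter + overflow)                 # Eq. (16)
    return torch.stack(rows)                                 # shape (T, K)

# The expected representation of Seg_k then weights all speech features by q[:, k],
# e.g. seg_repr = q.t() @ features for speech features of shape (T, d).
```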
Since we constrained the number of segments to be K during training, we truncate p(ai ∈ Segk) at p(ai ∈ SegK). ## B Metrics Of Word Segmentation Task In Sec.5.3, we evaluate the segmentation quality of DiSeg on the automatic speech segmentation task (Demuynck and Laureys, 2002; Sakran et al., 2017), and here we give the specific calculation of metrics for the speech segmentation task. ![14_image_0.png](14_image_0.png) Precision (P), recall (R) and the corresponding F1 score are used to measure whether the segmentation position is correct compared with the groundtruth segmentation. Over-segmentation (OS) (Petek et al., 1996) measures whether the number of segments generated by the model is accurate, calculated as: $$\mathrm{OS}={\frac{R}{P}}-1,$$ $$(18)$$ where OS = 0 means that the number of segments is completely accurate, a larger OS means more segments, and a smaller OS means fewer segments. Since a large number of segments is easy to obtain a high recall score while a poor OS score, a more robust metric R-value (Räsänen et al., 2009) is proposed to measure recall score and OS score together. R-value is calculated as: $${\mathrm{R-value}}=1-{\frac{\left|r_{1}\right|+\left|r_{2}\right|}{2}},$$ where $\ r_{1}={\sqrt{\left(1-R\right)^{2}+\mathrm{OS}^{2}}}$, $$r_{2}={\frac{-\mathrm{OS}+R-1}{\sqrt{2}}}.$$ 2 + OS2, (20) A larger R-value indicates better segmentation quality, and the only way to achieve a perfect R-value is to get a perfect recall score (i.e., R = 1) and a perfect OS score (i.e., OS = 0). ## C Extended Analyses C.1 Effectiveness Of Contrastive Learning During expectation training, we use the word representations to supervise the segment representations via contrastive learning Lctr (Sohn, 2016). To 7673 verify its effect, we compared the performance of applying contrastive learning loss Lctr and some other loss functions for semantic supervision in Figure 8, including - Lctr: reduce the cosine similarity between the expected segment representation f s k and the corresponding word representation f t k and meanwhile separates f s k with the rest of the word representations; - Lcos: reduce the cosine similarity between the expected segment representation f s k and the corresponding word representation f t k . - L2: reduce the L2 distance between the expected segment representation f s k and the corresponding word representation f t k . The results show that the cosine similarity Lcos is better than the L2 distance to measure the difference between the segment representation and the word representation (Le-Khac et al., 2020; Ye et al., 2022), and the contrastive learning Lctr further improves DiSeg performance by introducing negative examples. In particular, since Lcos and L2 loss fails to separate the representation of the segment and non-corresponding words, it is easy to cause the segment corresponds to more words or fewer words, which can still reduce Lcos or L2 loss but is not conducive to learning the precise segmentation. By making positive pairs (i.e., segment and the corresponding word) attracted and negative pairs (i.e., segment and those non-corresponding words) separated (Sohn, 2016), contrastive learning Lctr can learn more precise segmentation boundaries and thereby achieve better performance. ## C.2 Visualization Of Segmented Attention We visualize the proposed segmented attention, the previous uni-directional attention and bi-directional attention in Figure 9, 10 and 11, including the speech with various lengths. 
Comprehensive Compared with bi-directional attention, uni-directional attention obviously loses some information from the subsequent speech features (Zhang et al., 2021; Zhang and Feng, 2022d; Iranzo Sanchez et al., 2022). Segmented attention applies bidirectional attention within a segment and thereby can get a more comprehensive representation. In Figure 9, compared with uni-directional attention, segmented attention is more similar with bi-directional attention in attention distribution. Precise DiSeg learns the translation-beneficial segmentation through the proposed expected segmented attention (refer to Eq.(7)), which encourages the model to segment the inputs at the feature aiif ai does not need to pay attention to subsequent features. As shown in Figure 9(b), 10(b) and 11(b), DiSeg can learn precise segmentation and ensure the acoustic integrity in each segment. In particular, the segmentation shown in Figure 9(b) and 10(b) almost guarantees that each speech segment corresponds to a word in the transcription. Note that the last segment in Figure 9(b) and 10(b) often corresponds to silence at the end of speech. Since DiSeg learns segmentation without labeled segmentation/alignment data, the proposed segmented attention can be applied to more streaming tasks. Concentrate The issue of attention dispersion caused by long speech is one of the major challenges for speech modeling (Yang et al., 2020; Liang et al., 2021; Valentini-Botinhao and King, 2021; Zheng et al., 2020). As shown in Figure 11(a) and 11(c), both uni-directional and bi-directional attention tends to become scattered when dealing with long speech, and each feature can only get a very small amount of attention weight (e.g., the maximum attention weight in Figure 11(c) is 0.01.), which affects the modeling capability of the attention mechanism (Vig and Belinkov, 2019; Ding et al., 2019; Valentini-Botinhao and King, 2021). Segmented attention applies bi-directional attention within a segment and uni-directional attention between segments, which naturally introduces locality to attention, thereby effectively mitigating the issue of attention dispersion (Luong et al., 2015; Yang et al., 2018; Liang et al., 2021; Zhang and Feng, 2021b; Zheng et al., 2020). Specifically, as shown in Figure 10(b), segmented attention can be concentrated in the segment and pay more attention to the surrounding features. As shown in Figure 11(b), although the sequence of speech features is extremely long, segmented attention also can focus on the features in each segment (e.g., a clear attention distribution can be found inside each segment in Figure 11(b), and the maximum attention weight is 0.47.). Therefore, segmented attention provides a solution to enhance locality in long speech modeling. ## C.3 Case Study We visualize the simultaneous translation process of DiSeg on simple and hard cases in Figure 12. ![16_image_0.png](16_image_0.png) ![16_image_1.png](16_image_1.png) ![16_image_2.png](16_image_2.png) ![17_image_0.png](17_image_0.png) ![17_image_1.png](17_image_1.png) Horizontally, the position of the word in outputs is the moment when it is translated, corresponding to the speech inputs. Red lines indicate where DiSeg decides to segment the speech inputs, and gray lines indicate the fixed segmentation of 280ms. For clarity, we use an external alignment tool Forced-Alignment9(Kürzinger et al., 2020) to align the transcription with the speech, where the green area is the speech interval corresponding to the transcription marked by the tool. 
Note that the alignment provided by external tools is only a rough reference, not necessarily absolutely accurate, and the value in the marked interval represents 9A CTC-based alignment tool based on the full speech and ground-truth transcription, the tutorial of which can be found at https://pytorch.org/audio/main/tutorials/ forced_alignment_tutorial.html ## The Probability Of Alignment. For the simple case in Figure 12(a), where the speech is short and the correspondence between reference and speech (transcription) is much monotonic, DiSeg can basically accurately segment the speech inputs and achieve high-quality translation. In particular, when both lagging 3 segments, DiSeg achieves much lower latency than Wait-k due to more precise segmentation. For the hard case in Figure 12(b), where the speech is much longer and contains a long silence, DiSeg can also precisely segment the speech inputs. Besides, there is an obvious word order difference (Zhang and Feng, 2022a) between reference and speech (transcription) in this case, which is more challenging for SimulST (Ma et al., 2019). Since the fixed segmentation cannot adjust, Wait-k misses translating '*know*'. DiSeg can dynamically adjust the segmentation, and thereby decides to segment and translate '*weiß*' after receiving '*know*' in the speech. Owing to precise segmentation, DiSeg can achieve better translation quality under the same latency. ## D Numerical Results D.1 Metrics For latency, besides Average Lagging (AL) (Ma et al., 2019), we additionally use Consecutive Wait (CW) (Gu et al., 2017), Average Proportion (AP) (Cho and Esipova, 2016) and Differentiable Average Lagging (DAL) (Arivazhagan et al., 2019) to evaluate the latency of DiSeg. Assuming that DiSeg translates yt at the moment T (yt), the calculations of latency metrics are as follows. Consecutive Wait (Gu et al., 2017) CW evaluates the average waiting duration between two adjacent outputs, calculated as: $$\text{CW}=\frac{\sum_{t=1}^{|y|}(\mathcal{T}(y_{t})-\mathcal{T}(y_{t-1}))}{\sum_{t=1}^{|y|}\mathbbm{1}_{\mathcal{T}(y_{t})-\mathcal{T}(y_{t-1})>0}},\tag{22}$$ where $\mathbbm{1}_{\mathcal{T}(y_{t})-\mathcal{T}(y_{t-1})>0}$ counts the number of $\mathcal{T}(y_{t})-\mathcal{T}(y_{t-1})>0$. **Average Preparation (Cho and Einsova, 2016)** Average Proportion (Cho and Esipova, 2016) AP evaluates the average proportion between T (yt) and the total duration T of the complete source speech, calculated as: $$\mathrm{AP}={\frac{1}{|\mathbf{y}|}}\sum_{t=1}^{|\mathbf{y}|}{\frac{T(y_{t})}{T}}.$$ $$(23)$$ $$(24)$$ $$(25)$$ T. 
(23) Average Lagging (Ma et al., 2019, 2020b) AL evaluates the average duration that target outputs lag behind the speech inputs, is calculated as: $$\mathrm{AL}={\frac{1}{\tau}}\sum_{t=1}^{\tau}{\mathcal{T}}(y_{t})-{\frac{t-1}{|\mathbf{y}|\,/\,T}},$$ where $\tau=\operatorname*{argmin}_{t}\left({\mathcal{T}}(y_{t})=T\right).$ |y| / T , (24) Differentiable Average Lagging (Arivazhagan et al., 2019) DAL is a differentiable version of average lagging, calculated as: $$\mathrm{DAL}={\frac{1}{|\mathbf{y}|}}\sum_{t=1}^{|\mathbf{y}|}{\mathcal{T}}^{\prime}(y_{t})-{\frac{t-1}{|\mathbf{y}|\,/\,T}},\qquad{\mathrm{(26)}}$$ where $$\mathcal{T}^{\prime}(y_{t})\!=\!\begin{cases}\mathcal{T}(y_{t})&t=1\\ \operatorname*{max}\!\left(\mathcal{T}(y_{t})\,,\mathcal{T}^{\prime}(y_{t-1})\!+\!\frac{T}{|\mathbf{y}|}\right)&t>1\end{cases}.\tag{27}$$ For translation quality, in addition to SacreBLEU (Post, 2018), we also provide TER (Snover et al., 2006), chrF (Popovic´, 2015) and chrF++ (Popovic´, 2017) score of DiSeg. ## D.2 Numerical Results The numerical results of DiSeg with more metrics are reported in Table 4 and Table 5. $$7677$$ | En→De | | | | | | | | | |---------|------|------|------|------|-------|-------|-------|--------| | k | CW | AP | AL | DAL | BLEU | TER | chrF | chrF++ | | 1 | 462 | 0.67 | 1102 | 1518 | 18.85 | 73.13 | 44.29 | 42.31 | | 3 | 553 | 0.76 | 1514 | 1967 | 20.74 | 69.95 | 49.34 | 47.09 | | 5 | 666 | 0.82 | 1928 | 2338 | 22.11 | 66.90 | 50.13 | 47.94 | | 7 | 850 | 0.86 | 2370 | 2732 | 22.98 | 65.42 | 50.36 | 48.23 | | 9 | 1084 | 0.90 | 2785 | 3115 | 23.01 | 65.48 | 50.24 | 48.13 | | 11 | 1354 | 0.92 | 3168 | 3464 | 23.13 | 65.04 | 50.42 | 48.31 | | 13 | 1632 | 0.94 | 3575 | 3846 | 23.05 | 64.85 | 50.53 | 48.41 | | 15 | 1935 | 0.96 | 3801 | 4040 | 23.12 | 64.92 | 50.47 | 48.36 | | En→Es | | | | | | | | | |---------|------|------|------|------|-------|-------|-------|--------| | k | CW | AP | AL | DAL | BLEU | TER | chrF | chrF++ | | 1 | 530 | 0.67 | 1144 | 1625 | 22.03 | 71.34 | 45.69 | 43.82 | | 3 | 563 | 0.76 | 1504 | 2107 | 24.49 | 66.63 | 53.09 | 50.85 | | 5 | 632 | 0.81 | 1810 | 2364 | 26.58 | 63.35 | 54.55 | 52.39 | | 7 | 788 | 0.85 | 2249 | 2764 | 27.81 | 61.87 | 55.28 | 53.16 | | 9 | 1010 | 0.89 | 2694 | 3164 | 28.33 | 60.98 | 55.51 | 53.40 | | 11 | 1257 | 0.92 | 3108 | 3530 | 28.59 | 60.63 | 55.64 | 53.55 | | 13 | 1534 | 0.94 | 3479 | 3855 | 28.72 | 60.49 | 55.61 | 53.53 | | 15 | 1835 | 0.95 | 3819 | 4160 | 28.92 | 60.22 | 55.80 | 53.71 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In Limitations Section. A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? In Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank. ✓ B1. Did you cite the creators of artifacts you used? In Section 4 and 5. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In Section 4. ## C ✓ **Did You Run Computational Experiments?** In Section 4 And 5. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In Sectoin 4.2. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In Appendix D. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. Left blank. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In Sectoin 4.2. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
shen-etal-2023-joint
Joint Generator-Ranker Learning for Natural Language Generation
https://aclanthology.org/2023.findings-acl.486
Generate-then-rank is a widely used mechanism for text generation, where a generator produces multiple text candidates and a ranker chooses the best one among the text candidates. However, existing methods usually train the generator and the ranker individually, neglecting the mutual feedback that could further enhance the generation quality. To tackle this limitation, we propose JGR, a novel joint training algorithm that integrates the generator and the ranker in a single framework. JGR optimizes the generator with a hybrid objective that combines data likelihood and ranker reward, and trains the ranker with a contrastive loss that compares the generator outputs. By iteratively updating the generator and the ranker, JGR can effectively harmonize their learning and enhance their quality jointly. We evaluate JGR on various text generation tasks and demonstrate that it surpasses existing methods on four public datasets across three common generation scenarios. Our code and models are publicly available at \url{https://github.com/microsoft/ProphetNet/tree/master/JGR}.
# Joint Generator-Ranker Learning For Natural Language Generation Weizhou Shen1∗ , Yeyun Gong2† , Yelong Shen3†**, Song Wang**3, Xiaojun Quan4†, Nan Duan2, **Weizhu Chen**3 1School of Computer Science and Engineering, Sun Yat-sen University 2Microsoft Research Asia, 3Microsoft Research 1shenwzh3@mail2.sysu.edu.cn, 2{yegong, nanduan}@microsoft.com, 3{yelong.shen, sonwang, wzchen}@microsoft.com, 4xiaojunquan@gmail.com ## Abstract Generate-then-rank is a widely used mechanism for text generation, where a generator produces multiple text candidates and a ranker chooses the best one among the text candidates. However, existing methods usually train the generator and the ranker individually, neglecting the mutual feedback that could further enhance the generation quality. To tackle this limitation, we propose JGR, a novel joint training algorithm that integrates the generator and the ranker in a single framework. JGR optimizes the generator with a hybrid objective that combines data likelihood and ranker reward, and trains the ranker with a contrastive loss that compares the generator outputs. By iteratively updating the generator and the ranker, JGR can effectively harmonize their learning and enhance their quality jointly. We evaluate JGR on various text generation tasks and demonstrate that it surpasses existing methods on four public datasets across three common generation scenarios. Our code and models are publicly available at https://github.com/ microsoft/ProphetNet/tree/master/JGR. ## 1 Introduction The quality of the output texts produced by neural natural language generation (NLG) models, such as those for machine translation (Vaswani et al., 2017) and summarization (Lewis et al., 2019), depends largely on how they are trained and decoded. The conventional approach is to train them with loglikelihood objectives and decode them with greedy or beam search strategies. However, this approach often fails to select the best sample with the highest evaluation score among the generated candidates, as shown by previous studies (Cohen and Beck, 2019; Meister et al., 2020). To overcome this limitation, some recent works (Liu and Liu, 2021; Liu et al., 2021; Li et al., ∗ Done during his internship at Microsoft Research Asia. † Corresponding authors. 2022b; Ravaut et al., 2022) proposed to use a separate ranker model to re-rank the output texts of the generator model, following a generate-then-rank pipeline. This pipeline can improve the quality of the output texts by exploiting the ranker model's ability to evaluate and compare different candidates. However, this pipeline also has a drawback: it requires training the generator and ranker models in two separate phases, which may not fully exploit the generative ability of the generator model and the feedback from the ranker model. In this paper, we propose a novel Joint training paradigm of both Generator and Ranker (JGR) for NLG tasks, which aims to overcome the drawback of the generate-then-rank pipeline. Unlike previous works, which train the generator and ranker models separately, we explore a joint and iterative training algorithm that updates both models in turn. Our main motivation for the joint and iterative training of the generator and ranker is twofold. First, the ranker model can provide valuable feedback to the generator model based on the ranking scores of the generated candidates. This encourages the generator model to produce better outputs. 
Second, the ranker model can also benefit from the outputs of a progressively better generator model, and improve its ranking performance gradually. The JGR framework consists of a generator and a ranker. During training, the generator and ranker alternate to update their parameters, and each of them involves the other's outputs in its own input signals. Specifically, the ranker model is trained to rank the outputs generated by the generator model for a given input text by assigning a ranking score. At the generator training phase, the generator model uses a combination of the ranker score and the matching score (e.g., BLEU) as the reward for each sample, and trains with policy gradients, which encourages the generator to produce candidates with higher rewards and mitigates the exposure bias issue in the teacher-forcing learning. To assess the effectiveness of JGR, we conduct experiments on four diverse NLG tasks from different domains, including abstractive summarization (Hermann et al., 2015), conversational summarization (Gliwa et al., 2019), question generation (Rajpurkar et al., 2016), and dialogue (Zhang et al., 2018). The experimental results demonstrate that JGR achieves remarkable performance gains over the conventional MLE training method, with a 3-point increase in ROUGE-2 score on the CNN/DailyMail dataset and a 3.5-point increase in BLEU-2 score on PersonaChat. Furthermore, we make several interesting observations from the results. First, the rewards from the ranker are more effective than the rewards from the direct metrics, but combining them together stabilizes the training and produces a better generator. Second, training the ranker only on the candidates from the generator is better than using ground-truth as positive examples. Third, sampling more candidates during training leads to better performance within a certain range, which is consistent with data augmentation. Fourth, though trained with reinforcement learning aimed at optimizing automatic evaluation metrics, JGR still does not compromise on other aspects of generation quality. Finally, the joint training paradigm increases the diversity of the generator outputs, which in turn benefits the ranker training. ## 2 Related Work 2.1 Natural Language Generation Natural language generation is a long-standing research topic. RNN-based methods for dialog systems (Wen et al., 2015) and convolutional methods for translation (Gehring et al., 2016) are some examples of earlier approaches. In the last few years, pre-trained transformer models have advanced the state of the art on many NLG tasks. These models, such as BART (Lewis et al., 2019), ProphetNet (Qi et al., 2020), and T5 (Raffel et al., 2020), use an encoder-decoder architecture and leverage large amounts of unlabeled data. Other models, such as GPT2 (Radford et al., 2019) and UniLM (Dong et al., 2019), use only a decoder or an encoder for natural language generation. Reinforcement learning can assist the training of NLG models, as shown by several works. Rennie et al. (2017); Paulus et al. (2018) used selfcritical methods that measure the reward as the difference between the metric score and the baseline score. Bahdanau et al. (2017); Le et al. (2022) introduced actor-critic frameworks (Konda and Tsitsiklis, 1999), which is also a joint training framework, while they have not considered the contrastive rewards between different candidates given one input. We provide a more detailed comparison in A.1. 
Another common approach to NLG is to apply adversarial networks (Goodfellow et al., 2014). For example, SeqGAN (Yu et al., 2017), RankGAN (Lin et al., 2017), GCN (Lamprier et al., 2022) and SelfGAN (Scialom et al., 2021b). These methods also introduce a joint training framework, however, instead of training a ranker, they trained a discriminator, which distinguishes the ground-truth text and the generator outputs. In Appendix A.2, we detail the main distinctions between these methods and our JGR. ## 2.2 Generate-Then-Rank Framework The generate-then-rank framework generates some candidate texts with a generator and then ranks them with a ranker. SimCLS (Liu and Liu, 2021), RefSum (Liu et al., 2021), and SumRanker (Ravaut et al., 2022) train rankers separately to rank the outputs of summarization models such as BART (Lewis et al., 2019). In other domains, such as code generation and math problem solving, rankers are also used to evaluate the generated outputs, as shown by AlphaCode (Li et al., 2022b) and Verifier (Cobbe et al., 2021). There are also some works trying to compress the generate-then-rank pipeline to one single model using extra training objectives, for example, MATCHSUM (Zhong et al., 2020), CoLo (An et al., 2022), and BRIO (Liu et al., 2022) with contrastive learning, and Amortized Noisy-Channel NMT (Pang et al., 2021) with Q-learning. However, the above methods do not explore the joint training framework that optimizes both generators and rankers together. In the retrieve-then-rank framework for dense retrieval (Karpukhin et al., 2020), a retriever first finds relevant documents from a large collection, then a ranker reorders them according to their scores. Our JGR is partially motivated by this framework, we think in the generate-then-rank framework, the generation can be viewed as a retrieval process. Therefore, during training and inference, the generator should sample enough candidates for the ranker to re-rank. Several works have proposed to jointly train the retriever and the ranker to improve retrieve-then-rank framework. Such as ![2_image_0.png](2_image_0.png) RocketQA v2 (Ren et al., 2021) and AR2 (Zhang et al., 2021). However, to our knowledge, JGR is the first work applying the joint training paradigm to the generate-then-rank framework for NLG. ## 3 Methodology The model architecture of our JGR, shown in Figure 1, has two components: a generator that outputs several text candidates for an input text using an encoder-decoder model, and a ranker that scores these text candidates. The JGR workflow works as follows: a) the generator generates multiple text candidates conditioned on the input text; b) the input text and the text candidates are combined and sent to the ranker; c) the ranker learns to rank the text candidates via a contrastive learning objective; d) the ranker gives a reward to each text candidate, which in turn is used to train the generator. In the following, we first introduce the basic elements of conditional text generation, including problem definition, model architecture, and model training. ## 3.1 Preliminaries Given a text pair (x, y), x is the input text sequence, y is the target text sequence. The conditional text generation tasks ask the model to generate a high-quality output yˆ that close the ground-truth y based on the input x. We adopt the Transformer-based (Vaswani et al., 2017) encoderdecoder architecture as the general model for conditional text generation. 
The encoder transforms x into a tensor representation $\mathcal{H}_e$ using the Transformer model, as shown in Eqn. 1:

$$\mathcal{H}_e = \mathbf{Encoder}(\mathbf{x}). \quad (1)$$

The decoder takes $\mathcal{H}_e$ as input and produces a text sequence in an auto-regressive fashion:

$$\hat{\mathbf{y}} \sim \mathbf{Decoder}(\hat{\mathbf{y}}, \mathcal{H}_e) = \prod_{t=1}^{|\hat{\mathbf{y}}|} p(\hat{y}_t \mid \hat{y}_{<t}, \mathcal{H}_e). \quad (2)$$

To simplify notation, we use $G_\theta(\cdot)$ to denote the encoder-decoder generation model with parameters $\theta$, and $p_{G_\theta}(\hat{\mathbf{y}}\mid\mathbf{x})$ to denote the probability of generating $\hat{\mathbf{y}}$ given $\mathbf{x}$. The standard way to train the encoder-decoder sequence generation model is to minimize the negative log-likelihood of the ground-truth target sequence:

$$\mathcal{L}_{\mathrm{NLL}} = -\sum_{t=1}^{|\mathbf{y}|} \log p_{G_\theta}(y_t \mid y_{<t}, \mathbf{x}). \quad (3)$$

At inference time, a decoding strategy such as beam search is usually adopted. However, previous studies (Cohen and Beck, 2019; Meister et al., 2020) observed that the top-scored candidate from such decoding strategies is often not the best candidate with respect to the evaluation metric. We therefore design JGR to alleviate this problem through joint training of the generator and a ranker.

## 3.2 Joint Generator-Ranker Training

We use $G_\theta(\cdot)$ and $D_\phi(\cdot)$ to represent the generator model and the ranker model, respectively, where $G_\theta(\cdot)$ is a text generation model with an encoder-decoder structure as explained in Section 3.1, and $D_\phi(\cdot)$ is a scoring model that takes the concatenation of the input text $\mathbf{x}$ and a generated text $\hat{\mathbf{y}}$ as input and outputs a scalar value $s_{\hat{\mathbf{y}}}$ representing the quality of the generated text:

$$s_{\hat{\mathbf{y}}} = D_\phi([\mathbf{x}, \hat{\mathbf{y}}]). \quad (4)$$

During training, the generator and ranker are trained alternately and iteratively. Algorithm 1 shows the training procedure of JGR. We first warm up the generator $G_\theta$ with the standard negative log-likelihood (NLL) loss of Eqn. 3. Then, we iteratively update the ranker and the generator as follows.

**Fix $G_\theta(\cdot)$, Train $D_\phi(\cdot)$:** the goal of the ranker model $D_\phi(\cdot)$ is to choose the best sample from a set of candidates generated by the generator model, which we denote as

$$\hat{\mathcal{Y}} = \{\hat{\mathbf{y}}^{1}, \hat{\mathbf{y}}^{2}, ..., \hat{\mathbf{y}}^{C}\} \sim p_{G_\theta}(\cdot\mid\mathbf{x}), \quad (5)$$

where $C$ is the number of sampled candidates. For each $\hat{\mathbf{y}}^{i}$, we calculate its matching score (e.g., BLEU or ROUGE) with the ground-truth text $\mathbf{y}$, denoted as $\Delta(\mathbf{y}, \hat{\mathbf{y}}^{i})$. We then pick positive and negative samples from the candidate set based on $\Delta(\mathbf{y}, \hat{\mathbf{y}}^{i})$ for training the ranker. Specifically, we use $\hat{\mathbf{y}}^{+}$ to denote the candidate with the highest matching score, and $\hat{\mathcal{Y}}^{-}$, whose size is a hyper-parameter, to denote the negative candidate set containing the candidates with the lowest scores.
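As a rough illustration of this candidate-construction step, the sketch below picks $\hat{\mathbf{y}}^{+}$ and $\hat{\mathcal{Y}}^{-}$ from a set of generator samples. The `generate_candidates` and `matching_score` helpers are hypothetical placeholders; the default numbers mirror the 16 sampled candidates and 2 negatives reported in Appendix G, but the function is a sketch, not the authors' implementation.

```python
# Hypothetical helpers:
#   generate_candidates(x, c) -> list of c i.i.d. samples from p_G(.|x) (e.g., nucleus sampling)
#   matching_score(y_ref, y_hat) -> Delta(y, y_hat), e.g., a ROUGE or BLEU mixture

def build_ranker_examples(x, y_ref, generate_candidates, matching_score,
                          num_candidates=16, num_negatives=2):
    """Select the positive candidate y+ (highest matching score) and the
    negative set Y- (lowest matching scores) from generator samples."""
    candidates = generate_candidates(x, num_candidates)          # {y^1, ..., y^C}
    scored = sorted(candidates,
                    key=lambda cand: matching_score(y_ref, cand),
                    reverse=True)
    positive = scored[0]                  # y^+: best candidate w.r.t. Delta
    negatives = scored[-num_negatives:]   # Y^-: the worst candidates
    return positive, negatives
```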
The ranker model is trained by minimizing a contrastive loss:

$$\mathcal{L}^{\phi} = -\log p_{D_\phi}(\hat{\mathbf{y}}^{+} \mid \hat{\mathcal{Y}}^{-}, \mathbf{x}), \quad (6)$$

where $p_{D_\phi}(\hat{\mathbf{y}}^{+} \mid \hat{\mathcal{Y}}^{-}, \mathbf{x})$ is the probability of selecting $\hat{\mathbf{y}}^{+}$ from $\{\hat{\mathbf{y}}^{+}\} \cup \hat{\mathcal{Y}}^{-}$, computed by applying a softmax over the ranking scores:

$$p_{D_\phi}(\hat{\mathbf{y}}^{+} \mid \hat{\mathcal{Y}}^{-}, \mathbf{x}) = \frac{\exp(s_{\hat{\mathbf{y}}^{+}})}{\exp(s_{\hat{\mathbf{y}}^{+}}) + \sum_{\hat{\mathbf{y}}^{-} \in \hat{\mathcal{Y}}^{-}} \exp(s_{\hat{\mathbf{y}}^{-}})}, \quad (7)$$

where $s_{\hat{\mathbf{y}}^{+}}$ and $s_{\hat{\mathbf{y}}^{-}}$ are the ranking scores of the positive and negative candidates, respectively. After several steps of updating the ranker, we fix the ranker and update the generator.

**Fix $D_\phi(\cdot)$, Train $G_\theta(\cdot)$:** the generator model is trained with two objectives. The first is $\mathcal{L}_{\mathrm{NLL}}$, which uses teacher forcing to minimize the negative log-likelihood over the training instances, as discussed in Section 3.1 (Eqn. 3). The second is $\mathcal{L}_{\mathrm{RL}}$, a reinforcement-learning objective in which the generator acts as a policy network that produces a list of text samples $\hat{\mathcal{Y}}$ given the input $\mathbf{x}$, and the ranker assigns a reward to each sample in $\hat{\mathcal{Y}}$ based on its ranking score. The generator can be trained by maximizing the expected reward, i.e., minimizing its negative (Sutton et al., 1999):

$$\mathcal{L}_{\mathrm{RL}} = -\sum_{\hat{\mathbf{y}} \in \hat{\mathcal{Y}}} \big(\mathcal{R}(\hat{\mathbf{y}}) - b\big) \sum_{t} \log p_{G_\theta}(\hat{y}_t \mid \hat{y}_{<t}, \mathbf{x}), \quad (8)$$

where $\mathcal{R}(\hat{\mathbf{y}})$ is the reward for sample $\hat{\mathbf{y}}$, obtained by combining the matching score $\Delta(\hat{\mathbf{y}}, \mathbf{y})$ and the ranking score $s_{\hat{\mathbf{y}}}$: $\mathcal{R}(\hat{\mathbf{y}}) = \Delta(\hat{\mathbf{y}}, \mathbf{y}) + s_{\hat{\mathbf{y}}}$. A baseline $b$ is used to reduce the variance of RL training and is computed by averaging the rewards of all samples in the candidate set: $b = \sum_{\hat{\mathbf{y}} \in \hat{\mathcal{Y}}} \mathcal{R}(\hat{\mathbf{y}}) / C$. We then combine $\mathcal{L}_{\mathrm{NLL}}$ and $\mathcal{L}_{\mathrm{RL}}$ to form the final training objective of the generator:

$$\mathcal{L}^{\theta} = \mathcal{L}_{\mathrm{NLL}} + \mathcal{L}_{\mathrm{RL}}. \quad (9)$$

After updating the generator for several steps, we go back to fixing the generator and updating the ranker. This iteration continues until the entire JGR framework converges.

**Algorithm 1** Joint Training of Generator and Ranker (JGR)
Require: Generator $G_\theta$; Ranker $D_\phi$; Training data $\mathcal{D}$.
1: Initialize $G_\theta$ and $D_\phi$ from the pre-trained language models.
2: Train the warm-up generator $G^{0}_\theta$ on $\mathcal{D}$.
3: **while** model has not converged **do**
4:   **for** training steps $A$ **do**
5:     Sample candidates $\hat{\mathcal{Y}} \sim p_{G_\theta}(\cdot\mid\mathbf{x})$ for each $\mathbf{x}$ in the mini-batch.
6:     Select $\hat{\mathbf{y}}^{+}$ and $\hat{\mathcal{Y}}^{-}$ from $\hat{\mathcal{Y}}$.
7:     Update the parameters of $D_\phi$ with Eq. 6.
8:   **end for**
9:   **for** training steps $B$ **do**
10:     Sample candidates $\hat{\mathcal{Y}} \sim p_{G_\theta}(\cdot\mid\mathbf{x})$ for each $\mathbf{x}$ in the mini-batch.
11:     Compute the reward $\mathcal{R}(\hat{\mathbf{y}})$ for each $\hat{\mathbf{y}} \in \hat{\mathcal{Y}}$.
12:     Update the parameters of $G_\theta$ with Eq. 9.
13:   **end for**
14: **end while**
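To make Eqns. 6–9 concrete, the PyTorch-style sketch below writes out the two per-example losses. The `ranker_scores`, `seq_logprob`, and `matching_score` helpers are hypothetical placeholders chosen for illustration; this is a minimal sketch of the objectives, not the authors' released training code.

```python
import torch
import torch.nn.functional as F

# Assumed (hypothetical) helpers:
#   ranker_scores(x, cands)  -> 1-D tensor with one scalar score s_y per candidate
#   seq_logprob(x, y)        -> scalar tensor sum_t log p_G(y_t | y_<t, x)
#   matching_score(y_ref, y) -> Delta(y, y_hat), e.g., a ROUGE/BLEU mixture

def ranker_loss(x, positive, negatives, ranker_scores):
    # Eqns. 6-7: softmax over {y+} U Y-, maximize the probability of the positive (index 0).
    scores = ranker_scores(x, [positive] + list(negatives))
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(scores.unsqueeze(0), target)

def generator_rl_loss(x, y_ref, candidates, ranker_scores, seq_logprob, matching_score):
    # Eqn. 8: reward = Delta + ranker score; baseline = mean reward over the candidate set.
    with torch.no_grad():
        s = ranker_scores(x, candidates)
        delta = torch.tensor([matching_score(y_ref, c) for c in candidates])
        rewards = delta + s
        baseline = rewards.mean()
    logps = torch.stack([seq_logprob(x, c) for c in candidates])
    return -((rewards - baseline) * logps).sum()

# In Algorithm 1 these are applied in alternation: A steps of ranker updates with
# ranker_loss, then B steps of generator updates with L_NLL + generator_rl_loss (Eqn. 9).
```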
## 4 Experimental Settings 4.1 Datasets We evaluate the proposed method on four publicly available benchmarks across four domains: Method CNN/DailyMail SAMSum R-1 R-2 R-L AVG R-1 R-2 R-L AVG Lead-3 40.42 17.62 36.67 31.57 - - - - PTGEN (See et al., 2017) 36.44 15.66 33.42 28.51 - - - - PTGEN-COV (See et al., 2017) 39.53 17.28 36.38 31.06 - - - - BART (Lewis et al., 2019) 44.16∗ 21.28∗ 40.90∗ 35.45∗ 52.86†∗ 28.24†∗ 48.57†∗ 43.22†∗ PEGASUS (Zhang et al., 2020) 44.17 21.47 41.11 35.58 51.99 27.59 47.56 42.38 ProphetNet (Qi et al., 2021) 44.20 21.17 41.30 35.56 52.62 27.77† 48.33 42.91 GSUM (Dou et al., 2021) 45.94 22.32 42.48 36.91 - - - - BRIO (Liu et al., 2022) 47.48∗ 23.55∗ 44.57∗ 38.53∗- - - - JGR-G 46.86 23.18 43.74 37.93 53.85 29.22 49.93 44.33 JGR-R 47.63 **23.59** 44.50 38.57 54.30 29.48 50.51 **44.76** JGR-Ginit w. BRIO 48.39 23.22 46.11 39.24 - - - - JGR-Rinit w. BRIO **48.86** 23.35 46.56 **39.59** - - - - Table 1: Overall results on CNN/DailyMail and SAMSum. "JGR-G" indicates the generator model in JGR, and "JGR-R" is using the ranker of JGR to re-rank the outputs of JGR-G. The results with "†" means from our implementation. The results with "∗" are the results of backbone models for JGR-G. R-L B-4 MTR MASS (Song et al., 2019) 50.98 23.14 25.36 BART (Lewis et al., 2019) 51.46∗ 23.14∗ 26.56∗ UNILM (Dong et al., 2019) 52.04 23.75 25.61 ProphetNet (Qi et al., 2020) 51.50 22.50 26.00 JGR-G 52.79 24.52 26.46 JGR-R **53.57 24.73 26.97** Table 2: Overall results on SQUAD 1.1. CNN/DailyMail (Hermann et al., 2015) for abstractive summarization, SAMSum (Gliwa et al., 2019) for conversational summarization, SQuAD 1.1 (Rajpurkar et al., 2016) for question generation, and PersonaChat (Zhang et al., 2018) for dialogue generation. The details of these benchmarks and the used evaluation metrics are given in Appendix F. ## 4.2 Implemention Details We use BART-large (Lewis et al., 2019) as the backbone model for the generator. The backbone of the ranker is based on RoBERTa-large (Liu et al., 2019). The generator and ranker models are initialized with the off-the-shelf checkpoints1. On CNN/DailyMail , apart from initializing JGR with the language models, we also evaluate JGR that initializes the generator using the previous state-ofthe-art model BRIO (Liu et al., 2022).2 During training, the generator model adopts a nucleus sampling approach to generate the candidate set with temperature = 1.0 and top(p) = 1.0. In inference, we apply beam search decoding strategy with beam size = 16, and length penalty = Table 3: Overall results on PersonaChat. 1.0 for the generator, and we take the output text with the highest beam search score as the final output of the generator. We use the ranker to re-rank the total 16 beam search results and pick the one with the highest ranking order as the final output of the ranker. The details of other hyper-parameters (e.g., learning rate and training epochs, etc) are listed in Appendix G. JGR is implemented based on the open-source Huggingface Transformers framework (Wolf et al., 2020). We conduct experiments on a single node of 8 NVIDIA A100 GPUs. It is worth noting that in order to initialize the ranker with a more general and reasonable ranking function, we increase the number of training steps and add a certain number of warm-up steps at the first ranker training iteration. The hyperparameters of the first ranker training iteration are also introduced in Appendix G. 
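For concreteness, the JGR-R inference procedure described above (decode 16 beams, score each [input, candidate] pair with the ranker, return the top-ranked one) can be sketched with Hugging Face Transformers as follows. The checkpoint names and the single-logit classification head are our assumptions for illustration; in practice the ranker would be the jointly trained RoBERTa-based model, not a freshly initialized head.

```python
import torch
from transformers import (AutoTokenizer, BartForConditionalGeneration,
                          AutoModelForSequenceClassification)

gen_tok = AutoTokenizer.from_pretrained("facebook/bart-large")
generator = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
rank_tok = AutoTokenizer.from_pretrained("roberta-large")
# Placeholder ranker: a RoBERTa encoder with a single-logit head producing s_y.
ranker = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=1)

def jgr_r_predict(source: str) -> str:
    inputs = gen_tok(source, return_tensors="pt", truncation=True, max_length=1024)
    # Beam search with beam size 16 and length penalty 1.0, keeping all 16 beams.
    beams = generator.generate(**inputs, num_beams=16, num_return_sequences=16,
                               length_penalty=1.0, max_length=100)
    candidates = gen_tok.batch_decode(beams, skip_special_tokens=True)
    # Score each (source, candidate) pair with the ranker and return the best one.
    pairs = rank_tok([source] * len(candidates), candidates, return_tensors="pt",
                     truncation=True, padding=True, max_length=512)
    with torch.no_grad():
        scores = ranker(**pairs).logits.squeeze(-1)
    return candidates[int(scores.argmax())]
```

Taking the highest beam-search score instead of the highest ranker score corresponds to the JGR-G output reported in the tables.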
## 5 Results And Analyses 5.1 Overall Results | B-1 | B-2 | D-1 | D-2 | | |-------------------------------------|-------|-------|-------|------| | BART (Lewis et al., 2019) | 49.9∗ | 40.0∗ | 1.3∗ | 8.0∗ | | PLATOw/o lantent (Bao et al., 2020) | 40.6 | 31.5 | 2.1 | 12.1 | | PLATO (Bao et al., 2020) | 45.8 | 35.7 | 1.2 | 6.4 | | ProphetNet (Qi et al., 2020) | 46.7 | 39.0 | 1.3 | 7.5 | | DialogVED (Chen et al., 2022) | 48.2 | 39.9 | 1.5 | 9.4 | | JGR-G | 52.5 | 43.2 | 1.4 | 6.2 | | JGR-R | 53.3 | 43.5 | 1.5 | 8.0 | Table 1 shows the results of JGR and other baseline methods on summarization tasks CNN/DailyMail and SAMSum. "Lead-3" is an ad-hoc summarization approach that uses the first three sentences in the article as the summary. "PTGEN" and "PTGEN-COV" are sequence-to-sequence generation methods without pre-training. Other baselines are pre-trained language models fine-tuned on the benchmarks. "JGR-G" indicates the generator model in JGR, and "JGR-R" is using the ranker of JGR to re-rank the outputs of JGR-G. "JGR-G/Rinit w. BRIO" are our JGR with the generator initialized from BRIO. As shown in Table 1, the generator model (JGR-G) itself achieves a considerable performance gain compared with its backbone models on both the two benchmarks, which verifies the effectiveness of the proposed JGR training to obtain a better generator. On both CNN/DailyMail and SAMSum, the ranker (JGR-R) can further improve the performance of JGR-G. Both JGR-G and JGR-R can reach state-of-the-art on SAMSum. If initialized with BRIO, both our JGR-G and JGR-R can surpass the state-of-the-art on CNN/DailyMail with a considerable margin. In Table 2, we compare the performance of JGR with four pre-trained language models (Song et al., 2019; Lewis et al., 2019; Dong et al., 2019; Qi et al., 2020) on SQuAD 1.1, since they have reported the results finetuned and evaluated in the same data split as in Liu et al. (2020). With a relatively weak backbone model, BART, our JGR-G can still outperform all the compared baselines. And JGR-R can also further improve the results of JGR-G. Table 3 shows the results of compared methods in persona-based response generation. As shown in the results, our JGR-G and JGR-R can surpass the baselines significantly on the metrics of BLEU-1 and BLEU-2. However, both JGR-G and JGR-R can only perform the same level of the baselines on Distinct-1 and Distinct-2. It is noteworthy that PLATO and DialogVED are the only two language models that are pre-trained using a conversational corpus among these baselines. They achieved high scores on Distinct-1 and Distinct-2, showing the importance of pre-training corpus. ## 5.2 Performance Of Generate-Then-Rank Frameworks Recently, several works adopt the generate-thenrank framework, especially on the summarization tasks (Liu and Liu, 2021; Liu et al., 2021; Ravaut et al., 2022; Liu et al., 2022; An et al., 2022). Different from JGR, these methods do not introduce the iterative training of the generator and ranker. We compare these methods with that our JGR-R on CNN/DailyMail. Since all the above methods train Generator Ranker R-1 R-2 R-L Gain G0- 44.16 21.28 40.90 0.00 G0 SimCLS 46.67 22.15 43.54 2.00 G0 RefSum 45.15 21.70 42.00 0.83 G0 SumRanker 46.62 22.39 43.59 2.08 G0 BRIO 47.28 22.93 44.15 **3.84** G0 COLO 46.33 22.15 43.08 1.73 G0 D0 45.54 22.27 42.25 1.24 JGR-G - 46.86 23.18 43.74 0.00 JGR-G JGR-R **47.63 23.59 44.50** 0.64 the ranker separately with the fine-tuned BART as the generator on CNN/DailyMail, we only report their results in this setting. 
The experimental results are shown in Table 4, where G0 denotes the base generator, i.e. BART, and D0is the ranker after the first ranker training iteration, as described in Section 4.2. Several observations can be seen in the results. First, our JGR achieves the highest score with the inference pipeline. Second, on CNN/DailyMail, the performance gain brought by JGR-R is not as big as other related methods which introduced some extra modules to their models. Third, on CNN/DailyMail, after the joint training in JGR, the performance gain brought by the ranker drops. We think this is because as the generator's performance grows, the quality of candidates rises, causing the ranker harder to pick the best among all candidates. ## 5.3 Impact Of Rewards In this section, we investigate the impact of rewards. We compare different reward settings on CNN/DailyMail. The compared methods are as follows: 1) **Self-critic** is the conventional selfcritical reinforcement-learning method where the rewards are the metric scores ∆(ˆy, y), and the greedy search output is used as baseline (Rennie et al., 2017; Paulus et al., 2018). 2) **Actor-critic** is the RL-based method that trains a critical model to fit the metric scores ∆(ˆy, y), and uses the critical score as the reward to train generator (Konda and Tsitsiklis, 1999; Bahdanau et al., 2017; Le et al., 2022). 3) **JGR-G**only mr/**JGR-G**only rr are our JGR where the generator is trained without the rewards from generator/metrics. The standard NLL loss is added in all the compared methods. The results are shown in Table 5. According to the results, our JGR can outperform traditional RL significantly. Both JGR- | R-1 | R-2 | R-L | AVG | | |--------------|-------|-------|-------|-------| | BART | 44.16 | 21.28 | 40.90 | 35.45 | | Self-critic | 44.14 | 21.20 | 40.95 | 35.43 | | Actor-critic | 45.04 | 21.99 | 41.71 | 36.25 | | JGR-G | 46.86 | 23.18 | 43.74 | 37.93 | | JGR-Gonly mr | 44.20 | 21.37 | 41.04 | 35.54 | | JGR-Gonly rr | 46.76 | 22.99 | 43.81 | 37.85 | Table 5: Results generator trained with different type of ![6_image_3.png](6_image_3.png) rewards on CNN/DailyMail. Gonly mr and JGR-Gonly rr suffer a performance decline compared to standard JGR-G, and the performance of JGR-Gonly mr is far worse than that of JGR-Gonly rr. In addition, the Actor-critical method outperforms the Self-critical method. The above two observations indicate that using rewards from a trained reward model contributes more than using rewards from metrics, and it is better to combine them. In Figure 2, we plot the curves of the dev scores under 3 random runs for the compared methods. As illustrated in the figure, although the standard Self-critical method appears to have a small variance under different random runs, its dev scores are hard to grow while training. The JGR-Gonly rr has a smaller variance than JGR-Gonly mr, however, it fails to achieve a high dev score. Our standard JGR, which combines metric rewards and ranker rewards, not only shows the relatively small variance in randomized trials but also can steadily improve the dev score during training. ## 5.4 Candidate Picking Strategies We examine how different types and numbers of candidates can affect the performance of JGR. We first compare different methods of picking positive candidates and negative candidates when training the ranker. The results are shown in Table 6. The ˆy +=GT denotes the positive candidate ˆy + being always the reference, not the generated samples. 
The result shows that if the best candidate is always ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) +=GT 45.64 22.27 42.55 36.82 44.20 21.46 41.22 35.63 Yˆ− = BOT(Yˆ) **46.86 23.18 43.74 37.93 47.63 23.59 44.50 38.57** Yˆ− = TOP(Yˆ) 44.16 21.31 41.00 35.49 44.07 21.23 40.91 35.40 Yˆ− = RAND(Yˆ) 44.68 21.65 41.42 35.92 45.80 22.68 42.56 37.01 Yˆ− = TOP-BOT(Yˆ) 44.86 21.80 41.64 36.10 46.12 22.76 42.91 37.26 Table 6: Results of JGR with different candidate picking strategies on CNN/DailyMail. Generator Ranker ![6_image_2.png](6_image_2.png) the reference, the performance of the generator is not as good as the standard JGR, and the ranker's performance is even worse than the generator. This is because the ranker is misled by the reference, thus it may always misclassify the references as the positive candidates, while other candidates sampled by the generator as the negative candidates. As a result, neither the ranker is well-trained, nor it can pass proper rewards to train the generator. The last four lines of Table 6 show the results of methods for picking negative samples, i.e., with the lowest matching scores (BOT(Yˆ), our standard setting), with the highest matching scores (TOP(Yˆ)), randomly pick (RAND(Yˆ)), and half has the highest matching scores and the second half has the lowest matching score (TOP-BOT(Yˆ)). From the results, we can see that our standard setting (BOT(Yˆ)) significantly outperforms other negative candidate picking strategies. In Table 7, we show the performance of JGR with different numbers of sampled candidates when training the generator. According to the results, under a certain range (C = 2 ∼ 8), the performance of JGR goes up as the number of candidates increases. We attribute this to the fact that increasing the number of candidates means that the generator can be optimized on more probabilities from candidates, which is to some extent a way of data augmentation. However, the performance does not grow as desired when the number of candidates becomes too large. ## 5.5 **Advanced Metrics And Human Evaluation** A model trained with RL objective may succeed in the metrics it uses as the reward function but perform poorly in other metrics. We hope to in- | BERTScore | FactCC | QE | | |-------------|----------|-------|-------| | BART | 88.47 | 57.54 | 50.56 | | JGR-G | 88.90 | 60.33 | 52.09 | | JGR-R | 88.96 | 61.59 | 52.18 | Table 8: Performance on BERTScore, FactCC, and QuestEval. | JGR-G wins | Tie | JGR-G loses | | |--------------|-------|---------------|----| | Inform. | 58 | 3 | 39 | | Fact. | 61 | 7 | 32 | | Read | 45 | 15 | 40 | vestigate whether JGR, which uses the RL objective to train its generator, suffers from the same problem. Firstly, we use three advanced metrics, namely BERTScore (Zhang* et al., 2020), FactCC (Kryscinski et al., 2020), and QuestEval (Scialom et al., 2021a), to evaluate JGR on CNN/DailyMail. BERTScore measures the semantic similarity of the predicted summary and ground-truth reference. FactCC and QuestEval use a trained language model to measure the factual consistency between the generated summary and input source document. According to the results shown in Table 8, JGR-G and JGR-R both achieve higher BERTScore than BART, indicating that they can generate summaries with better semantic quality. For FactCC and QuestEval, which measure factual consistency, JGR-G and JGR-R also surpass the BART baseline. We also conduct a human evaluation on CNN/DailyMail3. 
Following Blenderbot v2 (Roller et al., 2021), we randomly picked 100 cases from the CNN/DailyMail test set and asked the annotators to explicitly compare which generated text is better for each pair of summaries generated by JGR-G and BART, rather than assign an evaluation score. This explicit comparison can avoid the per annotator bias in numerical scores (e.g., annotators who tend to give generous scores), and remedy many of the issues of sequential effects such as contrasting with a previous case. Three aspects corresponding to the generation quality are evaluated, namely informativeness (Inform.), factual consistency (Fact.), and readability (Read.). As shown in Table 9, JGR-G beats BART in 58 cases w.r.t informativeness and 61 cases w.r.t. factual consistency, indicating that JGR-G performs better than 3More details about human evaluation are in Appendix D. ![7_image_0.png](7_image_0.png) Table 10: Results of JGR and JGR without joint training on CNN/DailyMail. ![7_image_1.png](7_image_1.png) BART on informativeness and factual consistency. For readability, JGR can generate summaries as readable as BART. To conclude, though trained with reinforcement learning aimed at optimizing ROUGE score, JGR still does not compromise on other aspects of summary quality, including semantic similarity, factual consistency, informativeness, and readability. ## 5.6 Does Joint Training Matter? To see how our proposed joint (iterative) training of the generator and ranker affects JGR, we compare the performance of our JGR and the variant that trains the generator in the same reinforcement learning paradigm as the JGR while fixing the ranker after fully training it (JGRw/o joint) 4. As the results shown in Table 10, JGRw/o joint is far worse than JGR, and JGR-Rw/o joint achieves no performance gain over JGR-Gw/o joint, which indicates the importance of the iterative training. To take an indepth look, we analyze the distribution of rewards. We first draw the curves of the Wasserstein distance between ranker rewards and metrics rewards at each training interval for JGR and JGRw/o joint. As illustrated in Figure 3(a), the Wasserstein distances of JGR are hovering within a range, while the Wasserstein distances of JGRw/o joint are growing extremely high, which means the distribution of ranker rewards and metrics rewards are quite different in JGRw/o joint. Therefore we think that JGR-Rw/o joint might not assign the proper rewards to the sampled candidates, due to it not being jointly trained. 4More details are given in Appendix E. We also analyze the diversity of sampled candidates for JGR-G and JGR-Gw/o joint. We use selfBLEU5to measure the diversity of sampled candidates. A larger self-BLEU score means a lower diversity of the sampled candidates. We show the curves of the average self-BLEU score for generated candidates at each training interval in Figure 3(b). From the figure, we can see that the selfBLEU of JGRw/o joint increases rapidly after the generator is trained 1000 steps, while the same situation never happens in JGR. It indicates that if the ranker is not jointly trained with the generator, the rewards it feeds back to the generator will cause the generator to sample candidates that are more and more similar to each other, making the training of JGR harder. On the contrary, joint training can erase this phenomenon and help to keep a certain level of diversity in sampled candidates, thus leading to better training. 
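The two diagnostics used in this analysis can be sketched in a few lines. The snippet below assumes that per-candidate ranker rewards and metric rewards have been logged for each training interval, and that a hypothetical `pair_bleu` helper computes BLEU between two candidate strings; it is an illustrative sketch, not the analysis script used for the figures.

```python
from scipy.stats import wasserstein_distance

def reward_drift(ranker_rewards, metric_rewards):
    """1-D Wasserstein distance between the two reward distributions (Figure 3(a)):
    a growing distance indicates the ranker rewards drifting away from the metric rewards."""
    return wasserstein_distance(ranker_rewards, metric_rewards)

def self_bleu(candidates, pair_bleu):
    """Self-BLEU of a candidate set (Eq. 12 in Appendix B): average mutual BLEU over
    all ordered candidate pairs; higher means less diverse candidates."""
    c = len(candidates)
    total = sum(pair_bleu(a, b)
                for i, a in enumerate(candidates)
                for j, b in enumerate(candidates) if i != j)
    return total / (c * (c - 1))
```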
## 5.7 More Discussions Due to the page limit, we show more discussions about JGR compared to reinforcement learning, GAN, data augmentation in Appendix A, the impact of decoding strategies in Appendix C. ## 6 Conclusion In this paper, we propose a novel Joint training of Generator and Ranker framework, namely JGR, for natural language generation. Both the generator and ranker of our JGR can achieve state-of-the-art results on several benchmarks in the areas of summarization, question generation, and dialog. We also analyze our JGR in several aspects and find that: First, the rewards from the ranker work better than the rewards from the direct metrics such as BLEU, but combining them together helps the training become more stable. Second, during training, letting the ranker be trained on the candidates generated by the generator exclusively is even better than previous approaches using ground-truth as positive examples. Third, more candidates being sampled during training can lead to better performance, which is consistent with the findings from data augmentation. Fourth, though trained with reinforcement learning aimed at optimizing automatic evaluation metrics, JGR still does not compromise on other aspects of generation quality. Finally, the joint training paradigm helps the 5We introduce the computation of self-BLEU in Appendix B. generator sample candidates with higher diversity, which in turn contribute to the training. ## Acknowledgements Weizhou Shen and Xiaojun Quan were supported by the National Natural Science Foundation of China (No. 62176270), the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515012832), and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2017ZT07X355). ## Limitations So far JGR has only been evaluated on the domains of summarization, conversational summarization, question generation, and dialog. It should be evaluated on a wider range of benchmarks, such as machine translation and code generation. And we have not explored JGR's performance with extralarge language models such as GPT-3. We will evaluate JGR on the above points in the future. Because the generator of JGR samples candidates using auto regressive sampling, it may occupy relatively longer computational time and larger memory then the conventional MLE training. Though the performance of JGR is satisfactory, we still want to improve its computational costs. We will try non-auto regressive sampling and other improvements such as parameter sharing in the future. ## Ethics Statement All the experiments are conducted on publicly available datasets, which don't include any private information. Our work doesn't involve identity characteristics or any gender and racial discrimination. ## References Chenxin An, Ming Zhong, Zhiyong Wu, Qin Zhu, Xuanjing Huang, and Xipeng Qiu. 2022. CoLo: A contrastive learning based re-ranking framework for one-stage summarization. In *Proceedings of the 29th* International Conference on Computational Linguistics, pages 5783–5793, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In *International* Conference on Learning Representations. Siqi Bao, Huang He, Fan Wang, Hua Wu, and Haifeng Wang. 2020. PLATO: Pre-trained dialogue generation model with discrete latent variable. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 85–96, Online. Association for Computational Linguistics. Wei Chen, Yeyun Gong, Song Wang, Bolun Yao, Weizhen Qi, Zhongyu Wei, Xiaowu Hu, Bartuer Zhou, Yi Mao, Weizhu Chen, Biao Cheng, and Nan Duan. 2022. DialogVED: A pre-trained latent variable encoder-decoder model for dialog response generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4852–4864, Dublin, Ireland. Association for Computational Linguistics. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. Eldan Cohen and Christopher Beck. 2019. Empirical analysis of beam search performance degradation in neural sequence models. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning* Research, pages 1290–1299. PMLR. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc. Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. 2021. GSum: A general framework for guided neural abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4830–4842, Online. Association for Computational Linguistics. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342–1352, Vancouver, Canada. Association for Computational Linguistics. Jonas Gehring, Michael Auli, David Grangier, and Yann N Dauphin. 2016. A convolutional encoder model for neural machine translation. arXiv preprint arXiv:1611.02344. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. Samsum corpus: A humanannotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70–79. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *Advances in Neural Information* Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693– 1701. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick ˘ Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906. Vijay Konda and John Tsitsiklis. 1999. Actor-critic algorithms. Advances in neural information processing systems, 12. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. 
Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–9346, Online. Association for Computational Linguistics. Sylvain Lamprier, Thomas Scialom, Antoine Chaffin, Vincent Claveau, Ewa Kijak, Jacopo Staiano, and Benjamin Piwowarski. 2022. Generative cooperative networks for natural language generation. *arXiv* preprint arXiv:2201.12320. Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven CH Hoi. 2022. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. arXiv preprint arXiv:2207.01780. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Bohan Li, Yutai Hou, and Wanxiang Che. 2022a. Data augmentation approaches in natural language processing: A survey. *AI Open*, 3:71–90. Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, PoSen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. 2022b. Competition-level code generation with alphacode. Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-ting Sun. 2017. Adversarial ranking for language generation. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, et al. 2020. Glge: A new general language generation evaluation benchmark. arXiv preprint arXiv:2011.11928. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Yixin Liu, Zi-Yi Dou, and Pengfei Liu. 2021. RefSum: Refactoring neural summarization. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1437–1448, Online. Association for Computational Linguistics. Yixin Liu and Pengfei Liu. 2021. SimCLS: A simple framework for contrastive learning of abstractive summarization. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 1065–1072, Online. Association for Computational Linguistics. Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. BRIO: Bringing order to abstractive summarization. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890–2903, Dublin, Ireland. Association for Computational Linguistics. Clara Meister, Ryan Cotterell, and Tim Vieira. 2020. If beam search is the answer, what was the question? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2173–2185, Online. Association for Computational Linguistics. Richard Yuanzhe Pang, He He, and Kyunghyun Cho. 
2021. Amortized noisy channel neural machine translation. *arXiv preprint arXiv:2112.08670*. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations. Weizhen Qi, Yeyun Gong, Yu Yan, Can Xu, Bolun Yao, Bartuer Zhou, Biao Cheng, Daxin Jiang, Jiusheng Chen, Ruofei Zhang, et al. 2021. Prophetnet-x: largescale pre-training models for english, chinese, multilingual, dialog, and code generation. arXiv preprint arXiv:2104.08006. Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training. arXiv preprint arXiv:2001.04063. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Mathieu Ravaut, Shafiq Joty, and Nancy F Chen. 2022. Summareranker: A multi-task mixture-of-experts reranking framework for abstractive summarization. arXiv preprint arXiv:2203.06569. Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021. Rocketqav2: A joint training method for dense passage retrieval and passage re-ranking. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 2825–2835. Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In *2017* IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1179–1195. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In *Proceedings of the 16th Conference of* the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325, Online. Association for Computational Linguistics. Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021a. QuestEval: Summarization asks for fact-based evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6594–6604, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Thomas Scialom, Paul-Alexis Dray, Jacopo Staiano, Sylvain Lamprier, and Benjamin Piwowarski. 2021b. To beam or not to beam: That is a question of cooperation for language gans. Advances in neural information processing systems, 34:26585–26597. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2015. Minimum risk training for neural machine translation. *CoRR*, abs/1512.02433. 
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pretraining for language generation. In *Proceedings of* the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5926–5936. PMLR. Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In *Advances in Neural Information Processing* Systems, volume 12. MIT Press. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. *CoRR*, abs/1610.02424. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. *Proceedings of the AAAI Conference on Artificial Intelligence*, 31(1). Hang Zhang, Yeyun Gong, Yelong Shen, Jiancheng Lv, Nan Duan, and Weizhu Chen. 2021. Adversarial retriever-ranker for dense text retrieval. *arXiv* preprint arXiv:2110.03611. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In *Proceedings of the 37th International Conference on* Machine Learning, volume 119 of *Proceedings of* Machine Learning Research, pages 11328–11339. PMLR. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics. Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In *Proceedings of the 2018 Conference on* Empirical Methods in Natural Language Processing, pages 3901–3910, Brussels, Belgium. Association for Computational Linguistics. Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6197–6208, Online. Association for Computational Linguistics. ## A Discussion In this section, we discuss the relations between our JGR and several popular methods, including reinforcement learning (RL), generative adversarial networks (GAN), and data augmentation. ## A.1 Jgr & Rl Some previous RL works, i.e., (Shen et al., 2015; Rennie et al., 2017; Paulus et al., 2018) proposed to use ∆(ˆy, y) to compute reward R(ˆy) directly which doesn't combine ranking scores as feedback signals. However, we argue that the ranking score calculated by the ranker model can provide more semantic-relevant information than the matching scores (e.g., BLEU or ROUGE) which are purely based on the surface match. In the ablation study, we also demonstrate that the proposed approach is superior to other configurations in terms of training stability and performance. Some other RL works Bahdanau et al. (2017); Le et al. (2022) introduced actor-critic frameworks (Konda and Tsitsiklis, 1999), which jointly train an actor and a critic, are similar to our JGR framework. However, they have not considered the contrastive rewards between different candidates given one input. Different from these works, JGR allows the generator to sample several i.i.d. candidates and be optimized simultaneously on these candidates at each training step. This improvement makes the reward of a sampled candidate contain contrastive information from the candidates from the same candidate set. Furthermore, it effectively raises the number of diverse chains of probabilities on which the generator can be optimized. In Table 5, we compare our JGR-G with the simple self-critical that uses metric rewards, and the actorcritic baseline that the critic is trained to fit the metric score ∆(ˆy, y). The empirical results show that trained with the JGR framework, the generator model can surpass those trained with previous RL-based methods well used in the NLG area. ## A.2 Jgr & Gan From the perspective of the composition of a framework, both JGR and GAN contain a generator and a critic. In GAN, the critic is the discriminator that aims at discriminating the real candidate from the candidate pool. While in JGR, the critic is the ranker that aims to re-rank the candidates generated by the generator. The main difference between JGR and GAN comes from the training objective. Let the Gθ denotes the generator, and Dϕ denotes the discriminator/ranker. GAN trains Gθ and Dϕ with the min-max objective: $$\begin{array}{c}{{\cal J}_{G_{\theta}D_{\phi}}=\min_{\theta}\max_{\phi}E_{{\bf y}^{+}\sim p_{\rm nuc}(\cdot|{\bf x})}[\log p_{D_{\phi}}({\bf y}^{+},{\bf x})]}\\ {\qquad\qquad+E_{\hat{\bf y}^{-}\sim p_{G_{\theta}}(\cdot|{\bf x})}[\log(1-p_{D_{\phi}}(\hat{\bf y}^{-},{\bf x}))]}\end{array}\tag{10}$$ In Eq. 10, y + is the ground-truth output of input x, and yˆ− is the candidate texts sampled by the generator. This is different from the setting of JGR, where both y + (denoted as yˆ + in JGR) and yˆ− are sampled from pGθ (·|x). To implement GAN in NLG, according to (Yu et al., 2017), the policy gradient is used and the reward assigned to yˆ− is logpDϕ (yˆ−, x). Note that the reward is always positive, therefore GAN essentially raises the probability of the generator outputs, regardless of the quality of the outputs. On contrary, as computed in Eq. 
8, there are both positive and negative rewards in JGR, which means that JGR not only encourages the generator to generate good candidates but also punishes the generator when generating bad candidates. | R-1 | R-2 | R-L | AVG | | |--------|-------|-------|-------|-------| | BART | 44.16 | 21.28 | 40.90 | 35.45 | | GANstd | 43.68 | 20.81 | 40.45 | 34.98 | | GANmod | 42.93 | 20.66 | 39.87 | 34.49 | | JGR-G | 46.86 | 23.18 | 43.74 | 37.93 | Table 11: Results generator in JGR and two kinds of GANs. Table 11 shows the performance of generators in JGR and GAN on CNN/DailyMail, where GANstd is the standard GAN setting that y + is the groundtruth text and GANmod is our modified version of GAN that y + is replaced by the best candidate sampled by the generator, i.e., yˆ +. As shown in the table, our JGR surpasses the GAN methods, and the performance of GANstd and GANmod can not even surpass the model trained on optimizing the standard NLL loss, indicating that the GAN methods are not suitable for all NLG tasks. The GANmod performs worse than GANstd, showing that for the min-max objective of GAN, it is not a good choice to letting yˆ + as the positive sample, which is contrary to what we found in JGR. ## A.3 Jgr & Data Augmentation Data augmentation methods aim to improve the models' performance by adding modified or synthesized data to the existing training data (Li et al., 2022a). For natural language generation tasks, denote the augmented dataset as Dˆ, where Dˆ contains several augmented samples (xˆ, yˆ), the training object for model in the augmented data is: $${\mathcal{L}}_{\mathrm{DA}}=-\sum_{({\hat{\mathbf{x}}},{\hat{\mathbf{y}}})\in{\hat{\mathcal{D}}}}\sum_{t}\log p_{G_{\theta}}({\hat{y}}_{t}|{\hat{y}}_{<t},{\hat{\mathbf{x}}})\quad(11)$$ The above equation is similar to JGR's reinforcement learning loss in Eq. 8. Both of them optimize the generator by maximizing the log-likelihood of synthesized data. Therefore, from this perspective, we can regard our JGR as a way of data augmentation where the synthesized data is sampled from the generator and the log-likelihood is re-scaled by the rewards. | R-1 | R-2 | R-L | AVG | | |-------|-------|-------|-------|-------| | BART | 44.16 | 21.28 | 40.90 | 35.45 | | DAsep | 44.37 | 21.24 | 41.18 | 35.60 | | DAmix | 44.27 | 21.38 | 41.04 | 35.56 | | JGR-G | 46.86 | 23.18 | 43.74 | 37.93 | We designed two simple but effective data augmentation methods named DAsep and DAmix. Both of DAsep and DAmix use a fine-tuned generator G0 to generate one summary yˆ for each input x in original training set D using beam search, the collection of all (x, yˆ) is treat as the augmented training data Dˆ. After that, 1) DAsep fine-tunes G0 firstly on Dˆ and then on D, 2) DAmix further fine-tunes G0 on the mixture of Dˆ and D. We compare the performance of DAsep and DAmix with our JGR on CNN/DailyMail, with BART as the generator, the results are shown in Table 12. As shown in the results, both DAsep and DAmix can further improve the performance of BART, verifying the effect of data augmentation. However, the performance gain brought by data augmentation is far less than that brought by JGR. 
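As a schematic illustration of the two augmentation baselines above, the sketch below builds the augmented set from the warmed-up generator G0 and fine-tunes it in the two orders described. The `beam_search` and `finetune` helpers are hypothetical placeholders standing in for standard decoding and NLL training loops.

```python
# Hypothetical helpers:
#   beam_search(model, x) -> the single best beam for input x
#   finetune(model, dataset) -> standard NLL fine-tuning on (x, y) pairs

def build_augmented_set(g0, train_set, beam_search):
    # One synthetic target per training input, generated by the warmed-up generator G0.
    return [(x, beam_search(g0, x)) for x, _ in train_set]

def da_sep(g0, train_set, beam_search, finetune):
    augmented = build_augmented_set(g0, train_set, beam_search)
    finetune(g0, augmented)        # first on the synthetic pairs ...
    finetune(g0, train_set)        # ... then on the original data
    return g0

def da_mix(g0, train_set, beam_search, finetune):
    augmented = build_augmented_set(g0, train_set, beam_search)
    finetune(g0, augmented + list(train_set))   # on the mixture of both sets
    return g0
```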
## B Computation Of Self-Bleu Given a candidate set Yˆ = {ˆy 1, ˆy 2*, ...,* ˆy C} sampled from the generator, the self-BLEU score for Yˆ is computed as the average of mutual BLEU scores of all candidate pairs: $$\sum_{\begin{array}{c}\mbox{BLEU}(\hat{\mathbf{y}}^{i},\hat{\mathbf{y}}^{j})\\ \mbox{self-BLEU}(\hat{\mathbf{y}})=\frac{\hat{\mathbf{y}}^{i},\hat{\mathbf{y}}^{j}\in\hat{\mathbf{y}};i\neq j}{C(C-1)}\end{array}}\tag{12}$$ A higher self-BLEU score means the sampled candidates are more similar to each other, in other words, a lower diversity of the sampled candidates. It is another way to assess the diversity of sampled candidates by computing the proportion of the number of distinct n-grams in the total number of tokens for the sampled candidates of an input sequence. We refer to this metric as selfDistinct-n where n refers to n-grams. The higher self-Distinct-n corresponds to the higher diversity of sampled candidates. Like Figure 3(b), we show ![13_image_0.png](13_image_0.png) the curves of the average self-Distinct-2 for generated candidates at each training interval in Figure 4. From the figure, we can see that the self-Distinct-2 of JGRw/o joint drops rapidly after the generator is trained 1000 steps, while the self-Distinct-2 keeps hovering in a relatively high range for JGR. This phenomenon aligns with what we found when applying self-BLEU and further enhances our conclusion in Section 5.6. ## C Decoding Strategies We study the impact of different decoding strategies during inference. Two decoding strategies are ![13_image_1.png](13_image_1.png) compared, namely beam search and group beam search (Vijayakumar et al., 2016). We also compare different beam sizes. The results of ROUGE-1 score with beam search on CNN/DailyMail are shown in Figure 5. ![14_image_0.png](14_image_0.png) when using the normal beam search. However, the performance of JGR-R can rise as the beam size increases. This indicates that increasing the beam size can raise the probability of JGR-R ranking a better candidate to the top among all the candidates decoded by JGR-G. Figure 6 shows the results with diverse beam search. Firstly we can find that with diverse beam search the JGR system can not achieve comparable results with JGR using normal beam search, and the performance of JGR-G begins to drop when beam size exceeds 4. We can still observe that the performance of JGR-R rises as the beam size increases. However, since the performance of JGRG keeps declining, the performance ascent of JGRR is not as significant as that of JGR-R with the normal beam search. ## D Details Of Human Evaluation We conduct a human evaluation on CNN/DailyMail. Following Blenerbot v2 (Roller et al., 2021), we ask the annotators to explicitly compare which generated text is better for each pair of generated outputs, rather than assign an evaluation score. This explicit comparison can avoid the per annotator bias in numerical scores (e.g., annotators who tend to give generous scores), and remedy many of the issues of sequential effects such as contrasting with a previous case. We randomly picked 100 cases from the CNN/DailyMail test set, each case was organized as <Doc, Summary \#1, Summary \#2> where Doc means the source document, Summary \#1 and Summary \#2 mean the summaries generated by JGR and BART. The annotators were asked to compare Summary \#1 and Summary \#2 on three aspects given at the end of each case. 
To avoid the stereotype of annotators that Summary \#1 or Summary \#2 is better according to previous cases, we randomly shuffle the summaries in each case, which means that Summary \#1 is not necessarily from JGR or BART, and so as Summary \#2. Each picked case was annotated by 3 annotators, and they worked individually without communication. Given a certain human evaluation metric on one case, the comparison result is obtained by the following rules: - If more than or equal to two annotators think JGR has won in that metric, then JGR wins. - If more than or equal to two annotators think BART has won in that metric, then BART wins. - Otherwise, the comparison result is marked as a tie. We evaluate JGR and BART from three aspects, namely informativeness (Inform.), factual consistency (Fact.), and readability (Read.). The results are shown in Table 9. Note that since we use direct comparison, the number of "tie" cases may be fewer than some works that conduct human evaluation through assigning scores. ## E Details Of Jgr**W/O Joint** To implement JGRw/o joint, we first fully train the generator with the negative likelihood loss. Then we use this generator to generate candidates and fully train the ranker with the objective described in Eq. 6. Then we train the generator again using the same RL paradigm as JGR with the reward from the ranker. The only different between JGRw/o joint and JGR is that JGRw/o joint does not incorporate the iterative training. ## F Details Of The Benchmarks And Evaluation Metrics CNN/DailyMail (Hermann et al., 2015) is a benchmark for summarization. Both extractive and abstractive summarization models can be applied on CNN/DailyMail. Since our JGR focuses on text generation, we treat CNN/DailyMail as an abstractive summarization task. There are two versions: anonymized and non-anonymized. We use the nonanonymized dataset See et al. (2017). The evaluation metrics are Rouge-1, Rouge-2, and Rouge-L. SAMSum (Gliwa et al., 2019) is a benchmark for conversational summarization, whose inputs are the concatenation of dialog context. The evaluation metrics are Rouge-1, Rouge-2, and Rouge-L. SQuAD 1.1 (Rajpurkar et al., 2016) is originally an machine reading comprehension dataset. We follow the data split and pre-processing as done by Du et al. (2017); Zhao et al. (2018); Liu et al. (2020), to make it a question generation dataset, which treats the concatenation of the answer span and article as the input, and the question as the target output. The evaluation metrics are Rouge-L, Bleu-4, and METEOR. PersonaChat (Zhang et al., 2018) contains about 160K utterances. Given the multi-turn conversations and persona profile, the model learns to generate the response. The evaluation metrics are Bleu-1, Bleu-2, and the ratio of distinct unigrams and bigrams in the generated responses (Distinct-1 and Distinct-2). The statistics of all benchmarks are shown in Table 13. | Benchmark | |Train| | |Dev| | |Test| | |Src.| | |Tgt.| | |---------------|-----------|---------|----------|----------|----------| | CNN/DailyMail | 287,113 | 13,368 | 11,490 | 822.3 | 57.9 | | SAMSum | 14,731 | 818 | 819 | 124.1 | 23.4 | | SQuAD 1.1 | 75,722 | 10,570 | 11,877 | 149.4 | 11.5 | | PersonaChat | 122,499 | 14,602 | 14,056 | 120.8 | 11.8 | Table 13: The statistics of the benchmarks. |Src.| means the average number of tokens for each source input. |Tgt.| means the average number of tokens for each target text. 
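The Distinct-1/Distinct-2 ratios mentioned for PersonaChat above can be computed as the number of unique n-grams divided by the total number of generated tokens over the whole set of responses. The sketch below uses naive whitespace tokenization, which is our assumption for illustration rather than the exact tokenization used in the evaluation scripts cited below.

```python
def distinct_n(responses, n):
    """Ratio of distinct n-grams to total generated tokens across all responses."""
    ngrams, total_tokens = set(), 0
    for resp in responses:
        tokens = resp.split()
        total_tokens += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / max(total_tokens, 1)

# distinct_1 = distinct_n(generated_responses, 1)
# distinct_2 = distinct_n(generated_responses, 2)
```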
For evaluation on CNN/Daily and SAMSum, we use the python rouge score package: https://pypi.org/project/rouge-score/. For evaluation on SQuAD 1.1, we follow the evaluation scripts open-sourced by Liu et al. (2020) at https://github.com/microsoft/ ProphetNet/tree/master/GLGE_baselines/ script/script/evaluate/qg. For evaluation on PersonaChat, we follow the evaluation scripts open-sourced by Liu et al. (2020) at https://github.com/microsoft/ ProphetNet/tree/master/GLGE_baselines/ script/script/evaluate/personachat. ## G Hyper-Parameters Of Fine-Tuning On Benchmarks. The hyper-parameters for our JGR on each benchmark are shown in Table 14. | CNN/DailyMain | SAMSum | SQuAD 1.1 | PersonaChat | | |----------------------------------------------------------|--------------------------------|-------------------------------|----------------------|------| | Warming-up G0 | | | | | | # Epochs | 5 | 5 | 20 | 5 | | Learning rate | 5e-5 | 5e-5 | 5e-5 | 5e-5 | | Batch size | 96 | 128 | 96 | 96 | | Max source length | 1024 | 1024 | 600 | 700 | | Max target length | 100 | 100 | 65 | 70 | | First Ranker training iteration | | | | | | # Epochs | 3 | 20 | 3 | 3 | | Learning rate | 1e-5 | 1e-5 | 1e-5 | 1e-5 | | Warm-up ratio/steps | 0.2 | 500 steps | 0.2 | 0.3 | | Batch size | 64 | 64 | 64 | 32 | | Max source length | 512 | 512 | 500 | 500 | | # Candidates sampled for G0 | 16 | | | | | # Negative candidates | 2 | | | | | ∆(ˆy, y) | 0.02(R-1)+0.05(R-2)+0.025(R-L) | 0.02(R-L)+0.04(B-4)+0.04(MTR) | 0.02(B-1)+0.025(B-2) | | | JGR training | | | | | | # Epochs | 3 | 10 | 3 | 3 | | # JGR-R steps per iteration | 500 | 231 steps (1 epoch) | 250 | 500 | | # JGR-G steps per iteration | 500 | 231 steps (1 epoch) | 250 | 500 | | JGR-G learning rate | 5e-5 | 1e-5 | 5e-5 | 5e-5 | | JGR-R learning rate | 1e-5 | 5e-6 | 1e-5 | 1e-5 | | Batch size | 64 | 64 | 32 | 64 | | # Candidates sampled for JGR-R | 16 | | | | | # Negative candidates for JGR-R | 2 | | | | | # Candidates sampled for JGR-G | 8 | | | | | Beam size when inference | 16 | | | | | ∆(ˆy, y) | 0.02(R-1)+0.05(R-2)+0.025(R-L) | 0.02(R-L)+0.04(B-4)+0.04(MTR) | 0.02(B-1)+0.025(B-2) | | | Table 14: The hyper-parameters of JGR on each benchmark. | | | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations ✓ A2. Did you discuss any potential risks of your work? Section Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 And Appendix F ✓ B1. Did you cite the creators of artifacts you used? Section 4 and Appendix F ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix F ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4 and Appendix F ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 
We do not discuss them, but cite the original papers of these artifacts. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendix F ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix F ## C ✓ **Did You Run Computational Experiments?** Section 4 And Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.2 and Appendix G The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.2 and Appendix G ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We follow the previous works and only report the metrics scores ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4. and Appendix F D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 5.4 and Appendix E ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? We have not reported the full text of instructions in the paper, but we provided them in our anonymous open-source link https://anonymous.4open.science/r/jgr-anonymous-F597. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? This is the business behavior at the company level. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? We have not discussed this in the paper, but we provided it in our anonymous open-source link https://anonymous.4open.science/r/jgr-anonymous-F597. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. We did not provide data but conducted a human evaluation.
eyal-etal-2023-multilingual
Multilingual Sequence-to-Sequence Models for {H}ebrew {NLP}
https://aclanthology.org/2023.findings-acl.487
Recent work attributes progress in NLP to large language models (LMs) with increased model size and large quantities of pretraining data. Despite this, current state-of-the-art LMs for Hebrew are both under-parameterized and under-trained compared to LMs in other languages. Additionally, previous work on pretrained Hebrew LMs focused on encoder-only models. While the encoder-only architecture is beneficial for classification tasks, it does not cater well for sub-word prediction tasks, such as Named Entity Recognition, when considering the morphologically rich nature of Hebrew. In this paper we argue that sequence-to-sequence generative architectures are more suitable for large LMs in morphologically rich languages (MRLs) such as Hebrew. We demonstrate this by casting tasks in the Hebrew NLP pipeline as text-to-text tasks, for which we can leverage powerful multilingual, pretrained sequence-to-sequence models as mT5, eliminating the need for a separate, specialized, morpheme-based, decoder. Using this approach, our experiments show substantial improvements over previously published results on all existing Hebrew NLP benchmarks. These results suggest that multilingual sequence-to-sequence models present a promising building block for NLP for MRLs.
# Multilingual Sequence-To-Sequence Models For Hebrew Nlp Matan Eyal Hila Noga Roee Aharoni ![0_Image_0.Png](0_Image_0.Png) Idan Szpektor Reut Tsarfaty Google Research {matane,hilanoga,roeeaharoni,szpektor,reutt}@google.com ## Abstract Recent work attributes progress in NLP to large language models (LMs) with increased model size and large quantities of pretraining data. Despite this, current state-of-the-art LMs for Hebrew are both under-parameterized and under-trained compared to LMs in other languages. Additionally, previous work on pretrained Hebrew LMs focused on encoderonly models. While the encoder-only architecture is beneficial for classification tasks, it does not cater well for sub-word prediction tasks, such as Named Entity Recognition, when considering the morphologically rich nature of Hebrew. In this paper we argue that sequence-to-sequence generative architectures are more suitable for large LMs in *morphologically rich languages* (MRLs) such as Hebrew. We demonstrate this by casting tasks in the Hebrew NLP pipeline as text-to-text tasks, for which we can leverage powerful multilingual, pretrained sequence-to-sequence models as mT5, eliminating the need for a separate, specialized, morpheme-based, decoder. Using this approach, our experiments show substantial improvements over previously published results on all existing Hebrew NLP benchmarks. These results suggest that multilingual sequence-to-sequence models present a promising building block for NLP for MRLs. ## 1 Introduction In recent years, large pretrained language models showed impressive results in a variety of NLP tasks (Devlin et al., 2019; Raffel et al., 2020), domains (Beltagy et al., 2019; Araci, 2019), and languages (Antoun et al., 2020; Chan et al., 2020). These models were trained in a self-supervised fashion on large corpora with language modeling objectives. By doing this, models are taking advantage of information available in the training data without any access to explicit labels (Petroni et al., 2019). Recent works (Kaplan et al., 2020; Hoffmann et al., 2022) argue that language models capabilities scale as a power-law with model size, dataset size, and the amount of compute used for training. Following the success such models achieved on English benchmarks, analogous language-specific models were developed to improve benchmark results in a variety of languages such as Arabic and French to name a few (Antoun et al., 2020; Martin et al., 2020). Hebrew NLP was no different, with a number of encoder-based BERT variations proposed, specifically HeBERT (Chriqui and Yahav, 2022), AlephBERT (Seker et al., 2022), and recently AlephBERTGimmel (ABG), a variation of AlephBERT with a much larger vocabulary (Guetta et al., 2022). Despite the scaling laws proposed in Kaplan et al. (2020), all of the Hebrew variations of BERT are trained with a relatively small set of pretraining data, and are under-parameterized. In terms of training data, HeBERT and AlephBERT (and similarly ABG) were trained on 10.5GB and 16GB of Hebrew pretraining data respectively. mT5 (Xue et al., 2021) in comparison, was trained on mC4, where its public replica (Dodge et al., 2021) is a 27TB collection of natural text in 101 languages drawn from the public Common Crawl, 66GB of 7700 which is Hebrew (4.125x more Hebrew training data compared to AlephBERT). 
In terms of parameterization and model size, ABG, the largest Hebrew LM, has 60x and 945x fewer parameters compared to English T5 XXL (Raffel et al., 2020) and GPT3 (Brown et al., 2020), respectively, language models that were released two years earlier. Crucially for downstream use, although English T5 is a much larger model than available Hebrew language models, it can still be fine-tuned on common GPUs. See Appendix A for more details. In addition to scale differences, all previous Hebrew LMs use an encoder-only architecture, even though the morphological complexity of Hebrew and other *morphologically rich languages* (MRLs)1 pose challenges for the efficacy of this model. Consider, for instance, the task of POS tagging. Assigning POS tags for the phrase "babayit halavan"2 requires to initially segment the phrase to its morphemes and only then assign each morpheme its matching POS tag. Since the number of input tokens does not match the number of output tags (2 input words and 5 output tags, one for each morpheme), a one-to-one token-to-tag classification head, as commonly employed in encoder-only models, is not feasible. The same problem appears in semantic tasks like Question-Answering (QA) and Named Entity Recognition (NER). For example, the named entity for "babayit halavan" is "habayit halavan". This goes beyond what encoder-only models can do by requiring the model to label a string that is not part of the input text. To overcome this architectural obstacle in encoder-only models, the authors of AlephBERT and ABG (Seker et al., 2022; Guetta et al., 2022) used Brusilovsky and Tsarfaty (2022)'s three-step segmentation and tagging approach: contextualize the input, pass resulting embeddings to an LSTM decoder, which then generates the segmentation separated by a space symbol, in a char-by-char fashion. Then they pass the whitespace representation to a classification head. While effective for morpho-syntactic tasks, these additional components do not enable full generative capabilities, and are not pretrained, therefore the representation of morphemes cannot enjoy the pretrained LMs ad-1MRLs, in contrast configurational languages (e.g. English), express grammatical functions at the word level via phenomena such as inflectional affixes, pronominal clitics, etc. 2Hebrew transcribed to English. Translated as "In the White House". The phrase is made of the morphemes beha-bayit ha-lavan (in-the-house the-white), but written and pronounced as "babayit halavan" without explicit boundaries. vantages. The departure point of this work is that, in contrast to pre-trained encoders, sequence-to-sequence models can simply take the raw text as input and for any sequence labeling task, generate the morphemes and tags in a sequence. In POS tagging, for example, the generated output can be: be»ADP@@ha»DET@@bayit»NOUN ha»DET@@lavan»ADJ where "@@" acts as a morpheme delimiter within a word and "»" is the morpheme-tag delimiter. See Sec. 3 for more details. For tasks such as QuestionAnswering, we can simply generate the target word forms without explicitly going through a segmentation phase. This change in approach to using sequence-to-sequence modeling is relevant for all MRLs, and in this paper we demonstrate its efficacy and effectiveness specifically for Hebrew. 
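To make the linearized output format above concrete, the following small parser (a sketch, not code from the paper) recovers (morpheme, tag) pairs from a generated sequence using the "@@" and "»" delimiters described above:

```python
def parse_tagged_sequence(output):
    """Parse a generated string such as
    'be»ADP@@ha»DET@@bayit»NOUN ha»DET@@lavan»ADJ'
    into a list of words, each a list of (morpheme, tag) pairs.
    "@@" separates morphemes within a word; "»" separates a morpheme from its tag."""
    words = []
    for word in output.split():
        morphemes = []
        for chunk in word.split("@@"):
            morpheme, _, tag = chunk.partition("»")
            morphemes.append((morpheme, tag))
        words.append(morphemes)
    return words

print(parse_tagged_sequence("be»ADP@@ha»DET@@bayit»NOUN ha»DET@@lavan»ADJ"))
# [[('be', 'ADP'), ('ha', 'DET'), ('bayit', 'NOUN')], [('ha', 'DET'), ('lavan', 'ADJ')]]
```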
This work thus identifies the challenge of current Hebrew LLMs as a *three-faceted* problem: Underparameterization, limited training data, and the use of a suboptimal pre-training architecture.3 To address these three challenges at once, we propose using mT5 (Xue et al., 2021), a large multilingual sequence-to-sequence model that was pretrained on a vast amount of multilingual data including a significant amount of Hebrew data.4 To adapt classification, span prediction, and token/morpheme classification tasks to mT5's text-to-text paradigm, we propose the text-only formulations illustrated in Figure 1. Subsequently, we report here that this paradigm change produces empirical improvements on all tasks evaluated compared to previous state-of-the-art, some of which are dramatic, as a 27.9 F1 increase in Hebrew Question-Answering. ## 2 Modeling We use mT5 (Xue et al., 2021), a multilingual generative text-to-text version of T5 (Raffel et al., 2020), trained simultaneously on 101 languages. We evaluate mT5 on all its available sizes - Small, Base, Large, XL and XXL - ranging from 300M to 13B parameters. Subsequently, we propose casting all Hebrew NLP tasks for which evaluation benchmarks exist as text-to-text tasks: the input text is fed into the model, and targets are produced in a generative manner. 3So far we mentioned encoder-only models as an example for suboptimal modeling choices for MRLs, but this is also the case when using poor tokenization, small vocabularies etc. 4It is beyond the scope of this paper to examine the factors that contributed to its improved performance, see Sec. 6. ![2_image_0.png](2_image_0.png) In contrast to text-to-text formulations of classification and span prediction, token classification is not as common in the literature, and specifically when the tokens consist of multiple morphemes, as is the case in MRLs. For example, in POS tagging for MRLs, each morpheme is assigned a POS tag, therefore multiple tags are assigned per word. As a result, a generative model cannot simply generate tag predictions one after the other, but it requires to first segment the text and only then label it accordingly. E.g., An unsatisfactory generation for "habayit" is *DET, NOUN* as we cannot recover which morpheme belongs to which tag. An acceptable model output, on the other hand, is *ha-DET,* lavan-ADJ as we can recover that ha was tagged with a DET and *lavan* with a ADJ. Throughout our experiments we tested a number of different textto-text formulations. The best formulations for the tasks at hand are depicted in Fig. 1. ## 3 Experiments Goal The goal of this study is to assess the performance of a sequence-to-sequence large language model, specifically mT5, that was trained on a large quantity of multilingual data, compared to existing Hebrew language models. Models We fine-tuned different sizes of mT5 (Small to XXL) on all Hebrew tasks in a single-task fashion for 4096 steps, with a constant learning rate of 1e-3. For test set evaluation, we used the bestperforming checkpoint from the development set, as tasks usually converge earlier. We compared the mT5 models against YAP (More et al., 2019), 5 mBERT (Devlin et al., 2019), HeBERT (Chriqui and Yahav, 2022), AlephBERT (Seker et al., 2022) and ABG (Guetta et al., 2022). 
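As a rough sketch of the text-to-text setup described above (assuming the public Hugging Face checkpoints such as `google/mt5-small`; this is not the authors' training code, and the actual experiments fine-tune for 4096 steps at a constant learning rate of 1e-3), inference with a fine-tuned model reduces to plain sequence generation:

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

# Illustrative checkpoint name; in practice this would be a model
# fine-tuned on the Hebrew task of interest.
model_name = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

def generate_answer(input_text, max_new_tokens=64):
    """Feed the raw task input (e.g. a sentence for POS tagging, or a
    context + question pair for QA) and decode the generated target text."""
    inputs = tokenizer(input_text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```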
## 3.1 Tasks We assembled an evaluation suite of Hebrew benchmarks composed of the following tasks: QA (Keren and Levy, 2021), NER (Bareket and Tsarfaty, 2021; Mordecai and Elhadad, 2005), Sentiment Analysis (Amram et al., 2018), and the morpho-syntactic tasks of segmentation, POS tagging and lemmatization from Sade et al. (2018), where we used the the latest dataset version, compatible with the ABG experiments (Guetta et al., 2022). ## 3.1.1 Question-Answering Keren and Levy (2021) introduced ParaShoot, a Hebrew Question-Answering dataset which was created using the format and crowdsourcing methodology of SQuAD (Rajpurkar et al., 2016). We report token-level F1 and Exact Match scores as no morpheme boundaries are available. ParaShoot scores are from Keren and Levy (2021). The input is constructed by concatenating the context and question, with the output being the answer. We also conducted manual evaluation of different mT5 models on this dataset to evaluate the impact of model sizes, see details in Appendix B. ## 3.1.2 Named Entity Recognition Bareket and Tsarfaty (2021) created NEMO, a NER add-on annotation for the Hebrew UD corpus (Sade et al., 2018). The authors proposed two dataset versions: token-level, where entities correspond to whitespace boundaries, similarly to BMC (Mordecai and Elhadad, 2005), and morpheme-level, with morpheme-based boundaries. The authors additionally revised the common NER evaluation procedure by comparing predicted and target entities on the surface form, boundaries and entity types, but not char positions. Thus, we train the seq-to-seq model to simply generate all of the sentence entities and their labels one after the other. ## 3.1.3 Sentiment Analysis Correspondingly with previous work, we report F1 scores for Amram et al. (2018), a sentiment analysis dataset curated by annotating Facebook user comments with positive/negative/neutral tags.6In our sequence-to-sequence formulation the encoder receives raw text with the decoder generating one of three labels that correspond to the positive, negative and neutral tags. We use special tokens to ensure that generation only requires a single token. 6We use Seker et al. (2022) refined version which does not include leaks between split sets. | Model | Segmentation | POS Tagging | Lemmatization | |-------------|----------------|---------------|-----------------| | YAP | 93.64 | 90.13 | 78.6 | | mBERT | 96.07 | 93.14 | - | | HeBERT | 97.90 | 95.80 | - | | AlephBERT | 97.88 | 95.81 | - | | ABG | 98.09 | 96.22 | - | | mT5 - Small | 94.83 | 94.55 | 89.96 | | mT5 - Base | 96.34 | 95.9 | 92.09 | | mT5 - Large | 96.76 | 95.58 | 92.21 | | mT5 - XL | 98.32 | 96.91 | 95.13 | | mT5 - XXL | 98.67 | 97.46 | 95.53 | ## 3.1.4 Word Segmentation, Pos Tagging And Lemmatization Sade et al. (2018) manually validated the UDv2 version of the Hebrew treebank resulting in a set of morpho-syntactic tasks. Aligned to previous work we report word segmentation and POS tagging. We also evaluate our model on the lemmatization task and compare it to YAP (More et al., 2019), an open-source Hebrew parser. In accordance with previous work in Hebrew, we report aligned MultiSet (mset) scores. To produce the output for all these tasks we use two additional tokens: "@@" is the morpheme delimiter within a word and "»" is the morpheme-tag delimiter. E.g., segmentation and POS tagging of "habayit halavan" should result in the following sequences, be@@ha@@bayit ha@@lavan and be»ADP@@ha»DET@@bayit»NOUN ha»DET@@lavan»ADJ, respectively. 
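Before turning to the results, note that the token-level F1 and Exact Match scores used for ParaShoot (Sec. 3.1.1) follow the standard SQuAD-style definition; a minimal sketch (ours, with answer normalization omitted for brevity):

```python
from collections import Counter

def exact_match(prediction, gold):
    """1 if the predicted answer string equals the gold answer, else 0."""
    return int(prediction.strip() == gold.strip())

def token_f1(prediction, gold):
    """SQuAD-style token-level F1 between a predicted and a gold answer."""
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```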
## 4 Results Tables 1,2 summarize our empirical findings. Our results demonstrate a marked improvement over previously published results on existing Hebrew benchmarks. mT5 produces the biggest performance boost for the QA task of ParaShoot, with mT5-base already surpassing baseline models and mT5-XXL outperforming AlephBERT by 27.9 F1 points. For NER, mT5 produces better results than evaluated baselines on both of the dataset annotation levels. The largest performance boost comes in NEMO's morpheme-level version where mT5 learns to segment and label entities in an end-to-end fashion. For sentiment analysis, mT5 outperforms the baseline models by a small fraction, however, manual error analysis we performed shows that 34% of its errors are annotation errors and for further 30% our annotators were not able to decide on the correct label. We conclude that work towards a cleaner, more challenging sentiment analysis dataset in Hebrew is needed. For segmentation and POS tagging we report error reduction of 30.3% and 32.8% compared to previous state-of-the-art. For the lemmatization task we report an increase of 16.93 mset F1 points compared to YAP. All of these are an important step towards closing the gap in morphosyntactic tasks compared with other languages. ## 5 Related Work HeBERT (Chriqui and Yahav, 2022) is the first pretrained transformer-based language model trained on Hebrew Wikipedia and OSCAR (Ortiz Suárez et al., 2020) for the task of user-generated sentiment analysis. AlephBERT (Seker et al., 2022) was pretrained on the same copora in addition to a very large number of Hebrew tweets. Guetta et al. (2022) tackled the extreme data sparseness in MRLs lexica (Tsarfaty et al., 2020) by pretraining with roughly 2.5x of AlephBERT vocabulary size, leading to performance improvements. Orthogonally, Keren et al. (2022) proposed using char-level LMs to mitigate the same sparseness problem, however results were inconclusive. Xue et al. (2021) showed that mT5 outperforms baseline models on a number of multilingual datasets but did not directly evaluate on Hebrew. Alternatively, monolingual Hebrew LM papers only compared against mBERT (Devlin et al., 2019) as the sole multilingual baseline. ## 6 Limitations mT5, compared with previous Hebrew LMs, is bigger, pretrained on more multiligual data, and learning to segment and tag in an end-to-end manner. While it was beyond the scope of this paper to pretrain new LMs and study which factors contributed to the improved performance, identifying these factors will be useful for determining the most effective approach for future work. While larger mT5 models perform better than available LMs, they require more powerful hardware accelerators and take longer to train and infer. However, this is a reasonable trade-off from pretraining designated monolingual models from scratch, a more expensive task by itself. Additionally, the inclusion of data from 101 languages in the training of mT5 may have negatively impacted its performance on Hebrew, as some of the data may not have been relevant or beneficial to this particular language. Future work will need to address this issue by training a monolingual Hebrew LM in order to further improve performance for Hebrew. An inherent risk in sequence-to-sequence models is that they can generate inconsistent text with respect to the input text (Lee et al., 2018; Rohrbach et al., 2018). 
While potentially sensitive in different applications, a number of evaluation frameworks have been suggested to reduce the number of such "hallucinations" (Honovich et al., 2021, 2022). Another limitation of our evaluation framework is that, for lack of available datasets, we did not evaluate mT5 on purely generative tasks such as summarization and paraphrasing. ## 7 Conclusions All of the Hebrew LMs to date are encoder-only models, which could not directly generate morpheme sequences, and thus necessitate a specialized monolingual decoder. In this work we propose to take advantage of mT5, a publicly available multilingual large language model that was trained on a considerable amount of multilingual and Hebrew data. Additionally the generative approach of text-to-text modeling is more aligned with the morphological challenges inherent in Hebrew and by that dispense with the need for specially-tuned decoders. We fine-tuned and evaluated mT5 on a set of Hebrew downstream tasks and report dramatic improvements. Subsequently, we propose that multilingual sequence-to-sequence models provide a more suitable pretraining alternative for MRLs, compared with the smaller, monolingual, encoderonly models. ## 8 Acknowledgements We thank Dan Bareket and Eylon Guetta from Bar Ilan University for their help in sharing the UD and NEMO data. ## References Adam Amram, Anat Ben David, and Reut Tsarfaty. 2018. Representations and architectures in neural sentiment analysis for morphologically rich languages: A case study from Modern Hebrew. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2242–2252, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. In *Proceedings of the 4th* Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9–15, Marseille, France. European Language Resource Association. Dogu Araci. 2019. Finbert: Financial sentiment analysis with pre-trained language models. arXiv preprint arXiv:1908.10063. Dan Bareket and Reut Tsarfaty. 2021. Neural modeling for named entities and morphology (NEMO2). Transactions of the Association for Computational Linguistics, 9:909–928. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615– 3620, Hong Kong, China. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Idan Brusilovsky and Reut Tsarfaty. 2022. Neural token segmentation for high token-internal complexity. arXiv preprint arXiv:2203.10845. Branden Chan, Stefan Schweter, and Timo Möller. 2020. German's next language model. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 6788–6796, Barcelona, Spain (Online). International Committee on Computational Linguistics. Avihay Chriqui and Inbal Yahav. 2022. Hebert & hebemo: a hebrew bert model and a tool for polarity analysis and emotion recognition. *INFORMS Journal on Data Science*. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jesse Dodge, Maarten Sap, Ana Marasovic, William ´ Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286–1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Eylon Guetta, Avi Shmidman, Shaltiel Shmidman, Cheyn Shmuel Shmidman, Joshua Guedalia, Moshe Koppel, Dan Bareket, Amit Seker, and Reut Tsarfaty. 2022. Large pre-trained models with extra-large vocabularies: A contrastive analysis of hebrew bert models and a new one to outperform them all. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556. Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. In Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering, pages 161–175, Dublin, Ireland. Association for Computational Linguistics. Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021. q 2: Evaluating factual consistency in knowledgegrounded dialogues via question generation and question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7856–7870, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361. Omri Keren, Tal Avinari, Reut Tsarfaty, and Omer Levy. 2022. Breaking character: Are subwords good enough for mrls after all? *arXiv preprint* arXiv:2204.04748. Omri Keren and Omer Levy. 2021. ParaShoot: A Hebrew question answering dataset. In Proceedings of the 3rd Workshop on Machine Reading for Question Answering, pages 106–112, Punta Cana, Dominican Republic. Association for Computational Linguistics. Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation. NeurIPS 2018 Workshop on Interpretability and Robustness for Audio, Speech, and Language. Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric de la Clergerie, Djamé Seddah, and Benoît Sagot. 2020. CamemBERT: a tasty French language model. In *Proceedings of the 58th Annual Meeting of the* Association for Computational Linguistics, pages 7203–7219, Online. Association for Computational Linguistics. Naama Ben Mordecai and Michael Elhadad. 2005. Hebrew named entity recognition. *MONEY*, 81(83.93):82–49. 
Amir More, Amit Seker, Victoria Basmova, and Reut Tsarfaty. 2019. Joint transition-based models for morpho-syntactic parsing: Parsing strategies for MRLs and a case study from Modern Hebrew. Transactions of the Association for Computational Linguistics, 7:33–48. Pedro Javier Ortiz Suárez, Laurent Romary, and Benoît Sagot. 2020. A monolingual approach to contextualized word embeddings for mid-resource languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703–1714, Online. Association for Computational Linguistics. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language* Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4035–4045, Brussels, Belgium. Association for Computational Linguistics. Shoval Sade, Amit Seker, and Reut Tsarfaty. 2018. The Hebrew Universal Dependency treebank: Past present and future. In *Proceedings of the Second* Workshop on Universal Dependencies (UDW 2018), pages 133–143, Brussels, Belgium. Association for Computational Linguistics. Amit Seker, Elron Bandel, Dan Bareket, Idan Brusilovsky, Refael Greenfeld, and Reut Tsarfaty. 2022. AlephBERT: Language model pre-training and evaluation from sub-word to sentence level. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 46–56, Dublin, Ireland. Association for Computational Linguistics. Reut Tsarfaty, Dan Bareket, Stav Klein, and Amit Seker. 2020. From SPMRL to NMRL: What did we learn (and unlearn) in a decade of parsing morphologically-rich languages (MRLs)? In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7396– 7408, Online. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. ## A Running T5 On Common Gpus AlephBertGimmel (Guetta et al., 2022), the largest Hebrew LM to date, is roughly the same size as BERT base (Devlin et al., 2019), and even though T5 is 60 times larger than AlephBertGimmel, we do not need to horizontally scale our hardware accelerators by 60 to accommodate it. 
As models have grown in size, hardware accelerators have also become more advanced. T5 Small, Base and Large can all be fine-tuned on a 2016 Nvidia P100 or 2017 Nvidia V100 GPU accelerators. T5 XL and XXL can be fine-tuned on the 2020 Nvidia A100 GPU, the same accelerator used for pretraining AlephBERTGimmel. Given the widespread availability of these GPU accelerators, we argue that the T5 models we evaluate in this work can be easily fine-tuned and deployed nowadays. ## B Qualitative Evaluation Of Mt5 On The Question-Answering Task The mT5-small model performs similarly to previous state-of-the-art models on the QuestionAnswering task of ParaShoot (Keren and Levy, 2021). We conducted a qualitative analysis of mT5- XXL compared with mT5-small, as a way to analyse the impact of model size while holding other factors constant, and in order to compare to the performance of previous state-of-the-art models. We ran our mT5 experiments using 3 seeds with the best performing model, mT5-XXL, achieving 77.99 F1 and 50.63 EM scores. Our worst performing model, mT5-small, reached 47.67 F1 and 24.39 EM scores. From the 519 exact match prediction mT5-XXL model made, 167 of which mT5-small ![6_image_0.png](6_image_0.png) received F1 scored of 0. Based on a manual evaluation of the errors made by mT5-small, it can be concluded that the model often struggled with comprehending the fundamental meaning of the question in many instances. As an illustrative example, here the model mixes *when* and *where*: Context:7 לאחר תבוסת צרפת במסגרת המערכה על צרפת החליט מפקד הצבא הצרפתי במושׁבה פול לואי לזאנטיוM להמשׁיK בלחימה לצד צרפת החופשׁית! Question:8 מתי החליט מפקד הצבא להמשׁיK בלחימה לצד צרפת החופשׁית ! The gold and mT5-XXL prediction is:9 לאחר תבוסת צרפת במסגרת המערכה על צרפת! mT5 small's model predicted:10 .במושׁבה פול לואי לזאנטיוM! As known to be a problem with generative models, both mT5 models made several hallucination errors, returning answers that were not part of the original context. Additionally, mT5-XXL failed to answer 49 questions correctly which mT5-small was able to provide accurate responses for them. However, for only three of these questions, mT5- XXL received an F1 score of 0. Upon manual evaluation of these errors, it was found that two of them are alternative correct answers. 7Context translated to English: After France's defeat in The Campaign for France, the commander of the French army in the colony, Paul Legentilhomme, decided to continue fighting with the Free French Forces 8Question translated to English: When did the army's commander decide to continue fighting with the Free French Forces? 9The gold and mT5-XXL prediction translated to English: After France's defeat in the campaign for France 10mT5 small's model predicted translated to English: In colony Paul Legentilhomme ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 ✓ A2. Did you discuss any potential risks of your work? 7 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 3,4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
song-etal-2023-multilingual
Multilingual Knowledge Graph Completion from Pretrained Language Models with Knowledge Constraints
https://aclanthology.org/2023.findings-acl.488
Multilingual Knowledge Graph Completion (mKGC) aims at solving queries in different languages by reasoning a tail entity, thus improving multilingual knowledge graphs. Previous studies leverage multilingual pretrained language models (PLMs) and the generative paradigm to achieve mKGC. Although multilingual pretrained language models contain extensive knowledge of different languages, their pretraining tasks cannot be directly aligned with the mKGC tasks. Moreover, the majority of KGs and PLMs currently available exhibit a pronounced English-centric bias. This makes it difficult for mKGC to achieve good results, particularly in the context of low-resource languages. To overcome these problems, this paper introduces global and local knowledge constraints for mKGC. The former is used to constrain the reasoning of answer entities, while the latter is used to enhance the representation of query contexts. The proposed method makes the pretrained model better adapt to the mKGC task. Experimental results on public datasets demonstrate that our method outperforms the previous SOTA on Hits@1 and Hits@10 by an average of 12.32% and 16.03%, which indicates that our proposed method yields a significant enhancement on mKGC.
## Multilingual Knowledge Graph Completion From Pretrained Language Models With Knowledge Constraints Ran Song1,2, Shizhu He3,4, Shengxiang Gao1,2, Li Cai5, Kang Liu3,4, Zhengtao Yu1,2 ∗ , and Jun Zhao3,4 1 Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China 2 Yunnan Key Laboratory of Artificial Intelligence, Kunming, China 3 The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China 4 School of Artificial Intelligence, University of Chinese Academy of Science, Beijing, China 5 Meituan, Beijing, China {song_ransr}@163.com, {shizhu.he,kliu,jzhao}@nlpr.ia.ac.cn, caili03@meituan.com, {gaoshengxiang.yn,ztyu}@hotmail.com ## Abstract Multilingual Knowledge Graph Completion (mKGC) aim at solving queries like (*h, r,* ?) in different languages by reasoning a tail entity t thus improving multilingual knowledge graphs. Previous studies leverage multilingual pretrained language models (PLMs) and the generative paradigm to achieve mKGC. Although multilingual pretrained language models contain extensive knowledge of different languages, its pretraining tasks cannot be directly aligned with the mKGC tasks. Moreover, the majority of KGs and PLMs currently available exhibit a pronounced English-centric bias. This makes it difficult for mKGC to achieve good results, particularly in the context of low-resource languages. To overcome previous problems, this paper introduces global and local knowledge constraints for mKGC. The former is used to constrain the reasoning of answer entities, while the latter is used to enhance the representation of query contexts. The proposed method makes the pretrained model better adapt to the mKGC task. Experimental results on public datasets demonstrate that our method outperforms the previous SOTA on Hits@1 and Hits@10 by an average of 12.32% and 16.03%, which indicates that our proposed method has significant enhancement on mKGC. ## 1 Introduction Knowledge graphs are collections of entities and facts, and utilized as a valuable resource in a variety of natural language processing (NLP) tasks, such as Question Answering and Recommender Systems (Shah et al., 2019; Du et al., 2021; Wang et al., 2019). The language-specific nature of many NLP tasks necessitates to consider the knowledge ∗ Corresponding author ![0_image_0.png](0_image_0.png) Golden Anwser: 鳥取市営サッカー Axis Bird Stadium 鳥取市営サッカー場 (A Japanese Stadium) Yodoko Sakura Stadium Golden Anwser: Halbautomatik Semi-automatic Golden Anwser: Irak Figure 1: The top part introduces unbalance language distribution for DBpedia. The low part shows the sampling comparison results of Prix-LM model and our method. The type of prediction entity and the correct answer are shown in brackets and red font, respectively. Our approach exhibits superior consistency and accuracy in generating answers. expressed in a particular language. For example, multilingual question answering needs multilingual knowledge graphs (Zhou et al., 2021). The utilization of multilingual knowledge graphs (mKGs) with a vast amount of knowledge in multiple languages, such as DBpedia (Lehmann et al., 2015), Wikidata (Vrandeciˇ c and Krötzsch ´ , 2014), can be advantageous in plenty of NLP tasks (Zhou et al., 2021; Fang et al., 2022). There is a significant amount of potential facts that have not been captured in current knowledge graphs, resulting in their incompleteness (Chen et al., 2020). 
To address this issue, various studies have proposed for Knowledge Graph Completion (KGC) to automatically discovery potential facts through observed facts (Bordes et al., 2013), rules (Meilicke et al., 2019) and language models (Lv et al., 2022). In fact, as shown in Figure 1, there is more English-centric knowledge than other languages, so that it is difficult to leverage knowledge graphs on non-English tasks. For example, English-centric commonsense reasoning tasks obtain better development and performance than other languages (Lin et al., 2021a). And the knowledge coverage of nonEnglish knowledge graphs is even worse, it will poses challenges for traditional KGC methods to achieve superior performance. Nowadays, pretrained language models (PLMs) learn various knowledge modeling capabilities (Petroni et al., 2019; Jiang et al., 2020) from massive unlabeled data. And most studies have demonstrated that the knowledge contained within PLMs can significantly improve the performance of downstream tasks (Li et al., 2021; Lin et al., 2021b). Most recently, Prix-LM (Zhou et al., 2022) approached mKGC as an end-to-end generative task using multilingual PLMs. For example, for predicting the missing entity of the query (86th All Japan Football Championship, Stadium, ?) (see Figure 1), Prix-LM converts the query into a sequence with pre-defined template, which is then processed by an encoder to generate a query representation. The decoder then uses this representation to generate the final answer *Axis Bird Stadium*. Despite the successes achieved through the combination of PLMs and the generative paradigm, there remain limitations for mKGC. On the one hand, the gap between the pretraining task and the KGC task may contribute to the limitations. It arise that the answers generated by Prix-LM are ambiguous in type. On the other hand, languages and tokens that occur more frequently in the pretraining data have richer representations. Linguistic bias for KGs and PLMs would arise that entities in low-resource languages are difficult to be represented, resulting answer incorrect. As illustrated in Figure 1, the query *(86th All Japan Football Championship, stadium, ?)* expects a response of the type stadium, but the top-ranked answers from Prix-LM are diverse, and the top answer is incorrect. We argue that the incorporation of knowledge constraints into the generation process can increase PLMs suitability for mKGC tasks. We categorize knowledge effective for mKGC into global and local knowledge. Global Knowledge limit the types of answers based on building the relationship of entity and relation representations. This helps to ensure that the generated answers are semantically and logically consistent with the intent of query. On the other hand, local knowledge in PLMs can enhance the ability to comprehend the interconnections between the sub-tokens within the query. This helps the model to better understand the context of query and generate more accurate answers. Incorporating knowledge constraints into the generative process brings two advantages for mKGC: 1) It makes PLMs to better adapt to mKGC task. 2) It enables PLMs to learn more effective representations from low-resource data. In this paper, we propose to incorporate the global and local knowledge into the answer generation process through two knowledgeable tasks. To learn global knowledge, special tokens ([H],[R],[T]) are introduced as semantic representations of head entity, relation, and tail entity in a triple. 
A scoring function measures the plausibility of the resulting facts, such as ||h[H] + h[R] − h[T]||L1/2 . Since the same special token is used in each triple in different languages, trained models are able to learn knowledge reasoning ability beyond language boundaries. To capture local knowledge, we consider the representation of answer and each word of query as two separate distributions P(Hq) and P(H[T]), and then use an estimator to estimate and maximize the mutual information between them I(Hq; H[T]). The local knowledge serves to augment the query representations for trained model through the utilization of minimal amounts of data. The experimental results on seven language knowledge graph from DBpedia show that our proposed method achieves significant improvement as compared to Prix-LM and translated-based methods. We publicize the dataset and code of our work at https: //github.com/Maxpa1n/gcplm-kgc. In short, our main contributions are as follows: - We attempt to utilize diverse knowledge constraints to enhance the performance of PLMbased mKGC. It effectively addresses the inconsistency of PLM and mKGC task, and alleviates language and data bias from PLMs and KGs. - Our proposed method can enrich query representation and enhance answer generation by ![2_image_0.png](2_image_0.png) Context features from another query Answer Generation L M el H Los Angeles Lakers [E] introducing global knowledge constraints for entity placeholders and mutual information constraints for other contextual symbols. - Our proposed method outperforms the PrixLM (Zhou et al., 2022) in both mKGC and cross-lingual entity alignment, as shown by experiments on a public dataset. The performance of our method on Hits@1, Hits@3, and Hits@10 shows an average improvement of 12.32%, 11.39%, and 16.03%, respectively. ## 2 Basic Model A knowledge graph G = (R, E) is a collection of connected information about entities, often represented using triples (*h, r, t*) where r ∈ R is relation and h, t ∈ E are entities. Prix-LM is an important work of mKGC and is also used as the basic model in this paper. Prix-LM transfer link prediction from discriminative task to generative task for mKGC. The goal of mKGC is to generate the missing tail entity, which may contain multiple tokens, for the query (*h, r,* ?) of different languages. The use of template is employed as a means of transforming queries into textual sequences that can be encoded by PLMs. The template includes special tokens, which serve to identify the specific role of each element within the query triple: <s>[H]Xh</s></s>[R]Xr</s></s>[T]Xt[E] where <s> is beginning token of sentence and </s> is the separator, both are applied in PLMs, as known as [CLS] and [SPE]. [H], [R] and [E] are additional special tokens for the representation of head, relation and tail. [E] is the end-of-sequence token. Xh ∈ {x h 1 , x h 2 , x h 3 , ..., x hn} are text words of head entity, Xr and Xtin the same way. The training goal is to generate the tail entity Xt by giving the sequence containing the head entity Xh and relation Xr. For example, for the query *(LeBron James, team member of, ?)*, the constructed sequence is <s>[H] *LeBron James*</s></s>[R] *team* member of </s></s>[T], and the target of mKGC is generate *Los Angeles Lakers* [E]. The process is as follows: $$P_{\theta}(X_{t}|X_{h},X_{r})=\prod_{x_{i}\in X_{t}}^{x_{i}}p(x_{i}|x_{<i},\theta)\quad\quad(1)$$ where θ is the pretrained model parameter. 
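As an illustration of the serialization above, the following sketch builds the template strings (under the assumption that [H], [R], [T], [E] are registered as additional special tokens of the tokenizer; spacing and tokenization details are glossed over, and this is not the released Prix-LM code):

```python
def build_training_sequence(head, relation, tail):
    """Serialize a triple into the template
    <s>[H] head </s></s>[R] relation </s></s>[T] tail [E],
    using the <s> and </s> markers of the underlying PLM."""
    return f"<s>[H] {head}</s></s>[R] {relation}</s></s>[T] {tail}[E]"

def build_query_sequence(head, relation):
    """At inference time only the query part is given; the model must
    continue the sequence with the tail-entity tokens followed by [E]."""
    return f"<s>[H] {head}</s></s>[R] {relation}</s></s>[T]"

print(build_query_sequence("LeBron James", "team member of"))
# <s>[H] LeBron James</s></s>[R] team member of</s></s>[T]
```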
According to the mechanism of causal language model, the probability of i-th token depend on previous token representation hi−1: $$p(x_{i}|X_{<i})=\operatorname{softmax}(\mathbf{Wh}_{i-1})$$ $$\mathbf{x}(\mathbf{Wh}_{i-1})$$ p(xi|X<i) = softmax(Whi−1) (2) where W is causal language model decoder from PLMs. The utilization of PLMs for generating answers directly can be subject to language bias, resulting in ambiguous and incorrect answers. The representation of the special token [T] is a crucial factor in $$<\!\!/{\mathrm{s}}\!>\!\![\![{\mathrm{T}}]X_{t}]$$ determining the quality of the generated answers. To improve the representation of the [T] token, we have implemented two supplementary strategies aimed at incorporating additional knowledge into its representation. ## 3 The Proposed Model In this section, we describe the components of our proposed approach. The architecture of the model is depicted in Figure 2. Our approach comprises four key components: a query encoder, a global knowledge constraint, a local knowledge constraint, and an answer generation module. These components operate in tandem to generate accurate and coherent answers for given queries. ## 3.1 Triple Encoder We leverage the PLM to encode the triple and an attention mask to control the access to each subtoken in the sequence during training process. We use previous template to convert a triple (*h, r, t*) to a sequence S(h,r,t) ∈ {Xh, Xr, Xt, Xa}, and Xa is special token. The attention mask mechanism allows the query sequence to be seen as the source text and the answer entity as the target text. The process as following: $$\mathrm{PLM}(S_{(h,r,t)})=H$$ $${\mathrm{\boldmath~on~}}{\mathrm{\boldmath~of~\triple~}}{\mathrm{\boldmath~is~}}H$$ where hidden representation of triple is H ∈ {h [H], h h 1 , .., h [R], h r1 , ..., h [T], h t1 , ..., h [E]}. The attention mask is a matrix that specifies whether each subtoken should be attended or ignored, as illustrated in Figure 3. By making special tokens only visible to their own subtokens, model can effectively separate each role in a triple. And the mask matrix M add in attention score calculated by query Q, key K, value V: $$\mathbf{M}=\begin{cases}0,&\text{allow to attend}\\ -\infty,&\text{prevent from attending}\end{cases}\tag{4}$$ $$\mathbf{A}=\text{softmax}(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d}}+\mathbf{M})\mathbf{V}\tag{5}$$ where $\mathbf{Q}$, $\mathbf{K}$, $\mathbf{V}\in\mathbb{R}^{l\times d}$, $l$ is length of the input $${\mathrm{}}^{(4)}$$ where Q, K, V ∈ R sequence, and d is the hidden size. ## 3.2 Global Knowledge Constraint To bridge the gap between the pretraining task and the KGC task, we introduce the global knowledge build logical relationship between entities. Unlike previous approaches such as Prix-LM, our method does not rely on cross-lingual links for equivalent entities to learn shared knowledge in different languages. Instead, shared knowledge between languages is learned through the global knowledge constraint, which is inspired by embedding-based methods. We leverage the TransE framework in our model, and methods such as CompleX, RotatE are also applicable. The goal of the global knowledge constraint is to represent entities and relation in a semantic space and enforce the translational principle: h + r ≈ t: $$\|\mathbf{h}_{[H]}+\mathbf{h}_{[R]}\|=\|\mathbf{h}_{[T]}\|\qquad\qquad(\mathbf{6})$$ where h[H], h[R], h[T] are special tokens representation, and ∥.∥ is L1 norm. 
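Two of the pieces just described lend themselves to short sketches: the additive attention mask of Eqs. 4–5 and the translational constraint on the special-token states (Eq. 6, with the score function given next). The PyTorch-style code below is illustrative only; the paper does not give implementation details.

```python
import torch
import torch.nn.functional as F

def masked_attention(Q, K, V, visible):
    """Scaled dot-product attention with the additive mask of Eqs. 4-5.
    `visible[i, j]` is a boolean saying whether position i may attend to
    position j; blocked positions receive -inf logits, i.e. the matrix M."""
    d = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d ** 0.5   # (l, l) attention logits
    scores = scores.masked_fill(~visible, float("-inf"))
    return F.softmax(scores, dim=-1) @ V          # A in Eq. 5

def global_score(h_head, h_rel, h_tail):
    """Translational plausibility of a triple from the hidden states of the
    [H], [R] and [T] tokens: ||h_[H] + h_[R] - h_[T]||_1."""
    return torch.norm(h_head + h_rel - h_tail, p=1, dim=-1)
```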
And a triple global knowledge score is described by: $$s c o r e(h,r,t)=\|\mathbf{h}_{[H]}+\mathbf{h}_{[R]}-\mathbf{h}_{[T]}\|\quad(7)$$ We use the same special tokens for different languages. The following loss function is used to optimize the model. $${\mathcal{L}}_{p}=\sum_{g_{j}\in G}^{g_{j}}\sum_{(h,r,t)_{i}\in g_{j}}^{(h,r,t)_{i}}(s c o r e(h_{i},r_{i},t_{i})+\gamma)\quad(8)$$ $$(3)$$ where G is all language knowledge graphs set, and γ is correction factor. ![3_image_0.png](3_image_0.png) ℎ 3 ℎ [R] 1 2 3 [T] 1 2 [E] $$({\mathfrak{S}})$$ ## 3.3 Local Knowledge Constraint The local knowledge enables the model to learn more accurately for generated answers with lowresource data. Therefore, we consider establishing the connection between query and answer in a triple. Specifically, we view the the representation of query words Hq and tail entity H[T] as two distributions and maximizing the mutual information between them I(Hq, H[T]). The theoretical foundation for this idea is provided by MIEN (Belghazi et al., 2018), which demonstrates that mutual information follows a parametric lower-bound: $$I(H_{q};H_{[T]})\geq\hat{I}_{\theta}(H_{q};H_{[T]})$$ Inspired from previous Mutual Information Maximization (Tschannen et al., 2019; Zhang et al., 2020) (MIM) method in unsupervised learning, we take the local features, represented by Hq, and the global features, represented by H[T], as the inputs for MIM. Benefit from the mask mechanism and PLM's powerful learning capability, we do not strictly distinguish the parameter of encoder and decoder different from previous works. In this work, we select a Jensen-Shannon MI estimator to parameterize Mutual Information: $$\tilde{I}_{\theta}^{(JSD)}(H_{q},H_{[T]}):=$$ $$E_{\mathbb{P}}[-sp(T_{\theta}(H_{q},H_{[T]}))]\tag{10}$$ $$-E_{\mathbb{P}\times\mathbb{P}}[sp(T_{\theta}(H_{q}^{\prime},H_{[T]}))]$$ where $H_{q}\in\{\mathbf{h}_{1}^{h},\ldots,\mathbf{h}_{m}^{h},\mathbf{h}_{1}^{r},\ldots,\mathbf{h}_{n}^{r}\}$ is query words representation, m is head entity length, n is relation length. H[T] ∈ {h[T]} is tail entity representation. Tθ is a discriminator function support by the PLM parameters. H ′ q is representation sampled from other query in the same min batch. And P = Pe make guarantee the expectation easy to calculated. sp(x) = log(1 + e x) is the softplus activation function. The learning object is to make PLM estimate and maximize the Mutual Information: $$\theta=\operatorname{argmax}_{\theta}{\frac{1}{|G|}}\sum_{b_{j}\in G}^{b_{j}}{\hat{I}}_{\theta}^{(J S D)}(H_{q}^{j},H_{[T]}^{j})\quad(11)$$ where bj is mini batch from training dataset. To optimize model by gradient descent, we set loss function as following: $${\mathcal{L}}_{E}=\sum_{b_{j}\in G}^{b_{j}}(E_{\mathbb{P}}^{j}-E_{\mathbb{P}}^{j})\qquad\qquad(12)$$ where the E j P is expectation for query and tail entity. The local knowledge constraint within PLM enhance its capacity to obtain rich representations of queries and tail entities, particularly in situations where training data is limited. ## 3.4 Answer Generation Module Follow the paradigm that given a serialized query and generate answer token, we use the casual language model with PLM. The generation loss function as Cross Entropy Loss function: $${\mathcal{L}}_{G}=\sum_{(h,r,t)_{i}\in G}^{i}\sum_{x_{j}\in X_{i}^{t}}^{x_{j}}x_{j}l o g(f(x_{<j}))\quad\quad(13)$$ $$(9)$$ where the f(·) is like Formula 2, xj is subtoken of tail entity. 
During training, the model generates answers with both global and local knowledge, and we define the overall loss as:

$${\mathcal{L}}={\mathcal{L}}_{G}+\alpha{\mathcal{L}}_{P}+\beta{\mathcal{L}}_{E}\tag{14}$$

where $\alpha$ and $\beta$ are hyperparameters. The mask mechanism enables all subtokens of the tail entity to be trained in one round.

## 3.5 Inference

During the inference phase, our model generates the tokens of the tail entity for a given query autoregressively, i.e., the next token is predicted based on the previous tokens. The query (*h, r,* ?) is converted into a sequence $X_q$, and the trained model generates the answer entity as follows:

$$x_{i}=\operatorname*{argmax}_{x_{i}}P(x_{i}|X_{q},x_{1},\cdots,x_{i-1})\tag{15}$$

where $x_i \in X_t$. Additionally, we assume a closed-world setting and utilize constrained beam search to restrict the final output to a predefined set of candidates, in order to ensure the validity of the generated answer.

## 4 Experiments

In this section, we evaluate the effectiveness of our approach on tasks related to mKGC and entity alignment for mKGs. To further understand the role of the various knowledge-gathering strategies in our method, we also conduct ablation experiments. Additionally, we provide case studies to demonstrate the superior performance of our method on specific examples. These experiments and analyses provide insight into the strengths and limitations of our approach for addressing challenges in mKGC for sparse knowledge graphs.

| Metric | Model | DE | FI | FR | HU | IT | JA | TR | AVG |
|---|---|---|---|---|---|---|---|---|---|
| Hits@1 | TransE | 0.00 | 0.01 | 0.02 | 0.03 | 0.04 | 0.02 | 0.06 | 0.02 |
| | ComplEx | 4.09 | 2.45 | 2.50 | 3.28 | 2.87 | 2.41 | 1.00 | 2.65 |
| | RotatE | 6.72 | 5.87 | 8.40 | 16.27 | 6.91 | 6.21 | 6.85 | 8.17 |
| | Prix-LM (Single) | 12.86 | 19.81 | 18.01 | 28.72 | 16.21 | 19.81 | 23.79 | 19.88 |
| | Prix-LM | 14.32 | 18.78 | 16.47 | 29.68 | 14.32 | 18.19 | 21.57 | 19.04 |
| | Ours | 17.54 | 20.74 | 18.34 | 30.91 | 14.98 | 22.05 | 25.20 | 21.39 |
| Hits@3 | TransE | 6.14 | 6.54 | 6.60 | 14.91 | 5.95 | 7.22 | 8.20 | 7.93 |
| | ComplEx | 8.47 | 5.28 | 5.19 | 6.70 | 4.31 | 4.68 | 2.11 | 5.24 |
| | RotatE | 10.52 | 7.42 | 14.62 | 21.75 | 12.11 | 9.75 | 11.29 | 12.49 |
| | Prix-LM (Single) | 23.09 | 28.75 | 24.75 | 38.44 | 25.32 | 29.02 | 33.05 | 28.91 |
| | Prix-LM | 23.68 | 29.54 | 23.15 | 39.80 | 25.46 | 27.01 | 31.45 | 28.58 |
| | Ours | 30.40 | 29.74 | 26.36 | 44.18 | 27.03 | 30.79 | 35.48 | 31.99 |
| Hits@10 | TransE | 17.54 | 17.80 | 15.26 | 29.00 | 14.16 | 20.65 | 19.35 | 19.10 |
| | ComplEx | 9.35 | 8.21 | 8.91 | 16.96 | 8.76 | 8.23 | 5.24 | 9.38 |
| | RotatE | 14.61 | 8.61 | 19.49 | 28.31 | 18.48 | 14.44 | 17.13 | 17.29 |
| | Prix-LM (Single) | 33.82 | 38.91 | 34.04 | 47.31 | 36.61 | 38.81 | 38.50 | 38.28 |
| | Prix-LM | 33.91 | 41.29 | 32.25 | 46.23 | 35.18 | 36.12 | 37.50 | 37.49 |
| | Ours | 41.81 | 43.44 | 35.15 | 58.00 | 39.15 | 42.45 | 44.55 | 43.50 |

Table 1: mKGC results in terms of Hits@1, Hits@3, and Hits@10.

## 4.1 Datasets And Evaluation Metrics

To evaluate our method, we utilize the Link Prediction dataset provided by Prix-LM (Zhou et al., 2022) and split it under the closed-world setting. The dataset consists of data from DBpedia, a large multilingual knowledge graph, and the amount of data is shown in Table 2. We ensure that entities and relations appearing in the validation and test sets are included in the training set. We introduce the ratio between entities and triples as a measure of the knowledge density of the dataset. This ratio has a lower bound of 0.5, which indicates that there are no cross-links between triples. The ratio of our dataset is much lower than that of publicly available datasets. The evaluation metrics we use are standard Hits@1, Hits@3, and Hits@10, which are commonly used in the evaluation of KGC methods.
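Since all results in Section 4 are reported as Hits@k, the metric itself is easy to state precisely. The sketch below is a generic illustration under our own assumptions about the data layout (a ranked candidate list per query), not the authors' evaluation script:

```python
from typing import Dict, List

def hits_at_k(ranked_predictions: List[List[str]], gold: List[str],
              ks=(1, 3, 10)) -> Dict[int, float]:
    """ranked_predictions[i] is the candidate list for query i, best first;
    gold[i] is the reference tail entity. Returns Hits@k in percent."""
    totals = {k: 0 for k in ks}
    for preds, answer in zip(ranked_predictions, gold):
        for k in ks:
            if answer in preds[:k]:
                totals[k] += 1
    n = len(gold)
    return {k: 100.0 * totals[k] / n for k in ks}

# Toy usage with two queries.
preds = [["Berlin", "Hamburg", "Munich"], ["Tokyo", "Kyoto", "Osaka"]]
gold = ["Hamburg", "Nagoya"]
print(hits_at_k(preds, gold, ks=(1, 3)))  # {1: 0.0, 3: 50.0}
```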
## 4.2 Implementation Details

In our experiments, we used XLM-R (Base) as the base pre-trained language model and did not introduce any additional parameters beyond those provided by XLM-R. The model was implemented with the Huggingface Transformers library (Wolf et al., 2020), and the hyperparameters α and β were set to 0.001 and 0.005, respectively. The learning rate and batch size were selected from {4e-5, 5e-5} and {128, 256}, respectively, and the maximum length of a triple sequence was 35. The model was trained on a single NVIDIA RTX 3090 GPU.

## 4.3 Multilingual Knowledge Graph Completion

Our method for mKGC was compared to various embedding-based methods and to Prix-LM on knowledge graphs in seven languages, as shown in Table 1. The results show that our method outperformed Prix-LM on the metrics of Hits@1, Hits@3, and Hits@10, with average improvements of 12.32%, 11.39%, and 16.03%, respectively. These improvements suggest that the integration of both global and local knowledge significantly enhances the effectiveness of the mKGC task, leading to a higher ability to accurately predict missing triples in KGs.

| Split | | DE | FI | FR | HU | IT | JA | TR | AVG |
|---|---|---|---|---|---|---|---|---|---|
| Training | Entity | 39,842 | 36,892 | 106,955 | 27,765 | 86,988 | 68,279 | 29,120 | 56,549 |
| | Relation | 1,544 | 945 | 2,358 | 999 | 1,539 | 2,542 | 1,008 | 1,562 |
| | Triple | 27,014 | 28,040 | 83,940 | 24,193 | 66,904 | 50,164 | 24,013 | 43,467 |
| Validation | Entity | 501 | 766 | 2,452 | 988 | 2,240 | 1,206 | 749 | 1,272 |
| | Relation | 122 | 142 | 362 | 142 | 257 | 303 | 101 | 204 |
| | Triple | 264 | 435 | 1,407 | 614 | 1,286 | 671 | 432 | 730 |
| Testing | Entity | 649 | 916 | 2,687 | 1,154 | 2,499 | 1,414 | 822 | 1,449 |
| | Relation | 135 | 147 | 377 | 154 | 271 | 322 | 95 | 214 |
| | Triple | 342 | 511 | 1,559 | 731 | 1,461 | 789 | 496 | 841 |
| T/E Ratio | | 0.69 | 0.79 | 0.81 | 0.92 | 0.80 | 0.76 | 0.76 | 0.79 |

Table 2: Dataset statistics for each language.

| Metric | Model | zh-km | zh-th | zh-lo | zh-my |
|---|---|---|---|---|---|
| H1 | Prix-LM | 65.02 | 22.42 | 62.15 | 41.56 |
| | Ours | 67.25 | 24.07 | 62.15 | 43.49 |
| H3 | Prix-LM | 68.37 | 26.87 | 64.27 | 46.44 |
| | Ours | 69.23 | 27.65 | 64.80 | 47.65 |
| H10 | Prix-LM | 70.15 | 30.50 | 66.12 | 49.08 |
| | Ours | 70.15 | 30.35 | 66.27 | 50.73 |

Table 3: Cross-lingual entity alignment results.

It is worth noting that the low knowledge density in the training set can hinder the performance of traditional embedding-based methods, which rely on the presence of sufficient training data to learn meaningful relationships between entities. In contrast, the use of PLMs, as employed in our method, can effectively address the issue of data sparsity and still achieve a notable impact on performance. Overall, these results demonstrate the effectiveness of our approach in comparison to the use of PLMs alone for mKGC.

## 4.4 Cross-Lingual Entity Alignment

To assess the generalizability of the proposed method, we conduct a comparison on the entity alignment task. As shown in Table 3, we compared the proposed method with Prix-LM. This comparison allowed us to assess the performance of the proposed method on a different task and determine its potential for use in a wider range of applications. The results of the comparison indicate that our proposed method outperforms Prix-LM on most of the evaluation metrics.
This suggests that our method generalizes well to different tasks and achieves improved performance on the entity alignment task. Counterintuitively, the results show that languages with fewer resources tend to yield better performance. This may be because the relationships between low-resource entity pairs are relatively simple and easier for the model to learn.

## 4.5 Ablation Experiment

Our proposed method introduces a novel approach for extracting both global and local knowledge through the use of a scoring function and the maximization of mutual information. As shown in Table 4, we conducted an extensive comparison with various alternatives for the scoring function and the mutual information estimator, showcasing the superior performance of the proposed method.

| Model | Hits@1 | Hits@3 | Hits@10 |
|---|---|---|---|
| Prix-LM | 19.04 | 28.58 | 37.49 |
| Ours | 21.39 | 31.99 | 43.50 |
| w/o local | 21.11 | 31.64 | 42.23 |
| w/o global | 19.45 | 29.51 | 42.71 |
| w/o mask | 20.61 | 30.12 | 42.11 |
| Ours+RotatE | 21.15 | 31.43 | 43.32 |
| Ours+ComplEx | 19.71 | 30.52 | 41.87 |
| Ours+GAN | 20.71 | 31.36 | 43.14 |
| Ours+DV | 18.98 | 29.61 | 41.60 |

Table 4: Ablation results.

We also verified the effect of each module. Our findings indicate that the use of global features is better than local features, and that the difference between the local and global results on the Hits@10 metric is minimal. This supports our expectation that local features improve the ranking of entity types. Using the mask matrix reduces some of the noise and allows faster convergence, which is the key to improving performance.

## 4.6 Answer Length Comparison

As shown in Figure 5, we compared the performance of the proposed method on answers of different lengths to assess its robustness. The results of this comparison demonstrate that the proposed method exhibits strong performance across a range of answer lengths, indicating its ability to handle diverse inputs effectively. The results show that our method outperforms the baseline in terms of Hits values for answers of various lengths, with particularly strong performance on short answers.

## 4.7 Case Study

As shown in Figure 4, we compare the performance of our method with Prix-LM on a set of real examples. The predicted answers generated by both methods are presented and analyzed in order to evaluate the effectiveness of each approach for mKGC.

![7_image_0.png](7_image_0.png)

Figure 4: Case study examples comparing the predictions of Prix-LM and our method (columns: Language, Query, Answer, Prediction by Prix-LM, Prediction by Ours).

The results of these case studies provide additional evidence for the effectiveness of our approach in comparison to the baseline model. Our analysis of the top three cases reveals that our method produces a higher number of predictions of the same type as the correct answer compared to the baseline model.
This finding suggests that our approach effectively addresses the task bias and demonstrates the adaptability of the PLM to the KGC task. Although the predicted answer types in the bottom three examples are all the same, our method is still able to identify the correct answer accurately. This demonstrates the robustness and effectiveness of our approach in generating accurate results even in situations where the types of the predicted answers are similar.

## 5 Related Work

## 5.1 Embedding-Based Methods For KGC

There has been a substantial amount of research focused on developing embedding-based methods for finding potential knowledge within a knowledge graph (Wang et al., 2017; Dai et al., 2020). These methods typically represent entities and relations within the graph as low-dimensional vector embeddings. For example, TransE (Bordes et al., 2013) makes entity and relation vectors follow the translational principle h + r = t. The choice of scoring function and the specific vector space used can have a significant impact on performance, as in RotatE (Sun et al., 2019), TransH (Wang et al., 2014), HolE (Nickel et al., 2016), and ComplEx (Trouillon et al., 2016). However, embedding-based methods may not fully consider the background knowledge that is implicit in the text associated with entities and relations.

## 5.2 Pretrained Language Models For KGC

Recently, some research has leveraged pretrained language models for the KGC task. These methods represent entities and relations with PLMs and assign high scores to positive triples (Lv et al., 2022; Kim et al., 2020). This manner enables the introduction of knowledge that has already been learned by PLMs. To fully utilize the PLM, some research focuses on the generative paradigm for knowledge graph construction (Ye et al., 2022). GenKGC (Xie et al., 2022) transforms knowledge graph completion into a sequence-to-sequence generation task based on a pretrained language model and proposes relation-guided demonstration and entity-aware hierarchical decoding. COMET (Bosselut et al., 2019) proposes the Commonsense Transformer to generate commonsense knowledge automatically. KGT5 (Saxena et al., 2022) treats KG link prediction as a sequence-to-sequence task based on a single encoder-decoder Transformer, which reduces the model size compared with embedding-based methods. While previous efforts to utilize PLMs for KGC have demonstrated effectiveness, they have not fully considered the inherently knowledge-based nature of KGC tasks. This oversight may hinder the full potential of such models in addressing the unique challenges and requirements of KGC.

## 6 Conclusion

Our work improves multilingual knowledge graph completion based on PLMs and the generative paradigm. We propose two knowledgeable tasks to integrate global and local knowledge into answer generation for a given query. The global knowledge improves the type consistency of the generated answers, and the local knowledge enhances the accuracy of answer generation.
We conducted experiments, and the results show that the proposed method outperforms the previous models.

## 7 Limitations

While our approach effectively predicts the relationships between entities in a knowledge graph, there are limitations in the scope of knowledge graph resources that can be modeled. The knowledge graph contains a vast array of resources, including attributes, descriptions, and images, which are not easily captured by embedding-based methods but can be effectively modeled using PLMs. To improve the compatibility of KGC with actual needs, it is necessary to consider a broader range of data types in the knowledge graph and develop complementary methods to effectively incorporate them.

## 8 Ethics Statement

This paper proposes a method for multilingual knowledge graph completion, and the experiments are conducted on publicly available datasets. As a result, there is no data privacy concern. Meanwhile, this paper does not involve human annotation, and there are no related ethical concerns.

## 9 Acknowledgements

This work was supported by the National Natural Science Foundation of China (U21B2027, 61972186, 62266027, 62266028, U1936207, 61976211) and Strategic Priority Research Program of Chinese Academy of Sciences (No.XDA27020000). This research work was supported by the Youth Innovation Promotion Association CAS, Yunnan Provincial Major Science and Technology Special Plan Projects (202202AD080004, 202202AD080003, 202103AA080015), and General Projects of Basic Research in Yunnan Province (202301AS070047).

## References

Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. 2018. Mutual information neural estimation. In *International Conference on Machine Learning*, pages 531–540. PMLR.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. *Advances in Neural Information Processing Systems*, 26.

Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4762–4779, Florence, Italy. Association for Computational Linguistics.

Zhe Chen, Yuehan Wang, Bin Zhao, Jing Cheng, Xin Zhao, and Zongtao Duan. 2020. Knowledge graph completion: A review. *IEEE Access*, 8:192435–192456.

Yuanfei Dai, Shiping Wang, Neal N Xiong, and Wenzhong Guo. 2020. A survey on knowledge graph embedding: Approaches, applications and benchmarks. *Electronics*, 9(5):750.

Jinhua Du, Yan Huang, and Karo Moilanen. 2021. Knowledge-aware Leap-LSTM: Integrating prior knowledge into Leap-LSTM towards faster long text classification. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 12768–12775.

Yuwei Fang, Shuohang Wang, Yichong Xu, Ruochen Xu, Siqi Sun, Chenguang Zhu, and Michael Zeng. 2022. Leveraging knowledge in multilingual commonsense reasoning. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3237–3246.

Xu Han, Shulin Cao, Xin Lv, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. 2018. OpenKE: An open toolkit for knowledge embedding. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations*, pages 139–144.

Zhengbao Jiang, Frank F Xu, Jun Araki, and Graham Neubig. 2020.
How can we know what language models know? *Transactions of the Association for* Computational Linguistics, 8:423–438. Bosung Kim, Taesuk Hong, Youngjoong Ko, and Jungyun Seo. 2020. Multi-task learning for knowledge graph completion with pre-trained language models. In *Proceedings of the 28th International* Conference on Computational Linguistics, pages 1737–1743. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic web, 6(2):167–195. Junyi Li, Tianyi Tang, Wayne Xin Zhao, and Ji-Rong Wen. 2021. Pretrained language models for text generation: A survey. *arXiv preprint arXiv:2105.10311*. Bill Yuchen Lin, Seyeon Lee, Xiaoyang Qiao, and Xiang Ren. 2021a. Common sense beyond english: Evaluating and improving multilingual language models for commonsense reasoning. arXiv preprint arXiv:2106.06937. Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021b. Pretrained transformers for text ranking: Bert and beyond. Synthesis Lectures on Human Language Technologies, 14(4):1–325. Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, and Jie Zhou. 2022. Do pretrained models benefit knowledge graph completion? a reliable evaluation and a reasonable approach. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3570–3581. Christian Meilicke, Melisachew Wudage Chekol, Daniel Ruffinelli, and Heiner Stuckenschmidt. 2019. Anytime bottom-up rule learning for knowledge graph completion. In *Proceedings of the 28th International* Joint Conference on Artificial Intelligence, pages 3137–3143. Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 30. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473. Apoorv Saxena, Adrian Kochsiek, and Rainer Gemulla. 2022. Sequence-to-sequence knowledge graph completion and question answering. arXiv preprint arXiv:2203.10321. Sanket Shah, Anand Mishra, Naganand Yadati, and Partha Pratim Talukdar. 2019. Kvqa: Knowledgeaware visual question answering. In *Proceedings of* the AAAI conference on artificial intelligence, volume 33, pages 8876–8884. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In *International conference on machine learning*, pages 2071– 2080. PMLR. Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. 2019. On mutual information maximization for representation learning. In *International Conference on Learning Representations*. Denny Vrandeciˇ c and Markus Krötzsch. 2014. Wiki- ´ data: a free collaborative knowledgebase. *Communications of the ACM*, 57(10):78–85. Hongwei Wang, Fuzheng Zhang, Mengdi Zhang, Jure Leskovec, Miao Zhao, Wenjie Li, and Zhongyuan Wang. 2019. 
Knowledge-aware graph neural networks with label smoothness regularization for recommender systems. In *Proceedings of the 25th ACM* SIGKDD international conference on knowledge discovery & data mining, pages 968–977. Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724– 2743. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI conference on artificial intelligence, volume 28. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Xin Xie, Ningyu Zhang, Zhoubo Li, Shumin Deng, Hui Chen, Feiyu Xiong, Mosha Chen, and Huajun Chen. 2022. From discrimination to generation: Knowledge graph completion with generative transformer. arXiv preprint arXiv:2202.02113. Hongbin Ye, Ningyu Zhang, Hui Chen, and Huajun Chen. 2022. Generative knowledge graph construction: A review. *arXiv preprint arXiv:2210.12714*. Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, and Lidong Bing. 2020. An unsupervised sentence embedding method by mutual information maximization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1601–1610. Wenxuan Zhou, Fangyu Liu, Ivan Vulic, Nigel Collier, ´ and Muhao Chen. 2022. Prix-LM: Pretraining for multilingual knowledge base construction. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 5412–5424, Dublin, Ireland. Association for Computational Linguistics. Yucheng Zhou, Xiubo Geng, Tao Shen, Wenqiang Zhang, and Daxin Jiang. 2021. Improving zero-shot cross-lingual transfer for multilingual question answering over knowledge graph. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5822–5834. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✗ A2. Did you discuss any potential risks of your work? Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Not applicable. Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4.2 ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4.2 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
wang-etal-2023-towards-better
Towards Better Hierarchical Text Classification with Data Generation
https://aclanthology.org/2023.findings-acl.489
Hierarchical text classification (HTC) focuses on classifying one text into multiple labels, which are organized as a hierarchical taxonomy. Due to its wide involvement in realistic scenarios, HTC attracts long-term attention from both industry and academia. However, the high cost of hierarchical multi-label annotation makes HTC suffer from the data scarcity problem. In view of the difficulty in balancing the controllability of multiple structural labels and text diversity, automatically generating high-quality data for HTC is challenging and under-explored. To fill this blank, we propose a novel data generation framework tailored for HTC, which can achieve both label controllability and text diversity by extracting high-quality semantic-level and phrase-level hierarchical label information. Experimental results on three benchmarks demonstrate that, compared with existing data augmentation methods, the data generated from our method can bring the most significant performance improvements for several strong HTC models. Extensive analysis confirms that the improvements yielded by our proposed method do correlate with the enhancement of label controllability and text diversity.
# Towards Better Hierarchical Text Classification With Data Generation Yue Wang1,2∗**, Dan Qiao**1∗ , Juntao Li1† , Jinxiong Chang2, Qishen Zhang2, Zhongyi Liu2, Guannan Zhang2, **Min Zhang**1 1School of Computer Science and Technology, Soochow University 2Ant Group ywangnlp@stu.suda.edu.cn ## Abstract Hierarchical text classification (HTC) focuses on classifying one text into multiple labels, which are organized as a hierarchical taxonomy. Due to its wide involution in realistic scenarios, HTC attracts long-term attention from both industry and academia. However, the high cost of hierarchical multi-label annotation makes HTC suffer from the data scarcity problem. In view of the difficulty in balancing the controllability of multiple structural labels and text diversity, automatically generating high-quality data for HTC is challenging and under-explored. To fill this blank, we propose a novel data generation framework tailored for HTC, which can achieve both label controllability and text diversity by extracting high-quality semanticlevel and phrase-level hierarchical label information. Experimental results on three benchmarks demonstrate that, compared with existing data augmentation methods, the data generated from our method can bring the most significant performance improvements of several strong HTC models. Extensive analysis confirms that the improvements yielded by our proposed method do correlate to the enhancement of label controllability and text diversity. 1 ## 1 Introduction Hierarchical Text Classification (HTC) is a representative multi-label text classification problem, aiming to assign one text with multiple labels in a given label hierarchical taxonomy. HTC is widely involved in realistic scenarios, e.g., news classification (Lewis et al., 2004; Sandhaus, 2008), science paper classification (Kowsari et al., 2017), Ecommerce (Gao, 2020). To solve such an important and challenging task, adequate high-quality labeled data is indispensable. However, multi-labeled data ∗Equal contribution. Work is done during the internship of Yue Wang at Ant Group. †Corresponding author 1Our codes are publicly available at https://github. com/wangyuenlp/Data-Generation-for-HTC. annotation is usually expensive, and the hierarchical structure of multi-label further makes such annotation unaffordable. Thus, the primary goal of our approach is to deal with the data scarcity problem. Existing works (Kowsari et al., 2017; Shimura et al., 2018; Banerjee et al., 2019; Zhou et al., 2020; Wang et al., 2022c,d; Jiang et al., 2022) focus on enhancing the model ability to relieve the need for annotated data, but few of them address this problem by automatically generating high-quality data. Recently, since generative Pre-trained Language Models (PLMs) have achieved surprising performance (Brown et al., 2020; Lewis et al., 2020; Raffel et al., 2020), generative data augmentation has drawn more and more attention (Anaby-Tavor et al., 2020; Schick and Schütze, 2021; Wang et al., 2022a; Meng et al., 2022). With the help of the rich knowledge obtained from the large-scale data in the pre-training stage, generative data augmentation can further improve the quality of the generated data, which can better alleviate the data scarcity problem. However, generating high-quality data for HTC is still an under-explored problem. We argue that two main challenges exist for this problem: the need for label controllability and text diversity. 
If the meaning of the generated data is out of the control of the given labels, it is usually noisy data assigned with wrong labels, which does little help and even harms HTC models. Besides, the generated data may not improve the generalization ability of HTC models if the expression is similar to the original text. Intuitively, restricted label constraints can ensure label controllability but might impede the text diversity of the generated data, especially in HTC, which has multiple label constraints in a hierarchical structure (Kumar et al., 2021). Therefore, achieving a good balance between controllability and diversity is the key to generating high-quality data for HTC. To improve label controllability and text diversity of generated data, we propose a novel data generation framework for HTC, which aims to enrich the input information of the data generation model. Specifically, we first design the semanticlevel and phrase-level label information enhanced prompt, consisting of label names and phrases extracted from origin training samples. Although label names can improve label controllability, to a certain extent, only leveraging label names makes the model inevitably generate similar data since the same input label name needs to create hundreds and even thousands of data samples. Thus, the phrases extracted from original train samples can be used as a supplement to label names. To ensure label controllability and text diversity, we need the phrases to represent the critical information that can infer the origin label correctly and simultaneously avoid common expressions. Therefore, to deal with these problems, we propose a hierarchical label information enhanced keyword extractor to extract the most relevant phrase to the label meaning. Finally, to further improve the label controllability, we introduce consistency filtering to filter out the low-quality generated data, which makes the HTC model predict different labels compared to the controlled labels. Experimental results on three benchmarks confirm the effectiveness of our proposed method. Furthermore, both quantitative and qualitative analyses show the superiority of our method over existing data augmentation methods in comparing label controllability and text diversity. In a nutshell, our contributions are as follows: - To the best of our knowledge, we are the first to explore the effectiveness of generative data augmentation methods on the HTC problem. - We propose a data generation framework for HTC, which leverages both semantic-level and phrase-level hierarchical label information to enhance the label controllability and text diversity of the generated data. - We confirm the effectiveness of our methods over several HTC models on three benchmarks, which can bring 1.39, 1.37, and 1.25 Macro-F1 improvements of BERT-base, RoBERTa-base, and state-of-the-art HTC model HGCLG on WOS, respectively. ## 2 Related Work Hierarchical Text Classification Compared to multi-label text classification, due to the unique label hierarchical taxonomy, the previous work on HTC focus on fully using the hierarchical information of the label taxonomy. Kowsari et al. (2017); Shimura et al. (2018); Wehrmann et al. (2018); Banerjee et al. (2019) train different classifiers for different nodes or levels and transfer the knowledge of the parent nodes' classifiers to the child nodes, which are called local approaches (Zhou et al., 2020). Different from local approaches, global approaches treat HTC as a flat multi-label text classification problem. 
Due to the lack of hierarchical information in the classifier, the global approaches focus on utilizing hierarchical information to further improve performance. Mao et al. (2019) use hierarchical information to enhance the Label Assignment Policy and propose a deep reinforcement learning-based general framework for HTC; Wu et al. (2019) utilize a meta-learner to deal with the complexity and dependencies of different labels; Aly et al. (2019) introduce capsule networks for HTC and utilize label co-occurrence to better initialize weights; Deng et al. (2021) propose text-label mutual information maximization and label prior matching to filter out irrelevant information and learn better hierarchy-aware representations; Rojas et al. (2020) define HTC as a sequence-to-sequence problem and utilize an auxiliary synthetic task and external knowledge to improve performance further. Recently, there has been a growing focus on leveraging a structure encoder to model the hierarchical information (Zhou et al., 2020; Chen et al., 2021; Wang et al., 2021a, 2022c,d; Jiang et al., 2022). Different from previous work, we focus on introducing generative data augmentation to address the data scarcity problem of HTC and use a structure encoder to model hierarchical label information to improve the quality of the generated data.

Generative Data Augmentation With the development of text generation models, generative data augmentation has gained more and more attention from the community. Existing works apply generative data augmentation methods to various NLP downstream tasks, including multi-class text classification (Malandrakis et al., 2019; Liu et al., 2020; Anaby-Tavor et al., 2020), multi-label text classification (Zhang et al., 2020), text entailment (Vu et al., 2021; Wang et al., 2022b), relation extraction (Lee et al., 2021), sequence labeling (Wang et al., 2022a), intent classification and slot tagging (Lee et al., 2021; Rosenbaum et al., 2022), etc. Besides, there is also a line of work focused on generating new data without fine-tuning PLMs to solve various tasks in the zero-shot setting, also called dataset generation (Schick and Schütze, 2021; Wang et al., 2021b; Meng et al., 2022; Ye et al., 2022). However, the effectiveness of generative data augmentation methods on the HTC problem is under-explored. Apart from the difference in applied tasks, existing works also differ in model architecture, objective functions, and learning paradigms. Specifically, Malandrakis et al. (2019) propose a conditional variational autoencoder-based controlled text generation model for data augmentation; Liu et al. (2020) utilize reinforcement learning to guide the conditional generation; Vu et al. (2021) leverage a data generation model to generate unlabeled synthetic data and conduct self-training; Lee et al. (2021) propose an example extrapolator to generate new labeled synthetic data from existing examples. Wang et al. (2022a) propose a soft prompt-based data generation model for low-resource scenarios. Wang et al. (2022b) unify diverse NLP tasks into a text-to-text format to pre-train a multi-task data generation PLM. In this work, we focus on using generative data augmentation to bring benefits to HTC models in the full-supervised setting and improve the quality of generated data by incorporating hierarchical label information.

## 3 Method

In this section, we first give the task formulation for HTC and the goal of data generation.
Then, we introduce our framework, consisting of the semantic-level and phrase-level label information enhanced prompt, the hierarchical label information enhanced keyword extractor, and consistency filtering.

## 3.1 Task Formulation

For the HTC task, given a sample $(x, y)$ from the training set $D_{train}$, the HTC model is trained to predict the label set $y$ according to the text $x$. Here $y$ is a subset of the label set $Y$, which is organized as a directed acyclic graph $G = (Y, E)$, where the edge set $E$ denotes the hierarchical information of $Y$. The data generation model aims to make the final HTC model perform better than the model trained only on $D_{train}$. To achieve this goal, the data generation model can utilize $D_{train}$ to generate a synthetic set $D_{gen}$, whose size is usually larger than that of $D_{train}$. The training data of the final HTC model consists of both $D_{train}$ and $D_{gen}$.

## 3.2 Overview Of Our Framework

Our framework consists of three models: the data generation model (Sec. 3.3), the keyword extractor (Sec. 3.4), and the filter model (Sec. 3.5). We show an illustration of our framework in Figure 1. In general, we first use $D_{train}$ to train the keyword extractor and the filter model. Then we use the keyword extractor to extract keywords $\hat{k}$ for each instance in $D_{train}$. Afterward, to train the data generation model, we organize the label names and extracted keywords $\hat{k}$ of each instance in $D_{train}$ as the input sequence and the original text $x$ as the target sequence. In the inference stage of the data generation model, we utilize a sampling-based decoding strategy to generate different texts according to the label names and extracted keywords $\hat{k}$ of instances in $D_{train}$. Finally, we use the filter model to remove low-quality generated data. In the following, we describe our framework in detail.

## 3.3 Semantic-Level And Phrase-Level Label Information Enhanced Prompt

To improve label controllability and text diversity, we propose the semantic-level and phrase-level label information enhanced prompt, consisting of label names and keywords extracted from the samples of $D_{train}$. Both label names and keywords can improve label controllability by providing important label information to the data generation model at the semantic and phrase levels. Besides, due to the diversity of phrase expressions under the same semantics, providing keywords extracted from different samples with the same label can also help the data generation model to generate diverse data. Specifically, given a sample $(x, y)$, we organize the input of the data generation model as *'generate with label: $y$; generate with keywords: $\hat{k}$'* and the target output as $x$, where $\hat{k}$ refers to the keywords extracted from $x$. In the inference stage, we follow the same prompt format to generate candidate data, whose keywords are also extracted from samples of $D_{train}$. We shuffle the order of the keywords at both the training and inference stages to avoid over-fitting.

## 3.4 Hierarchical Label Information Enhanced Keyword Extractor

The quality of the keywords is important to balance label controllability and text diversity. Biased phrases irrelevant to the label meaning may mislead the data generation model and harm label controllability. Excessively unimportant phrases unrelated to the labels of samples may steer the generation toward rigid expressions, which harms diversity. To improve the quality of the extracted keywords, we propose the hierarchical label information enhanced keyword extractor. For a sample $(x, y)$, we want to extract a sub-sequence $\hat{k}$ that consists of the most relevant keywords for the corresponding multi-label $y$. Specifically, we first use Graphormer (Ying et al., 2021) to model the hierarchical information $G = (Y, E)$ and obtain the label embedding $l_j$ for each node $y_j \in Y$.
Formally:

$$\{l_{1},l_{2},\ldots,l_{n}\}=\mathrm{Graphormer}(Y,E).$$

Next, for the given input $x$, we obtain the text embedding of each token by:

$$\{t_{1},t_{2},\ldots,t_{n}\}=\mathrm{BERT}_{emb}(x),$$

where $\mathrm{BERT}_{emb}(x)$ denotes using BERT (Devlin et al., 2019) to encode the given sentence $x$. We then utilize the attention mechanism to capture the relevance between token $t_i$ and label $l_j$ as:

$$Q_{i}=t_{i}W_{Q},\quad K_{j}=l_{j}W_{K},\quad A_{ij}=\frac{Q_{i}K_{j}^{T}}{\sqrt{d_{h}}},$$

where $W_{Q}\in\mathbb{R}^{d_{h}\times d_{h}}$ and $W_{K}\in\mathbb{R}^{d_{h}\times d_{h}}$ are two weight matrices. Afterward, we use Gumbel-Softmax (Jang et al., 2016) to calculate the probability that $t_i$ is a keyword of class $j$ by:

$$P_{ij}=\mathrm{gumbel\_softmax}\,(A_{i1},A_{i2},\ldots,A_{ik})_{j}\,,$$

which satisfies $\sum_{j} P_{ij} = 1$. Therefore, the relevance score between token $t_i$ and a multi-label $y$ can be calculated as:

$$P_{i}=\sum_{j\in y}P_{ij}.$$

Intuitively, if the extracted sub-sequence contains the most relevant information, a classifier can infer the original label correctly from it and predict null labels from the rest of the sequence. This intuition guides us to design the final loss function $\mathcal{L}$. Specifically, with the calculated $P_i$ for each token $t_i$, we first select the sequence of the most relevant keywords $\hat{x}_p$ and the sequence of the least relevant keywords $\hat{x}_n$: $\hat{x}_p$ contains the top 15% of tokens with the highest $P_i$, while $\hat{x}_n$ contains the 15% of tokens with the lowest $P_i$. For better training, in practice we generate $\hat{x}_p$ and $\hat{x}_n$ from $x$ by masking the rest of the tokens. Finally, we encourage the classifier $T$ to assign $\hat{x}_p$ the original multi-label $y$ and $\hat{x}_n$ the all-zero label $y_{non}$ with the final loss function:

$$\mathcal{L}=\ell(T(\hat{x}_{p}),y)+\ell(T(\hat{x}_{n}),y_{non}),$$

where $\ell$ denotes the binary cross-entropy loss for multi-label classification. We jointly update the parameters of the graph encoder Graphormer and the text encoder $\mathrm{BERT}_{emb}$. After training, for each sentence $x$, we directly use $\hat{x}_p$ as the extracted keywords $\hat{k}$ at the inference stage.

## 3.5 Consistency Filtering

Although the above methods improve label controllability as far as possible, they may still generate low-quality samples whose meaning contradicts the assigned label. This phenomenon is inevitable due to the pursuit of text diversity. To filter out these samples, we introduce consistency filtering (Anaby-Tavor et al., 2020). Specifically, we first use the training set $D_{train}$ to train an HTC model as the filter. Then, we use the filter to predict the labels of the generated candidates and compare the predicted labels with their assigned labels. Only the generated samples whose predicted labels are consistent with the assigned labels are kept, constituting $D_{gen}$. We combine $D_{gen}$ with $D_{train}$ to train the target HTC model from scratch.

## 4 Experiments

## 4.1 Datasets And Metrics

We conduct experiments on three HTC benchmarks: Web-of-Science (WOS) (Kowsari et al., 2017), The New York Times Annotated Corpus (NYT) (Sandhaus, 2008), and Reuters Corpus Volume I Version 2 (RCV1-V2) (Lewis et al., 2004). WOS is a science paper classification dataset, while NYT and RCV1-V2 are news classification datasets. We show the statistics of the three benchmarks in Table 1. We follow Zhou et al. (2020); Chen et al. (2021); Wang et al. (2022c) to pre-process the data and report Macro-F1 and Micro-F1 as the metrics of HTC models.

## 4.2 Baselines

To confirm the effectiveness of our methods, we compare their improvements with different data augmentation methods on multiple representative HTC models. Our baselines can be grouped into two types: HTC and data augmentation methods.
| Dataset | Train | Dev | Test | Classes | Avg(𝑦𝑖) | D | |-----------|---------|-------|---------|-----------|-------------|-----| | WOS | 30,070 | 7,518 | 9,397 | 141 | 2.0 | 2 | | NYT | 23,345 | 5,834 | 7,292 | 166 | 7.6 | 8 | | RCV1-V2 | 20,833 | 2,316 | 781,265 | 103 | 2.2 | 4 | HTC. We choose three baselines: **(1) BERT** (Devlin et al., 2019), **(2) RoBERTa** (Liu et al., 2019), and **(3) HGCLR** (Wang et al., 2022c). BERT and RoBERTa are popular PLMs, which are widely used as text encoders. HGCLR is a state-of-the-art HTC model which utilizes contrastive learning to incorporate hierarchical label information. Data Augmentation. We select five strong data augmentation methods for comparisons: (1) EDA (Wei and Zou, 2019) is a rule-based data augmentation method that uses four simple editing operations to disturb texts; **(2) Back Translation (BT)** (Sennrich et al., 2016) first translate the texts into other languages and then translate them to the original language; **(3) LAMBADA** (AnabyTavor et al., 2020) is a generative data generation method that generates new data according to label names; **(4) GDA** (Zhang et al., 2020) utilize generative PLMs to generate label-invariant perturbations on the texts; **(5) PromDA** (Wang et al., 2022a) is a state-of-the-art generative data augmentation method that uses an efficient fine-tuning technique to train the data generation model. ## 4.3 Implementation Details We implement all the data generation and HTC models with the open-sourced toolkit *Transformers*2(Wolf et al., 2020). For data generation models, we take the T5 (Raffel et al., 2020) as our backbone PLM and conduct fine-tuning from the publicly available checkpoint *t5-large* 3. In the fine-tuning stage, we use Adafactor (Shazeer and Stern, 2018) optimizer to update all parameters of T5 and set the learning rate as 1e-3 and batch size as 32. We fine-tune data generation models for 10 k steps, evaluate the perplexity on the development set every 1 k steps, and save the model with the lowest perplexity. 
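As a concrete (but hypothetical) illustration of how the prompt of Section 3.3 and the fine-tuning recipe above could be wired together with the Transformers library, consider the following sketch. The helper names `build_prompt` and `training_step` are ours, and details such as the batch-size-32 data loader and the 10k-step schedule are omitted for brevity:

```python
import random
import torch
from transformers import Adafactor, T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")
# Adafactor with lr 1e-3, as stated in Section 4.3.
optimizer = Adafactor(model.parameters(), lr=1e-3,
                      scale_parameter=False, relative_step=False)

def build_prompt(label_names, keywords):
    """'generate with label: ...; generate with keywords: ...' with
    shuffled keywords, following the format described in Section 3.3."""
    keywords = list(keywords)
    random.shuffle(keywords)
    return (f"generate with label: {', '.join(label_names)}; "
            f"generate with keywords: {', '.join(keywords)}")

def training_step(batch):
    """batch: list of (label_names, keywords, original_text) triples."""
    sources = [build_prompt(l, k) for l, k, _ in batch]
    targets = [t for _, _, t in batch]
    enc = tokenizer(sources, padding=True, truncation=True, return_tensors="pt")
    lab = tokenizer(targets, padding=True, truncation=True, return_tensors="pt")
    # Ignore padding positions in the cross-entropy loss.
    labels = lab.input_ids.masked_fill(lab.input_ids == tokenizer.pad_token_id, -100)
    loss = model(**enc, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

At inference time, the same prompt format would be fed to `model.generate` with Nucleus Sampling (top-p 0.9, several samples per input), as described in the next paragraph of Section 4.3.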
In the inference stage, 2https://github.com/huggingface/transformers 3https://huggingface.co/t5-large | Method | WOS | NYT | RCV1-V2 | | | | |-------------------------------------|---------------|---------------|----------------|---------------|---------------|----------------| | Macro-F1 | Micro-F1 | Macro-F1 | Micro-F1 | Macro-F1 | Micro-F1 | | | BERT-base (Devlin et al., 2019) | 79.87 ± 0.41 | 85.96 ± 0.23 | 65.69 ± 1.04 | 78.20 ± 0.21 | 66.85 ± 0.80 | 85.86 ± 0.14 | | +EDA (Wei and Zou, 2019) | 79.82 ± 0.32 | 85.87 ± 0.29 | 65.16 ± 0.50 | 77.17 ± 0.45 | 67.02 ± 0.12 | 85.47 ± 0.32 | | +BT (Sennrich et al., 2016) | 80.09 ± 0.52 | 86.02 ± 0.23 | 65.43 ± 0.39 | 77.46 ± 0.08 | 66.41 ± 0.54 | 85.38 ± 0.26 | | +LAMBADA (Anaby-Tavor et al., 2020) | 80.38 ± 0.33 | 86.28 ± 0.24 | 66.69 ± 0.26 | 78.18 ± 0.44 | 66.35 ± 0.57 | 86.00 ± 0.03 | | +GDA (Zhang et al., 2020) | 80.35 ± 0.37 | 86.26 ± 0.27 | 66.20 ± 0.71 | 78.04 ± 0.40 | 66.30 ± 0.77 | 85.59 ± 0.26 | | +PromDA (Wang et al., 2022a) | 80.05 ± 0.29 | 85.95 ± 0.33 | 65.94 ± 0.36 | 77.92 ±0.08 | 66.60 ± 0.38 | 85.34 ± 0.35 | | +Ours | 81.26 ± 0.23↑ | 86.77 ± 0.17↑ | 66.85 ± 0.43↑ | 78.34 ± 0.46↑ | 67.41 ± 0.49↑ | 86.06 ± 0.37 ↑ | | RoBERTa-base (Liu et al., 2019) | 79.94 ± 0.82 | 86.24 ± 0.45 | 67.77 ± 0.38 | 79.55 ± 0.27 | 68.63 ± 0.53 | 86.81 ± 0.14 | | +EDA (Wei and Zou, 2019) | 80.37 ± 0.58 | 86.32 ± 0.31 | 66.50 ± 0.38 | 78.48 ± 0.39 | 66.71 ± 1.44 | 86.10 ± 0.13 | | +BT (Sennrich et al., 2016) | 80.30 ± 0.26 | 86.38 ± 0.10 | 67.18 ± 0.67 | 78.90 ± 0.24 | 66.67 ± 0.82 | 86.47 ± 0.28 | | +LAMBADA (Anaby-Tavor et al., 2020) | 80.54 ± 0.35 | 86.49 ± 0.18 | 67.75 ± 0.32 | 79.41 ± 0.38 | 66.84 ± 0.45 | 86.47 ± 0.42 | | +GDA (Zhang et al., 2020) | 80.42 ± 0.42 | 86.57 ± 0.23 | 67.54 ± 0.36 | 79.43 ± 0.23 | 68.03 ± 0.31 | 86.50 ± 0.10 | | +PromDA (Wang et al., 2022a) | 80.15 ± 0.61 | 86.04 ± 0.31 | 67.32 ± 0.38 | 79.01 ± 0.13 | 67.43 ± 0.71 | 86.34 ± 0.17 | | +Ours | 81.31 ± 0.12↑ | 86.96 ± 0.13↑ | 68.16 ± 0.25↑ | 79.64 ± 0.17↑ | 68.50 ±0.42 | 86.84 ± 0.12↑ | | HGCLR (Wang et al., 2022c) | 80.82 ± 0.20 | 86.63 ± 0.27 | 66.90 ± 0.60 | 78.48 ± 0.48 | 66.64 ± 0.93 | 86.04 ± 0.12 | | +EDA (Wei and Zou, 2019) | 80.85 ± 0.45 | 86.78 ± 0.44 | 66.08 ± 0.36 | 77.53 ± 0.29 | 66.72 ± 0.32 | 85.20 ± 0.19 | | +BT (Sennrich et al., 2016) | 79.63 ± 0.61 | 85.89 ± 0.38 | 65.42 ± 0.50 | 77.37 ± 0.35 | 67.03 ± 0.57 | 85.72 ± 0.26 | | +LAMBADA (Anaby-Tavor et al., 2020) | 81.25 ± 0.24 | 87.08 ± 0.17 | 66.78 ± 0.42 | 78.22 ± 0.31 | 66.55 ± 0.37 | 85.58 ± 0.06 | | +GDA (Zhang et al., 2020) | 80.99 ± 0.50 | 86.84 ± 0.19 | 66.25 ± 0.37 | 77.99 ± 0.31 | 65.30 ± 0.49 | 85.13 ± 0.25 | | +PromDA (Wang et al., 2022a) | 80.60 ± 0.37 | 86.63 ± 0.23 | 66.08 ± 0.44 | 78.02 ± 0.10 | 66.88 ± 0.34 | 85.72 ± 0.20 | | +Ours | 82.07 ± 0.18↑ | 87.36 ± 0.08↑ | 67.69 ± 0.52 ↑ | 79.03 ± 0.24↑ | 67.90 ± 0.82↑ | 86.25 ± 0.20↑ | we use Nucleus Sampling (Holtzman et al., 2019) to generate diverse data. Specifically, we set Topp as 0.9 and get five independently sampled outputs for one input text. For all data augmentation methods, we augment data 5 times the number of samples in the original training set. For EDA, we use the official implentation 4. For BT, we use the open-sourced English to France machine translation model *Helsinki-NLP/opus-mt-en-fr* 5and France to English model *Helsinki-NLP/opus-mt-fren* 6. For HTC models, we use *bert-base-uncased*7 and *roberta-base*8as the initial checkpoints for BERT and RoBERTa respectively and completely follow Wang et al. 
(2022c) to set hyper-parameters. We report the average performance of HTC models using five different random seeds. ## 4.4 Main Results We report the performance of all baselines and our proposed method in Table 2. From the results, we can find that EDA and BT fail to improve the performance of the HTC models significantly, which shows simple perturbations on the original texts cannot help to improve the generalization ability of HTC models in the full-supervised setting. Existing generative data augmentation methods can help some HTC models achieve better performance, but the improvements are marginal and fail to improve performance on NYT and RCV1-V2. This phenomenon shows that generating diverse data satisfied with multiple label constraints for HTC is still challenging. With the help of the data generated from our proposed method, HTC models can achieve better performance in almost all settings. The improvements are more significant than other data augmentation methods. Besides, we also find the improvements of Macro-F1 brought by generative data augmentation methods are better than the Micro-F1, which we speculate results from the long-tail labels suffering from a more severe data scarcity problem. All these results confirm the effectiveness of our proposed methods to deal with the data scarcity problem for HTC. ## 4.5 Model Ablation We conduct a model ablation study and report the results in Table 3. We find that only providing label names or keywords to the data generation models harms the performance of HTC models. This result confirms our hypothesis that we need to deliver both semantic-level and phrase-level label information to enhance the ability of the data generation models. We also replace the proposed keyword extractor with yake (Campos et al., 2018a,b, 2020), a state-of-the-art unsupervised keyword extractor. The results show that without the help of the hier- Method **Macro-F1 Micro-F1** BERT 79.87 ± 0.41 85.96 ± 0.23 Ours 81.26 ± 0.23 86.77 ± **0.17** -remove keywords 80.38 ± 0.33 86.28 ± 0.24 -remove label names 80.21 ± 0.08 86.13 ± 0.14 -replaced with yake 80.73 ± 0.49 86.59 ± 0.20 -replaced with T5-base 80.87 ± 0.12 86.58 ± 0.17 Table 3: Model ablation results. We conduct experiments based on BERT-base on the WOS dataset and report the average performance of 5 different runs. ± denotes the standard deviation. Method **Self-BLEU Macro-F1 Micro-F1** EDA (Wei and Zou, 2019) 0.8132 79.82 ± 0.32 85.87 ± 0.29 BT (Sennrich et al., 2016) 0.9905 80.09 ± 0.52 86.02 ± 0.23 LAMBADA (Anaby-Tavor et al., 2020) 0.7184 80.38 ± 0.33 86.28 ± 0.24 GDA (Zhang et al., 2020) 0.6726 80.35 ± 0.37 86.26 ± 0.27 PromDA (Wang et al., 2022a) 0.6915 80.05 ± 0.29 85.95 ± 0.33 Ours 0.5987 81.26 ± 0.23 86.77 ± **0.17** archical label information, the extracted keywords may not be the words most relevant to the assigned labels, which harms the quality of the generated data. Besides, we also replace *t5-large* (770M parameters) with *t5-base* (220M parameters) 9to study the effect of the backbone generative PLMs. With the decrease in parameters, the performance of our proposed method drops slightly. Despite the drop, it still performs better than other data generation methods, which use *t5-large* as the backbone, compared to the results reported in Table 2. 
Furthermore, without the help of the unified data generation prompt format, Macro-F1 drops more significantly than Micro-F1, which shows our method can transfer the knowledge from head labels to longtail labels and thus can mitigate the sparse label distributions. ## 4.6 The Number Of Generated Data We also study the effect of the number of generated data. The results are shown in Figure 2. We can observe that when the number of generated data is smaller than 5 times the number of original train data, with the increase of the number of generated data, the performance of the HTC model improves. When bigger than 5 times the original training data, no steady growth trend is observed. Therefore, we generate 5 times the original training data in our experiments. Besides, we find that the change of Method w/o CF with CF Macro-F1 Micro-F1 **Macro-F1 Micro-F1** ![6_image_0.png](6_image_0.png) EDA (Wei and Zou, 2019) 79.87 ± 0.31 85.83 ± 0.27 79.82 ± 0.32 85.87 ± 0.29 BT (Sennrich et al., 2016) 79.95 ± 0.39 85.96 ± **0.23** 80.09 ± 0.52 86.02 ± 0.23 LAMBADA (Anaby-Tavor et al., 2020) 78.51 ± 0.37 85.55 ± 0.29 80.38 ± 0.33 86.28 ± 0.24 GDA (Zhang et al., 2020) 78.49 ± 0.51 85.34 ± 0.33 80.35 ± 0.37 86.26 ± 0.27 PromDA (Wang et al., 2022a) 78.92 ± 0.42 85.45 ± 0.31 80.05 ± 0.29 85.95 ± 0.33 Ours 79.37 ± 0.36 85.69 ± 0.34 81.26 ± 0.23 86.77 ± **0.17** Table 5: The effect of Consistency Filtering (denoted as ![6_image_1.png](6_image_1.png) CF). We report the average performance of the BERTbase on the WOS dataset when using these augmented data and with 5 random seeds. ± denotes the standard deviation. Figure 2: The effect of the number of generated data. We change the number of data generated from our proposed methods and report the performance of the BERTbase on WOS. **The number of generated data** refers to how many times we generate compared to the original training data. All results are the average of 5 runs. Macro-F1 is more significant than Micro-F1. We speculate it also results from the generated data doing more help to the long-tailed data. ## 4.7 The Diversity Of The Generated Data We use Self-BLEU (Zhu et al., 2018) as the metric to conduct a quantitative analysis of the diversity of the generated data. Because we want the generated data to diversify as much as possible under the condition of meeting the need for label controllability, we report the Self-BLEU score of filtered data after using consistency filtering. From the results in Table 4, we can observe that the Self-BLEU score of EDA and BT is significantly higher than generative data augmentation methods, and our proposed method achieves the lowest Self-BLEU score. These results further confirm the effectiveness of our proposed method to improve text diversity while ensuring label controllability. ## 4.8 The Effect Of Consistency Filtering We also study the effect of consistent filtering and report the results in Table 5. 
| Method | Macro-F1 | Micro-F1 | Total Extra Cost | Filter Model Cost | Data Generation Model Cost | Keyword Extractor Cost |
|---|---|---|---|---|---|---|
| BERT-base (Devlin et al., 2019) | 79.87 ± 0.41 | 85.96 ± 0.23 | - | - | - | - |
| EDA (Wei and Zou, 2019) | 79.82 ± 0.32 | 85.87 ± 0.29 | 4 GPU hours on 80GB NVIDIA A100 | 4 GPU hours on 80GB NVIDIA A100 | - | - |
| BT (Sennrich et al., 2016) | 80.09 ± 0.52 | 86.02 ± 0.23 | 4 GPU hours on 80GB NVIDIA A100 | 4 GPU hours on 80GB NVIDIA A100 | - | - |
| LAMBADA (Anaby-Tavor et al., 2020) | 80.38 ± 0.33 | 86.28 ± 0.24 | 30 GPU hours on 80GB NVIDIA A100 | 4 GPU hours on 80GB NVIDIA A100 | 26 GPU hours on 80GB NVIDIA A100 | - |
| GDA (Zhang et al., 2020) | 80.35 ± 0.37 | 86.26 ± 0.27 | 30 GPU hours on 80GB NVIDIA A100 | 4 GPU hours on 80GB NVIDIA A100 | 26 GPU hours on 80GB NVIDIA A100 | - |
| PromDA (Wang et al., 2022a) | 80.05 ± 0.29 | 85.95 ± 0.33 | 30 GPU hours on 80GB NVIDIA A100 | 4 GPU hours on 80GB NVIDIA A100 | 26 GPU hours on 80GB NVIDIA A100 | - |
| Ours | 81.26 ± 0.23 | 86.77 ± 0.17 | 34 GPU hours on 80GB NVIDIA A100 | 4 GPU hours on 80GB NVIDIA A100 | 26 GPU hours on 80GB NVIDIA A100 | 4 GPU hours on 80GB NVIDIA A100 |

Table 6: Computational cost analysis of different data augmentation methods. **Total Extra Cost** is the total extra computational cost introduced by each data augmentation method compared to not using augmentation, i.e., the sum of the filter model cost, data generation model cost, and keyword extractor cost. To show the trade-off between performance and cost, we also report the performance of BERT-base on the WOS dataset with each augmentation method. ± denotes the standard deviation.
| Original Label: | Medical, Idiopathic Pulmonary Fibrosis | |-------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Original Text: | the present study investigated the effects of diesel exhaust (de) on an experimental model of bleomycin (blm)-induced lung injury and fibrosis in mice. blm was intravenously administered to both nrf2(+/+) and nrf2(-/-) c57bl/6j mice on day 0. the mice were exposed to de for 56 days from 28 days before the blm injection to 28 days after the blm injection. inhalation of de induced significant inhibition of airway clearance function and the proinflammatory cytokine secretion in macrophages, an increase in neutrophils, and severe lung inflammatory injury, which were greater in nrf2(-/-) mice than in nrf2(+/+) mice. in contrast, inhalation of de was observed to induce a greater increase of hydroxyproline content in the lung tissues and significantly higher pulmonary antioxidant enzyme mrna expression in the nrf2(+/+) mice than in nrf2(-/-) mice. de is an important risk factor, and nrf2 regulates the risk of a de inhalation induced immune response during blm lung injury and fibrosis in mice. 
| | EDA Aug Data: | the present study investigated the effects of diesel exhaust de on an experimental model part of bleomycin blm induced lung injury and fibrosis in mice blm was intravenously administered to both nrf and nrf c bl j diamond state mice on day the mice were exposed to de for days make from days before the blm injection to days after the blm injection inhalation of de part induced significant inhibition of airway clearance function discrimination and the proinflammatory cytokine secretion in depicted object macrophages an increase in neutrophils and severe lung inflammatory injury which clarence day were canvass greater in nrf mice than in nrf mice in contrast inhalation of de was observed to induce a greater increase of hydroxyproline content in the lung tissues and significantly higher pulmonary antioxidant enzyme mrna expression angstrom in the nrf mice secernment than division in nrf mice de is an open demarcation important risk factor growth and nrf regulates the risk of a de inhalation induced immune response during blm lung injury and fibrosis significant in mice | | BT Aug Data: | The effects of diesel exhaust gas (de) on an experimental model of bleomycin (blm)- and fibrosis induced lung lesions in mice were studied. blm was administered intravenously to both nrf2(+/+) and nrf2(-) c57bl/6j mice on day 0. mice were exposed for 56 days from 28 days prior to blm injection at 28 days after blm injection. inhalation of the significantly induced inhibition of respiratory clearance function and proinflammatory cytokine secretion in macrophages, an increase in neutrophils and severe pulmonary lesions, which were higher in nrf2(–) mice than in nrf2(+/+) mice. | | Yake Keywords: | present study investigated, diesel exhaust, model of bleomycin, investigated the effects,effects of diesel, experimental model, mice, blm, blm injection, lung, injury, days, present, fibrosis, lung inflammatory injury | | Yake Aug Data: | the present study investigated the effects of diesel exhaust (de) on lung fibrosis using a model of blm injection. we observed that de causes lung fibrosis in mice by observing an increase in de lung fibrosis and that de cannot be seen until a week or more after the injection. compared to the model of de, de causes lung fibrosis and fibrosis with similar effect, and also reduces lung fibrosis without affecting lung function. the experimental model of de was able to detect changes in lung fibrosis in mice with de through three weeks, seven weeks and seven months. the data indicate that de causes lung fibrosis in mice mainly through increased de of lung fibrosis. it was also shown that de causes lung fibrosis in mice by increasing de lung fibrosis and that de causes a loss of lung function in mice. these results suggest that de causes lung fibrosis in mice by using de. the present study has the potential to help design treatment programs for de, thus giving consideration to de process the blm pipeline. (c) 2017 the authors. published by elsevier ltd. | | Ours Keywords: | bleomycin, lung, injury, fibrosis, intravenously, inhalation, cytokine, neutrophils, inflammatory, pulmonary | | Ours Aug Data: | background: respiratory disease may induce acute respiratory distress syndrome (ards) in allergic individuals. recent studies have shown that bronchopulmonary infarction is associated with ards. the purpose of this study is to examine the effect of lung injury on the innate immune system and to identify the underlying mechanisms in a patient with ards. 
methods: this study was conducted in two hospitals. the first was a single-centre randomized trial comparing two treatment groups: an exposure to a stw 3 tidal cardiopulmonary exercise (tph-ip) and a control group that received regular bleomycin inhalation for 6 weeks. the second was a three-level mixed-effects model that involves three different innate immune cells in response to tph2 and cytokines. the inhalation of the tph2 and tph3 cells was studied by in vitro cytokine profiling. data were analyzed by the chi-square test, pearson correlation coefficients, and poisson's ratio. results: the tph2 and tph3 groups induced different levels of apoptosis, neutrophil and macrophage infiltration, and proliferation. the lung injury induced lower respiratory tract injury than the control group, which were asymptomatic for the dna damage in the tph2 cells. the tph2 cell infusion induced the highest tph2 and cytokine concentrations, and the tph3 cells infiltrating the lung tissue. the lung injury caused dna damage, as confirmed by the cytokine profiling. conclusions: tph2 and tph3 cells inhalation induced higher ards and correlated with the inflammatory response, as determined by the mrna expression and protein levels in vivo. we observed that the neutrophils and macrophages were induced to respond in the same way to tph2. we therefore conclude that tph2 plays a crucial role in the ards through triggering cellular cytokine activation. |

Table 7: Case analysis of the extracted keywords and the data generated by different augmentation methods on a WOS example.

EDA and BT achieve no improvement after consistency filtering, which shows that simple perturbations may not make the texts deviate from the original semantics; consequently, little noisy data with wrong labels exists in the data augmented by EDA and BT. Although generative data augmentation methods achieve significantly larger improvements than EDA and BT after consistency filtering, they perform even worse before it. This phenomenon likely results from noisy data with wrong labels, which is inevitable given the pursuit of diversity. These results show that consistency filtering is indispensable for generative data augmentation methods to balance label controllability and text diversity. Besides, compared to existing data generation methods, our proposed method achieves the best performance with or without consistency filtering, which further confirms its effectiveness.

## 4.9 Computation Cost Analysis

We conduct a computation cost analysis of the different data augmentation methods and report it in Table 6. Despite their small extra computational cost, EDA and BT fail to improve the performance of the HTC models significantly. Furthermore, with similar computation costs, the other generative data augmentation methods do not bring as large improvements as ours. These results show that increasing the computational cost is not a sufficient condition for improving performance, which further underlines the technical contribution of our method.

## 4.10 Case Analysis

In Table 7, we conduct a case analysis of the extracted keywords and the generated data. The data augmented by EDA and BT overlap heavily with the original text, whereas the data from generative augmentation methods rarely do. As for the extracted keywords, we observe that yake may pick words that are irrelevant to the label information, e.g., 'present study investigated, investigated the effects, days, present'. Such uninformative keywords cannot improve label controllability and hinder the diversity of the generated data, causing the generated data to have very similar semantics to the original sample.
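For reference, the yake baseline discussed above can be run in a few lines; this is a sketch, and the parameter choices are illustrative rather than the exact settings used in our comparison. The input snippet is taken from the original abstract in Table 7.

```python
# Sketch of the unsupervised yake baseline used for comparison (parameters are illustrative).
import yake

extractor = yake.KeywordExtractor(lan="en", n=3, top=15)  # up to 3-gram keywords, top 15
abstract = ("the present study investigated the effects of diesel exhaust (de) on an "
            "experimental model of bleomycin (blm)-induced lung injury and fibrosis in mice. ...")
keywords = extractor.extract_keywords(abstract)  # list of (keyword, score); lower score = more relevant
print([kw for kw, score in keywords])
# Unlike our hierarchy-aware extractor, yake has no access to the label hierarchy,
# so generic phrases such as "present study investigated" can rank highly.
```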
With the help of the hierarchical label information and the loss function we designed, the keywords extracted by our method capture the most important information that causes the original text to be annotated with the original labels. Therefore, the data generated by our proposed method can satisfy the label constraints while differing from the original texts at both the semantic and grammatical levels. We show more generated data in Appendix C.

## 5 Conclusion

HTC is an important and challenging task that usually suffers from the data scarcity problem because of the high cost of hierarchical multi-label annotation. With the development of generative PLMs, generating high-quality data for HTC becomes possible. In this paper, to deal with the data scarcity problem, we explore the effect of generative augmentation methods on HTC models. To improve label controllability and text diversity, we propose a novel data generation framework for HTC, which consists of a semantic-level and phrase-level label information enhanced prompt, a hierarchical label information enhanced keyword extractor, and consistency filtering. Despite the challenge of generating high-quality data for HTC, our proposed framework strikes a balance between the controllability of multiple hierarchical labels and text diversity and improves the performance of several strong HTC models, as demonstrated by experimental results on three HTC benchmarks and comprehensive analysis.

## 6 Limitations

Despite the effectiveness of our proposed method, it still has two main limitations: **(1)** Generative data augmentation methods need to fine-tune the backbone generative PLMs on the original HTC training set and then run an inference stage to generate data. Both the training and inference stages require additional GPU resources, which increases carbon emissions. Although data generation is usually completed offline and does not add to the time cost of the online process, we leave reducing the need for GPU resources to future work. **(2)** Although we conduct experiments on three widely used HTC benchmarks, all of them are in English, which has limited morphology. The effectiveness of our proposed method on languages with richer morphology remains to be confirmed.

## 7 Ethics Statement

The cases shown in this paper are generated automatically by different data augmentation methods and do not represent the viewpoints of the authors. Due to social bias and a lack of professional knowledge, the data generated by generative PLMs may contain misleading or toxic information, which needs to be addressed before being applied in realistic scenarios. Besides, all data generated by our proposed method are only for scientific research. Finally, we provide comprehensive details of our model implementation and upload the source code, which guarantees the reproducibility of our experimental results.

## Acknowledgements

This work is supported by the National Science Foundation of China (NSFC No. 62206194), the Natural Science Foundation of Jiangsu Province, China (Grant No. BK20220488), and JSSCBS20210661. This work is also supported by Beijing Academy of Artificial Intelligence (BAAI) and Ant Group through the Ant Innovative Research Program.

## References

Rami Aly, Steffen Remus, and Chris Biemann. 2019. Hierarchical multi-label classification of text with capsule networks.
In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics: Student Research Workshop, pages 323– 330. Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. 2020. Do not have enough data? deep learning to the rescue! In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 7383–7390. Siddhartha Banerjee, Cem Akkaya, Francisco PerezSorrosal, and Kostas Tsioutsiouliklis. 2019. Hierarchical transfer learning for multi-label text classification. In *Proceedings of the 57th Annual Meeting of* the Association for Computational Linguistics, pages 6295–6300. Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *arXiv preprint arXiv:2005.14165*. Ricardo Campos, Vítor Mangaravite, Arian Pasquali, Alípio Jorge, Célia Nunes, and Adam Jatowt. 2020. Yake! keyword extraction from single documents using multiple local features. *Information Sciences*, 509:257–289. Ricardo Campos, Vítor Mangaravite, Arian Pasquali, Alípio Mário Jorge, Célia Nunes, and Adam Jatowt. 2018a. A text feature based automatic keyword extraction method for single documents. In *European* conference on information retrieval, pages 684–691. Springer. Ricardo Campos, Vítor Mangaravite, Arian Pasquali, Alípio Mário Jorge, Célia Nunes, and Adam Jatowt. 2018b. Yake! collection-independent automatic keyword extractor. In *European Conference on Information Retrieval*, pages 806–810. Springer. Haibin Chen, Qianli Ma, Zhenxi Lin, and Jiangyue Yan. 2021. Hierarchy-aware label semantics matching network for hierarchical text classification. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4370–4379. Zhongfen Deng, Hao Peng, Dongxiao He, Jianxin Li, and Philip Yu. 2021. HTCInfoMax: A global model for hierarchical text classification via information maximization. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 3259–3265, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Dehong Gao. 2020. Deep hierarchical classification for category prediction in e-commerce system. In Proceedings of The 3rd Workshop on e-Commerce and NLP, pages 64–68. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. In International Conference on Learning Representations. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. *arXiv* preprint arXiv:1611.01144. Ting Jiang, Deqing Wang, Leilei Sun, Zhongzhi Chen, Fuzhen Zhuang, and Qinghong Yang. 2022. Exploiting global and local hierarchies for hierarchical text classification. *arXiv preprint arXiv:2205.02613*. Kamran Kowsari, Donald E Brown, Mojtaba Heidarysafa, Kiana Jafari Meimandi, Matthew S Gerber, and Laura E Barnes. 2017. 
Hdltex: Hierarchical deep learning for text classification. In 2017 16th IEEE international conference on machine learning and applications (ICMLA), pages 364–371. IEEE. Sachin Kumar, Eric Malmi, Aliaksei Severyn, and Yulia Tsvetkov. 2021. Controlled text generation as continuous optimization with multiple constraints. Advances in Neural Information Processing Systems, 34:14542–14554. Kenton Lee, Kelvin Guu, Luheng He, Tim Dozat, and Hyung Won Chung. 2021. Neural data augmentation via example extrapolation. arXiv preprint arXiv:2102.01335. David D Lewis, Yiming Yang, Tony Russell-Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research, 5(Apr):361–397. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880. Ruibo Liu, Guangxuan Xu, Chenyan Jia, Weicheng Ma, Lili Wang, and Soroush Vosoughi. 2020. Data boost: Text data augmentation through reinforcement learning guided conditional generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9031–9041. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Nikolaos Malandrakis, Minmin Shen, Anuj Goyal, Shuyang Gao, Abhishek Sethi, and Angeliki Metallinou. 2019. Controlled text generation for data augmentation in intelligent artificial agents. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 90–98. Yuning Mao, Jingjing Tian, Jiawei Han, and Xiang Ren. 2019. Hierarchical text classification with reinforced label assignment. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 445–455. Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han. 2022. Generating training data with language models: Towards zero-shot language understanding. *arXiv* preprint arXiv:2202.04538. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Kervy Rivas Rojas, Gina Bustamante, Arturo Oncevay, and Marco Antonio Sobrevilla Cabezudo. 2020. Efficient strategies for hierarchical text classification: External knowledge and auxiliary tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2252–2257. Andy Rosenbaum, Saleh Soltan, Wael Hamza, Yannick Versley, and Markus Boese. 2022. LINGUIST: Language model instruction tuning to generate annotated utterances for intent classification and slot tagging. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 218–241, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Evan Sandhaus. 2008. The new york times annotated corpus ldc2008t19. Timo Schick and Hinrich Schütze. 2021. Generating datasets with pretrained language models. 
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6943– 6951. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96. Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In *International Conference on Machine Learning*, pages 4596–4604. PMLR. Kazuya Shimura, Jiyi Li, and Fumiyo Fukumoto. 2018. Hft-cnn: Learning hierarchical category structure for multi-label short text categorization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 811–816. Tu Vu, Minh-Thang Luong, Quoc Le, Grady Simon, and Mohit Iyyer. 2021. Strata: Self-training with task augmentation for better few-shot learning. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5715– 5731. Xuepeng Wang, Li Zhao, Bing Liu, Tao Chen, Feng Zhang, and Di Wang. 2021a. Concept-based label embedding via dynamic routing for hierarchical text classification. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5010–5019. Yufei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng, and Daxin Jiang. 2022a. Promda: Prompt-based data augmentation for low-resource nlu tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4242–4255. Yufei Wang, Jiayi Zheng, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, and Daxin Jiang. 2022b. Knowda: All-in-one knowledge mixture model for data augmentation in few-shot nlp. *arXiv preprint* arXiv:2206.10265. Zihan Wang, Peiyi Wang, Lianzhe Huang, Xin Sun, and Houfeng Wang. 2022c. Incorporating hierarchy into text encoder: a contrastive learning approach for hierarchical text classification. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7109–7119. Zihan Wang, Peiyi Wang, Tianyu Liu, Yunbo Cao, Zhifang Sui, and Houfeng Wang. 2022d. Hpt: Hierarchyaware prompt tuning for hierarchical text classification. *arXiv preprint arXiv:2204.13413*. Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. 2021b. Towards zero-label language learning. *arXiv* preprint arXiv:2109.09193. Jonatas Wehrmann, Ricardo Cerri, and Rodrigo Barros. 2018. Hierarchical multi-label classification networks. In *International conference on machine learning*, pages 5075–5084. PMLR. Jason Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382–6388. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 conference on empirical methods in natural language* processing: system demonstrations, pages 38–45. Jiawei Wu, Wenhan Xiong, and William Yang Wang. 2019. 
Learning to learn and predict: A meta-learning approach for multi-label classification. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4354–4364. Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong. 2022. Zerogen: Efficient zero-shot learning via dataset generation. *arXiv preprint arXiv:2202.07922*. Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and TieYan Liu. 2021. Do transformers really perform badly for graph representation? *Advances in Neural Information Processing Systems*, 34:28877–28888. Danqing Zhang, Tao Li, Haiyang Zhang, and Bing Yin. 2020. On data augmentation for extreme multi-label classification. *arXiv preprint arXiv:2009.10778*. Jie Zhou, Chunping Ma, Dingkun Long, Guangwei Xu, Ning Ding, Haoyu Zhang, Pengjun Xie, and Gongshen Liu. 2020. Hierarchy-aware global model for hierarchical text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1106–1117. Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1097–1100. | Noisy | Method | Macro-F1 | Micro-F1 | |---------|----------|--------------|--------------| | 0% | BERT | 79.87 ± 0.41 | 85.96 ± 0.23 | | 0% | Ours | 81.26 ± 0.23 | 86.77 ± 0.17 | | 10% | BERT | 78.92± 0.50 | 85.81 ± 0.44 | | 10% | Ours | 80.99± 0.49 | 86.64 ± 0.41 | ## A Human Evaluation For Filter Model To further confirm the performance of the model, we conduct a human evaluation of 200 sampled generated data and compare the consistency between human judgments and model predictions. According to the human annotation, 200 sampled generated data consist of 191 correct samples and 9 noisy samples. Our filter model keeps 187 out of 191 correct samples and only 2 noisy samples, which confirms the effectiveness of the model. ## B Noisy Label To examine the performance when there are noisy labels in the original data, we add 10% noisy data in the original training set of WOS and conduct experiments on BERT-base. The results show that the performance keeps almost unchanged with our proposed method but drops significantly without our method. We hypothesize the stability comes from the "Semantic-Level and Phrase-Level Label Information Enhanced Prompt," which can unify the format of model inputs to keep the generalization ability obtained in the pre-training stage and utilize both the information of label names and keywords to avoid disturbing from noisy label names as much as possible. ## C Case Analysis We show more data generated by our proposed methods in Table 9, 10 and 11. We can observe that the data generated by our proposed method is diverse at both semantic and grammatical levels and satisfies the meaning of the original labels. 
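As a companion to the noisy-label experiment in Appendix B, the following sketch shows one way such noise could be injected. The paper states that 10% noisy data is added to the WOS training set but does not specify the exact corruption procedure, so replacing the label paths of a random 10% subset is an assumption made here for illustration.

```python
# Sketch of injecting label noise for the Appendix B experiment (assumed procedure:
# replace the label paths of a random 10% of training examples with random label paths).
import random

def inject_label_noise(train_set, all_label_paths, noise_ratio=0.1, seed=0):
    rng = random.Random(seed)
    noisy = list(train_set)  # each item: (text, label_path)
    idx = rng.sample(range(len(noisy)), int(noise_ratio * len(noisy)))
    for i in idx:
        text, _ = noisy[i]
        noisy[i] = (text, rng.choice(all_label_paths))  # assign a random (wrong) label path
    return noisy

# Usage (names are hypothetical): noisy_train = inject_label_noise(wos_train, wos_label_paths)
```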
| Original Label: | CS, Cryptography | |-----------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Original Text: | a new method for obtaining random bijective s-boxes based on discrete chaotic map is presented. the proposed method uses a discrete chaotic map based on the composition of permutations. the obtained s-boxes have been tested on the number of criteria, such as bijection, nonlinearity, strict avalanche criterion, output bits independence criterion, equiprobable input/output xor distribution and maximum expected linear probability. the results of performance test show that the s-box presented in this paper has good cryptographic properties. the advantage of the proposed method is the possibility to achieve large key space, which makes it suitable for generation of s-boxes for larger values of n. also, because this method uses discrete chaotic map based on the composition of permutations which has finite space domain, there is no need for discretization of continuous values of chaotic map, so the process of generation of s-boxes is not affected by approximations of any kind. | | Augmented Text 1: | in this paper, we aim to propose a novel model to predict the performance of the random number generator (rn) in cryptographic protocols. the model consists of three steps. the first step is to produce a logistic map describing the input space; the second step is to introduce a random number matrix representing the input space. then, the generator is constructed by reusing the logistic map. 
the second step is to utilize the xor function of the input data to obtain the information about the generator. the results show that the proposed model can describe the sensitivity of the proposed model against some of the weaknesses of the rn model in cryptography and in cryptographic protocols. furthermore, this model makes the model very robust against some kinds of attacks. this work is the first step towards the analysis of the rn model's performance. the results of the proposed model are compared with those obtained from an analysis of the existing models. the proposed model is a promising alternative to the current schemes which seek for the reduction of the computational costs by employing a multivariate function. (c) 2016 elsevier b.v. all rights reserved. | | Augmented Text 2: | this paper describes a new algorithm that produces random and non-random matrices by varying the type of input data. the new technique is used to achieve a low complexity mapping of the input data, which makes it very suitable for implementation in real physical systems. the experimental results of the proposed approach are presented as follows: the first one uses a linear matrix function to represent the input data, while the second one uses a non-random matrix that captures the independent distribution of input data. the third one uses a linear matrix function to capture the sensitivity of the input data. in order to analyze the performance of the proposed method, an experimental setup is constructed by varying the type of input data and the input data. the new method can be applied to several fields, like cryptography, information security, etc., for the identification of the hidden hidden data. the new method is presented as a way to enhance the security level of a given setup. | | Augmented Text 3: | with the increasing popularity of rfid-based cryptography techniques, it is becoming more attractive to use them in applications such as biometrics and neurobiology. one promising approach is to exploit the emergence of quantum-inspired random number generators (qisr) with the aim of enhancing the security of the qisr network. in this paper, we investigate the performance of two qisr systems based on the qisr constructs. firstly, we build a model of the qisr by modifying the dqr, i.e. the first one, composed by six independent qisrs, to determine the qisr axiom. then, we establish a linear correlation between the derived qisrs and the parameters of the proposed model. based on these correlations, the model is transformed to the standard qisr model. using the proposed model, we show that the performances of these two schemes are quite comparable. according to our results, the two schemes can be regarded as promising alternative approaches to generate the qisr network. we conclude that using the proposed model makes the qisr system promising for achieving the goal of generating a small but effective network. | | Augmented Text 4: | the main aim of the paper is to evaluate the performance of the entropy-based fuzzy multi-objective matrix (muxm) based on multivariate linear equations in which the input is a linear map whose matrix is generated by a subspace. the paper makes use of an evolutionary algorithm for generating recursive matrix in the presence of input a/k. a model that adopts a set of feasible solutions is presented. to quantify the invariance of the matrix, a graphical model has been employed, where the matrix has a recursive function that makes the n-norm less linear. 
the resulting map can thus be used as a nonlinear objective function in which the matrix can be reduced to an equivalent hamiltonian matrix of the same size. an example is given, where the matrix is transformed into a permutative matrix, whose dimension is the sum of the squares of n - 1 and the square of the sum of the squares of the matrix. the paper makes the case that the matrix is equivalent to the squares of the matrix, where n is the number of variables that can be represented by the matrix. moreover, the obtained entropy-based fuzzy multi-objective matrix is practically applicable for obtaining the random matrix of the matrix. a sensitivity analysis on the parameters is also presented, and the resulting results are in good agreement with those of the existing experimental results of similar models. | | Augmented Text 5: | this paper proposes a model for assessing the performance of an integrated energy-efficient routing (ier) protocol in bijective switching networks. the model is based on the use of the taylor series dynamic routing (tdr) with fixed points and distributed compensation mechanisms. the nonlinearity of the tdr protocol makes it suitable for high-power systems. furthermore, it has a very small linear number of operational points, which makes it suitable for small-scale operation such as power systems with a small number of participants. as an application example, we present a philipino implementation of the ieee 14-bus system. the obtained results show that a small number of participants can be identified as successful in the proposed method and the optimal routing parameters can be determined according to the availability of a good routing network. the use of a forward-backward shift from the tdr protocol to the forward routing makes the proposed method more effective for small-scale networks. the benefits of the proposed method are as follows: first, it makes it possible to remove the dependency on the router set-point; second, it reduces the number of participants, which makes it suitable for microcontrollers. lastly, it makes it possible to obtain a closed-loop control scheme which makes it possible to select and control the number of agents. the presented model is applied to the ieee 14-bus system to investigate the ieee 14-bus system's performance and the effectiveness of its implementation in relation to the conventional solutions. compared with the conventional solution, the proposed method shows an average reduction in energy consumption by 84% and improvement in the energy efficiency by 36% and 44%. the proposed approach is a feasible and promising alternative for the energy-efficient routing of networks with low-power systems. (c) 2017 elsevier b.v. all rights reserved. | | Table 9: Case study of the samples generated by our proposed methods. 
| | | Original Label: | Medical, Ankylosing Spondylitis | |------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Original Text: | ankylosing spondylitis (as) is a chronic systemic inflammatory disease. via autoimmune mediators, as can damage the auditory system similar to other systems. otoacoustic emission studies in as patients showed that the damage that causes hearing loss was in the outer hair cells. the medial olivocochlear (moc) reflex is used to evaluate the moc efferent system (moes), which includes the outer hair cells. the aim of this study was to evaluate the presence of subclinical damage in the inner ear with the aid of the moc reflex test in as patients with no hearing complaints. thirty-four patients with as and a control group of 30 healthy volunteers with similar demographic characteristics were evaluated in the study. otoacoustic emission responses, moc reflex results, and frequency-specific and total suppression findings were compared between the groups. the relationship between clinical and laboratory findings for the as patients, and the moc reflex data were also investigated. reduced moc reflex response (p = 0.04) and suppression (p = 0.019) were detected in as patients. when the clinical and laboratory findings for the as patients and the moc reflex test results were compared, a significant correlation was found only between the moc reflex and the erythrocyte sedimentation rate. the results showed that as can damage the inner ear, especially the moes, and can reduce the moc reflex response without clinical hearing loss. 
| | Augmented Text 1: | introduction: the incidence of spondyloarthritis (spa) has been increasing in recent years, and the disease activity is aggravated in patients with spa. this study was conducted to explore the influence of the inflammatory markers il-10 and il-12 on spa patients' clinical and radiological findings, and to explore the underlying molecular mechanisms of spa. methods: sixty-seven patients with spa (39 males and 38 females) were included in the study. all patients underwent a clinical and laboratory evaluation. they were clinically evaluated based on ph, the study included the evaluation of the ph, the assessment of the c-reactive protein (crp), and the arthrocyte sedimentation rate (esr). the results were compared to the international classification of diseases, 9th revision (icd-9) scores. results: the results showed that the frequency of esr was higher in the patients with spa than in the control group (p 0.001), and the frequency was higher in the spa group than in the control group (p 0.001). the esr score of the spa group was higher than that of the control group (p 0.001). the prp, crp, and esr levels of the patients with spa showed no significant differences between the groups. compared to the control group, the crp and esr values of the patients with spa were higher than that in the control group (p 0.001, respectively). we also performed a comparison of the crp and crp scores of the patients with spa. conclusions: the study showed that the presence of il-10 and il-12 levels increases the risk for spa. | | Augmented Text 2: | introduction: the study was performed to investigate the correlation between the inflammatory markers scdc50 and the rheological parameters of ankylosing spondylitis (as) patients. materials and methods: we retrospectively included 70 as patients and 70 healthy controls, matched according to the egyptian criteria. all patients had at least one of the evaluated laboratory findings: 1) erythrocyte sedimentation rate (esr) in the right hemisphere, 2) somatosensory pathway (lps in the right hemisphere, 3) sa in patients with peripheral neuropathy, 4) mri signal in the left hemisphere (lmi), 5) ankylosing spondylitis (ssa) patients, 6) c-reactive protein (crp) in the right hemisphere (rho), and 7) molecule in the left hemisphere. the patients were divided into four groups according to the presence or absence of the clinical signs of as. all the patients underwent evaluations for the presence of ankylosing spondylitis based on the scdc50, lps, crp, ankylosing spondylitis (spa), or the absence of any clinical signs of the disease. results: the mean values of the scdc50 and the rheological parameters in the left hemisphere were not different according to the clinical signs. the rheological parameters of the left hemisphere were not significantly different according to the presence of the clinical signs of as patients. however, the lps and the crp parameters of the right hemisphere were statistically different according to the presence or absence of the clinical signs of as. compared with the lps group, the lps group showed an increase in the scdc50 values and a decrease in the crp values of all the patients with as compared with the control group. 
the levels of cytochrome p450 2a1 (cyp2a1), alpha-smooth muscle actin (alpha-sma), and the rheological parameters of the egyptian hemisphere were found to be higher in the as | | Augmented Text 3: | objectives: the objective of this study was to evaluate the impact of arthroscopic hearing loss on patients with inflammatory rheumatic diseases (ird). methods: the study comprised 2 groups of patients who were treated with hy-procedural therapy of ird, including those with ankylosing spondylitis (as), diffuse ird (dii), and focal ird (fii). patients were clinically assessed with a modified chi-square test and paired t-test. analysis of a subset of the data set (n = 636) was performed to analyse the frequency peaks of acoustic signals in the resonant frequencies (mean +/- sd: 2.85) in the acoustic group and from the dii-positive group (n = 634). results: the microscopic examination of the patients with spondylitis showed that otoacoustic signals in the vestibular system are decreased, whereas acoustic signal is increased. the percentages of patients with dii and fii patients were mainly found in the otoacoustic group (mean +/- sd: 4.8%). the percentages of patients with dii and fii patients were not significantly different from controls (mean +/- sd: 0.35 +/- 0.08, p = 0.023). after controlling for the influence of the sympathetic modulation, the percentage of patients with dii were not significantly different (mean +/- sd: 1.4%; p = 0.032). the percentages of patients with otoacoustic modulation tended to be higher in patients with dii than in controls (mean +/- sd: 1.41%; p = 0.002). conclusions: the study did not show an increased proportion of patients with ird who had hearing loss associated with the acoustic modulation. this suggests that the presence of inflammatory mediators associated with the pain associated with the hearing loss, is not significantly different from the effect of the disease itself. | | Augmented Text 4: | background: the aim of this study was to investigate the differences in the etiology of otoacoustic dyspnoea (otoacoustic dyspnoea) between patients with active ankylosing spondylitis (as) and healthy controls (hcs). methods: in this prospective, observational study, 58 as patients and 59 hcs were prospectively recruited. the medical records of all participants were analysed for the presence of the presence of the following: bmi, total qtl, eosinophil count, mrna and protein levels; and the presence of otoacoustic echogenicity. results: the median age of the patients was 35.9 years (range: 14-62 years). most patients (81.5%) were male, with mean (sd) age 23.8 (5.2) years. the mean etiology of as was found to be naive (66.6%), with mild (28.3%), moderate (22.4%) and severe (26.8%) clinical manifestations. the most common clinical manifestation was etiology of as (39.3%). all the patients showed recurrent etiologies of as (88.4%). otoacoustic echogenicity was found in 32 (34.7%) of the patients, being not observed in 37 (28.9%) of them. the moc showed a significant decrease in frequency of the erythrocyte sedimentation rate (esr) values in as patients. conclusions: the results of this study show that as patients and hcs have similar clinical phenotypes with high frequency of the etiology of as. our study can provide baseline data for the investigation of the role of the etiology of as and of its possible mechanisms. | | Augmented Text 5: | background: spondyloarthritis (sa) is a chronic inflammatory rheumatic disease of the axial skeleton. 
fibroblasts (fibroblasts) of the neck are common to the disease. in the past two decades, fibroblasts of patients with sa have been widely studied. to date, there has been little insight into the underlying molecular pathophysiology of sa in the west. objective: we aimed to investigate the role of fibroblasts in the pathogenesis of sa in a cohort of patients with sa. methods: in this retrospective observational study, 30 consecutive patients with sa and 21 healthy individuals were included. clinical and laboratory data were collected; fibroblasts recombinant fibroblasts were analysed by a sybr amplification kit. results: in a subset of 14 patients, fibroblasts were identified as the primary cell type in the most abundant cells. fibroblasts from the most abundant cell type in the patients were found to be more abundant. additionally, the distribution of fibroblasts was altered in many patients. fibroblasts from the most abundant cell type (m2-m3) were identified as the primary cell type of fibroblasts. the most abundant cells of the fibroblasts were oligodendrocyte glycoprotein (og) and cd29. the expression of fibroblast neoplastic cell marker, erythrocyte sedimentation rate (esr), and the expression of the transcription factor cd44 induced by fgf-2 was detected in two patients. sa patients exhibited an increase in the number of fgf-2 mrna transcripts with the highest mrna levels after fgf-2 treatment. the frequency of sa cell differentiation was not different between the two groups. the frequency of cd29 expression was higher in the fgf-2 group than that in the other two groups. fgf-2 was higher in the og group and the frequency of cd34 expression was higher than that of the other patients. conclusion: fgf-2 is a marker for sa and was found to be higher in patients with sa than in healthy | | Table 10: Case study of the samples generated by our proposed methods. 
**Original Label:** CS, Cryptography

**Original Text:** with ever increasing load and insufficient growth of generation and transmission capacities, the operating constraints of modern integrated power system are increasing. it has led to ever increasing dynamic problems of low frequency oscillations in the system that needs to be detected and damped out quickly and efficiently. in electrical power networks small oscillations appear from time to time. these oscillations concern the quantities determining the equilibrium point of the system, and following which, system stability and system behaviors are influenced. the objective of our study is to check the static stability of the high voltage power to small perturbations of electrical network. in this context, after linearizing the system, power system stabilizer (pss) has so far been extensively utilized to mitigate these problems. this paper presents a novel and efficient approach for the optimal tuning of power system stabilizer parameters (pss) using a genetic algorithm (ga) with the eigenvalue-based objective function. the proposed approach is implemented and examined in a system with a single machine connected to an infinite bus via a transmission line. the results of this technique have been verified by eigenvalue analysis and time-domain simulations. the obtained results were evaluated and compared with ones obtained by ordinary gas. eigenvalue analysis and nonlinear system simulations demonstrated the effectiveness of the proposed approach in damping the electromechanical oscillations and enhancing the system dynamic stability.

**Augmented Text 1:** this paper presents a method of determining the electromechanical parameters of the piezoelectric elastomer system (pees) of an electrical network driven by a power source, i.e., a main voltage source. the approach considers the elastomer system as a nonlinear system composed of several electrical components, the mechanical components of which are represented by mechanical parts, and the electrical components. a typical example is taken from the literature wherein the electromechanical components are modeled as a constant eigenvalue of the power supply voltage. this study was conducted on a simulated example of an existing commercial pee system using a voltage source, thereby determining parameters of its mechanical component. the obtained results show that the use of the pees can reduce the nonlinear problems encountered in determining the electrical network's performance even when the electrical network is sufficiently weak. the present method is also very effective in identifying the most efficient way to improve the efficiency of the system. a case study was performed on a 1.2 mhz pees system using a power system controller. the obtained results are in good agreement with the simulation results. (c) 2014 elsevier ltd. all rights reserved.

**Augmented Text 2:** a problem we encountered in the development of an effective electrical network for solving problems such as the one experienced by a client for the repair of a bus damaged by an electrical short circuit is the one that is insufficient to supply the electricity for the generating station. the problem is formulated as a nonlinear programming problem with three steps: (a) selecting a suitable analytical model to represent the system, (b) determining the parameters of the model, and (c) optimizing the model through the use of an efficient regression technique. we have used the mathematical language matlab/simulink in the electrical network and implemented it into a matlab/simulink environment. we have evaluated the performance of the proposed network model through simulations in terms of the dynamic performance of the proposed model. the obtained results clearly indicate that the proposed model is a promising tool for identifying and assessing the influence of the system parameters on the performance of the electrical network.

**Augmented Text 3:** in recent years, many scientific disciplines have focused on developing energy-efficient strategies as a means to control costs, while avoiding environmental constraints. one such example is the field of electromechanical transient response of the electrical network, where a voltage source is connected to a damping device. this is the case of the system experiencing an excitation of an electrical generator and its oscillating rotor. these systems are subject to an enormous electrical load and this has been evaluated by various studies, including simulations, in order to understand their influence on system dynamics, as well as to understand their relationship with the electrical behaviour of these systems. this work introduces a network-based methodology to investigate the influence of the system parameters on system dynamics, with the goal of determining the ultimate value of the systems that can be optimal for control. simulations of a typical manufacturing facility are conducted in order to characterize the system. the results obtained show that the system can be improved by up to 45% in reducing the overall system power consumption by up to 50%. in addition, a comprehensive analysis of the existing network model is performed to show the relevance of these parameters in determining the optimal solution. (c) 2016 published by elsevier ltd.

**Augmented Text 4:** determining the optimal performance of a power system can be done by reducing the quantities and constraints that the system has to consider in its design. in this paper, a nonlinear electrical network system with multiple energy harvesters and multi-generation sources is evaluated in order to determine the most cost-efficient way of powering a power system. the nonlinear electrical network system has to consider the effects of all the components that are involved in the process of power conversion: the source, the converter, the loads and the electrical network. the main contribution of this paper is to investigate the influence of the system's parameters and their interaction on the power system performance. a simplified model that combines both an electrical network and a fluid-structure-network model is presented. the model is built using the kalman filter, a well-known mathematical technique for non-linear systems. to estimate the optimum performance, a modified inverse-transform (idre) method using the first and second order shift operator is used. moreover, a constraint based on the nonlinear electrical network is used, called the zero-crossing operator, to be adopted in the problem. the parameters are estimated using a multi-step optimization algorithm based on the power system model. the results indicate that in order to find an optimal solution of the network, it is necessary to understand the influence of the parameters in the network and to be able to take into account the influences of the parameters. (c) 2016 elsevier b.v. all rights reserved.

**Augmented Text 5:** the article presents an approach to determine the most cost-efficient way of resolving the problem of energy management of high power grid systems such as, for example, the distribution network or even the entire electrical network. the main objective is to reach the equilibrium point between the nominal and actual value of the active components of the network, thus reaching the most cost-efficient ones. to reach the optimum solution, the paper first proposes a nonlinear recursive method based on the interaction of the kinematic and electromechanical parameters. it then moves on to use an iterative algorithm based on the recursive least squares technique to find the optimal solutions of the optimal problem. furthermore, it uses the ahysteresis model to evaluate the effectiveness of the proposed method. the effectiveness of the proposed method has been evaluated through simulations on a 125kw (ems) power grid connected to a distribution network for a nominal size of 5000mw. the results show that the proposed approach can reduce the energy consumption by up to 4%. (c) 2016 published by elsevier ltd.

Table 11: Case study of the samples generated by our proposed methods.

## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✓ A2. Did you discuss any potential risks of your work? 7 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified?
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
mirtaheri-etal-2023-history
History repeats: Overcoming catastrophic forgetting for event-centric temporal knowledge graph completion
https://aclanthology.org/2023.findings-acl.490
Temporal knowledge graph (TKG) completion models typically rely on having access to the entire graph during training. However, in real-world scenarios, TKG data is often received incrementally as events unfold, leading to a dynamic non-stationary data distribution over time. While one could incorporate fine-tuning to existing methods to allow them to adapt to evolving TKG data, this can lead to forgetting previously learned patterns. Alternatively, retraining the model with the entire updated TKG can mitigate forgetting but is computationally burdensome. To address these challenges, we propose a general continual training framework that is applicable to any TKG completion method, and leverages two key ideas: (i) a temporal regularization that encourages repurposing of less important model parameters for learning new knowledge, and (ii) a clustering-based experience replay that reinforces the past knowledge by selectively preserving only a small portion of the past data. Our experimental results on widely used event-centric TKG datasets demonstrate the effectiveness of our proposed continual training framework in adapting to new events while reducing catastrophic forgetting. Further, we perform ablation studies to show the effectiveness of each component of our proposed framework. Finally, we investigate the relation between the memory dedicated to experience replay and the benefit gained from our clustering-based sampling strategy.
# History Repeats: Overcoming Catastrophic Forgetting For Event-Centric Temporal Knowledge Graph Completion Mehrnoosh Mirtaheri Mohammad Rostami Aram Galstyan Information Sciences Institute University of Southern California mirtaheri@usc.edu {rostami, galstyan}@isi.edu ## Abstract Temporal knowledge graph (TKG) completion models typically rely on having access to the entire graph during training. However, in realworld scenarios, TKG data is often received incrementally as events unfold, leading to a dynamic non-stationary data distribution over time. While one could incorporate fine-tuning to existing methods to allow them to adapt to evolving TKG data, this can lead to forgetting previously learned patterns. Alternatively, retraining the model with the entire updated TKG can mitigate forgetting but is computationally burdensome. To address these challenges, we propose a general continual training framework that is applicable to any TKG completion method, and leverages two key ideas: (i) a temporal regularization that encourages repurposing of less important model parameters for learning new knowledge, and (ii) a clustering-based experience replay that reinforces the past knowledge by selectively preserving only a small portion of the past data. Our experimental results on widely used eventcentric TKG datasets demonstrate the effectiveness of our proposed continual training framework in adapting to new events while reducing catastrophic forgetting. Further, we perform ablation studies to show the effectiveness of each component of our proposed framework. Finally, we investigate the relation between the memory dedicated to experience replay and the benefit gained from our clustering-based sampling strategy. ## 1 Introduction Knowledge graphs (KGs) provide a powerful tool for studying the underlying structure of multirelational data in the real world (Liang et al., 2022). They present factual information in the form of triples, each consisting of a subject entity, a relation, and an object entity. Despite the development of advanced extraction techniques, knowledge graphs often suffer from incompleteness, which can lead to errors in downstream applications. As a result, the task of predicting missing facts in knowledge graphs, also known as knowledge graph completion, has become crucial. (Wang et al., 2022; Huang et al., 2022; Shen et al., 2022) KGs are commonly extracted from real-world data streams, such as newspaper texts that change and update over time, making them inherently dynamic. The stream of data that emerges every day may contain new entities, relations, or facts. As a result, facts in a knowledge graph are usually accompanied by time information. A fact in a semantic knowledge graph, such as Yago (Kasneci et al., 2009), may be associated with a time interval, indicating when it appeared and remained in the KG. For example, consider *(Obama, President, United States, 2009-2017)* in a semantic KG. A link between *Obama* and *United states* appears in the graph after 2009, and it exists until 2017. On the other hand, a fact in a Temporal event-centric knowledge graph (TKGs), such as ICEWS (Boschee et al., 2015), is associated with a single timestamp, indicating the exact time of the interaction between the subject and object entities. For example, in an event-centric TKG, (Obama, meet, Merkel) creates a link between Obama and *Merkel* several times within 2009 to 2017 since the temporal links only show the time when an event has occurred. 
Therefore, eventcentric TKGs exhibit a high degree of dynamism and non-stationarity in contrast to semantic KGs. To effectively capture the temporal dependencies within entities and relations in TKGs, as well as new patterns that may emerge with new data streams, it is necessary to develop models specifically designed for TKG completion. A significant amount of research has been dedicated to developing evolving models (Messner et al., 2022; Mirtaheri et al., 2021; Jin et al., 2020; Garg et al., 2020) for TKG completion. These models typically assume evolving vector representations for ![1_image_0.png](1_image_0.png) entities or relations. These representations change depending on the timestep, and they can capture temporal dependencies between entities. However, these models often assume that the entire dataset is available during training. They do not provide a systematic method for updating model parameters when new data is added. One potential solution is to retrain the model with new data. However, this approach can be resource-intensive and impractical for large-scale knowledge graphs. An alternative approach is to fine-tune the model with new data, which is more time and memory efficient. However, this approach has been shown to be susceptible to overfitting to the new data, resulting in the model forgetting previously learned knowledge, a phenomenon known as catastrophic forgetting (Fig. 1). A limited number of studies (Song and Park, 2018; Daruna et al., 2021; Wu et al., 2021) have addressed this problem for semantic knowledge graphs using continual learning approaches, with TIE (Wu et al., 2021) being the most closely related work to current research. Nevertheless, the development of efficient and effective methods for updating models with new data remains a significant challenge in event-centric Temporal Knowledge Graphs. We propose a framework for incrementally training a TKG completion model that consolidates the previously learned knowledge while capturing new patterns in the data. Our incremental learning framework employs regularization and experience replay to alleviate catastrophic forgetting. We propose a temporal regularization method based on elastic weight consolidation (Kirkpatrick et al., 2017). By estimating an importance weight for every model parameter at each timestep, the regularization term in the objective function 'freezes' the more important parameters from past timesteps, encouraging the use of less important parameters for learning the current task. Additionally, an exponentially decaying hyperparameter in the objective function further emphasizes the importance of the most recent tasks over older ones. Our selective experience replay method uses clustering over the representation of the data points to first capture the underlying structure of the data. The points closest to the clusters' centroid are selected for experience replay. We show that the temporal regularization combined with clustering-based experience replay outperforms all the baselines in alleviating catastrophic forgetting. Our main contributions include: 1. A novel framework for incremental training and evaluation of event-centric TKGs, which addresses the challenges of efficiently updating models with new data. 2. A clustering-based experience replay method, which we show to be more effective than uniform sample selection. We also demonstrate that careful data selection for experience replay is crucial when memory is limited. 3. 
An augmentation of the training loss with a consolidation loss, specifically designed for TKG completion, which helps mitigate forgetting effects. We show that assigning a decayed importance to the older tasks reduces forgetting effects. 4. A thorough evaluation of the proposed methods through extensive quantitative experiments to demonstrate the effectiveness of our full training strategies compared to baselines. ## 2 Related Work Our work is related to TKG completion, continual learning methods, and recent developments of continual learning for knowledge graphs. ## 2.1 Temporal Knowledge Graph Reasoning TKG completion methods can be broadly categorized into two main categories based on their approach for encoding time information: translationbased methods and evolving methods. Translation-based methods, such as those proposed by (Leblay and Chekol, 2018; García-Durán et al., 2018; Dasgupta et al., 2018; Wang and Li, 2019; Jain et al., 2020), and (Sadeghian et al., 2021), utilize a lower-dimensional space, such as a vector (Leblay and Chekol, 2018; Jain et al., 2020), or a hyperplane (Dasgupta et al., 2018; Wang and Li, 2019), for event timestamps and define a function to map an initial embedding to a time-aware embedding. On the other hand, evolving models assume a dynamic representation for entities or relations that is updated over time. These dynamics can be captured by shallow encoders (Xu et al., 2019; Mirtaheri et al., 2019; Han et al., 2020a) or sequential neural networks (Trivedi et al., 2017; Jin et al., 2020; Wu et al., 2020; Zhu et al., 2020; Han et al., 2020b,c; Li et al., 2021). For example,(Xu et al., 2019) model entities and relations as time series, decomposing them into three components using adaptive time series decomposition. DyERNIE (Han et al., 2020a) propose a non-Euclidean embedding approach in the hyperbolic space. (Trivedi et al., 2017) represent events as point processes, while (Jin et al., 2020) utilizes a recurrent architecture to aggregate the entity neighborhood from past timestamps. ## 2.2 Continual Learning Continual learning (CL) or lifelong learning is a learning setting where a set of tasks are learned in a sequence. The major challenge in CL is overcoming catastrophic forgetting, where the model's performance on past learned tasks is degraded as it is updated to learn new tasks in the sequence. Experience replay (Li and Hoiem, 2018) is a major approach to mitigate forgetting, where representative samples of past tasks are replayed when updating a model to retain past learned knowledge. To maintain a memory buffer storage with a fixed size, representative samples must be selected and discarded. (Schaul et al., 2016) propose selecting samples that led to the maximum effect on the loss function when learning past tasks. To relax the need for a memory buffer, generative models can be used to learn generating pseudosamples. (Shin et al., 2017) use adversarial learning for this purpose. An alternative approach is to use data generation using autoencoders(Rostami et al., 2020; Rostami and Galstyan, 2023a). Weight consolidation is another important approach to mitigate catastrophic forgetting (Zenke et al., 2017; Kirkpatrick et al., 2017). The idea is to identify important weights that play an important role in encoding the learned knowledge about past tasks and consolidate them when the model is updated to learn new tasks. As a result, new tasks are learned using primarily the free learnable weights. 
In our framework, we combine both approaches to achieve optimal performance. ## 2.3 Continual Learning For Graphs CL in the context of graph structures remains an under-explored area, with a limited number of recent studies addressing the challenge of dynamic heterogeneous networks (Tang and Matteson, 2021; Wang et al., 2020; Zhou and Cao, 2021) and semantic knowledge graphs (Song and Park, 2018; Daruna et al., 2021; Wu et al., 2021). In particular, (Song and Park, 2018; Daruna et al., 2021) propose methods that integrate class incremental learning models with static translation-based approaches, such as TransE (Bordes et al., 2013), for addressing the problem of continual KG embeddings. Additionally, TIE (Wu et al., 2021) develops a framework that predominantly focuses on semantic KGs, and generates yearly graph snapshots by converting a fact with a time interval into multiple timestamped facts. This process can cause a loss of more detailed temporal information, such as the month and date, and results in a substantial overlap of over 95% between consecutive snapshots. TIE's frequency-based experience replay mechanism operates by sampling a fixed set of data points from a fixed-length window of past graph snapshots; for instance, at a given time t, it has access to the snapshots from t−1 to t−5. This contrasts with the standard continual learning practice, which involves sampling data points from the current dataset and storing them in a continuously updated, fixed-size memory buffer. When compared to Elastic Weight Consolidation (EWC), the L2 regularizer used by TIE proves to be more rigid when learning new tasks over time. Furthermore, their method's evaluation is confined to shallow KG completion models like Diachronic Embeddings (Goel et al., 2020) and HyTE (Dasgupta et al., 2018). ## 3 Problem Definition This section presents the formal definition of continual temporal knowledge graph completion. ## 3.1 Temporal Knowledge Graph Reasoning A TKG is a collection of events represented as a set of quadruples G = {(s, r, o, τ )|s, o ∈ E, r *∈ R}*, where E and R are the set of entities and relations, and τ is the timestamp of the event occurrence. These events represent one-time interactions between entities at a specific time. The task of temporal knowledge graph completion is to predict whether there will be an interaction between two entities at a given time. This can be done by either predicting the object entity, given the subject and relation at a certain time, or by predicting the relation between entities, given the subject and object at a certain time. In this case, we will focus on the first method which can be formally defined as a ranking problem. The model will assign higher likelihood to valid entities and rank them higher than the rest of the candidate entities. ## 3.2 Continual Learning Framework For Tempporal Knolwedge Graphs A Temporal knowledge graph G can be represented as a stream of graph snapshots G1, G2*, . . . , G*T arriving over time, where Gt = {(s, r, o, τ )|s, o ∈ E, r ∈ R, τ ∈ [τt, τt+1)} is a set of events occurred within time interval [τt, τt+1). The continual training of a TKG completion method involves updating the parameters of the model M as new graph snapshots, consisting of a set of events, become available over time. This process aims to consolidate previously acquired information while incorporating new patterns. Formally, we define a set of tasks ⟨T1*, . . . 
, TT⟩, where each task Tt = ($\mathcal{D}^{train}_t$, $\mathcal{D}^{test}_t$, $\mathcal{D}^{val}_t$) is comprised of disjoint subsets of the Gt events, created through random splitting. A continually trained model M can then be shown as a stream of models M = ⟨M1, . . . , MT⟩, with corresponding parameter sets θ = ⟨θ1, θ2, . . . , θT⟩, trained incrementally as a stream of tasks arrives, T = ⟨T1, T2, . . . , TT⟩.

## 3.3 Base Model

In this paper, we utilize RE-NET (Jin et al., 2020), a state-of-the-art TKG completion method, as the base model. RE-NET is a recurrent architecture for predicting future interactions, which models the probability of an event occurrence based on temporal sequences of past knowledge graphs. The model incorporates a recurrent event encoder to process past events and a neighborhood aggregator to model connections at the same time stamp. Although RE-NET was initially developed for predicting future events (extrapolation), it can also be used to predict missing links in the current state of the graph (interpolation), which is the focus of this study. The model parameterizes the probability of an event p(oτ | s, r) as follows:

$$p(o_{\tau}\mid s,r)\propto\exp\left(\left[e_{s}:e_{r}:h_{\tau-1}(s,r)\right]^{\top}w_{o_{\tau}}\right)\qquad(1)$$

where $e_s, e_r \in \mathbb{R}^d$ are learnable embedding vectors for the subject entity s and relation r, and $h_{\tau-1}(s,r) \in \mathbb{R}^d$ represents the local dynamics within a time window (τ − ℓ, τ − 1) for (s, r). By combining both the static and dynamic representations, RE-NET effectively captures the semantics of (s, r) up to time stamp (τ − 1). The model then calculates the probability of different object entities oτ by passing the encoding through a multi-layer perceptron (MLP) decoder, which is defined as a linear softmax classifier parameterized by $w_{o_\tau}$.
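To make the scoring step just described (Eq. 1) concrete, the snippet below is a minimal PyTorch-style sketch of such a decoder; it is an illustrative approximation rather than RE-NET's released implementation, and the module name, constructor arguments, and the assumption that the history summary $h_{\tau-1}(s,r)$ is supplied by a separate recurrent event encoder are ours.

```python
import torch
import torch.nn as nn

class ScoreDecoder(nn.Module):
    """Illustrative decoder in the spirit of Eq. (1): a linear softmax classifier
    over the concatenation [e_s : e_r : h_{tau-1}(s, r)]."""

    def __init__(self, num_entities: int, num_relations: int, dim: int):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, dim)       # static entity embeddings e_s
        self.rel_emb = nn.Embedding(num_relations, dim)      # static relation embeddings e_r
        self.classifier = nn.Linear(3 * dim, num_entities)   # one weight vector w_o per object

    def forward(self, s, r, h_hist):
        # h_hist: [batch, dim] summary of the (s, r) neighborhood up to time tau-1,
        # assumed to come from a recurrent event encoder that is not shown here
        x = torch.cat([self.ent_emb(s), self.rel_emb(r), h_hist], dim=-1)
        return torch.log_softmax(self.classifier(x), dim=-1)  # log p(o_tau | s, r)
```

The continual training framework below only changes how a model like this is updated over time; the architecture itself is left untouched.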
## 4 Methodology

Our proposed framework is a training approach that can be applied to any TKG completion model. It enables the incremental updating of model parameters with new data while addressing the issues of catastrophic forgetting associated with fine-tuning. To achieve this, we utilize experience replay and regularization techniques, methodologies commonly employed in image processing and reinforcement learning to mitigate forgetting. Additionally, we introduce a novel experience replay approach that employs clustering to identify and select data points that best capture the underlying structure of the data. Furthermore, we adopt the regularization method of EWC, as proposed in (Kirkpatrick et al., 2017), which incorporates a decay parameter that assigns higher priority to more recent tasks. Our results demonstrate that incorporating a decay parameter into the EWC loss and prioritizing more recent tasks leads to improved performance.

## 4.1 Experience Replay

In the field of neuroscience, the hippocampal replay, or the re-activation of specific trajectories, is a crucial mechanism for various neurological functions, including memory consolidation. Motivated by this concept, the use of experience replay in Continual Learning (CL) for deep neural networks aims to consolidate previously learned knowledge when a new task is encountered by replaying previous experiences, i.e., training the model on a limited subset of previous data points. However, a challenge with experience replay, also known as memory-based methods, is the requirement for a large memory size to fully consolidate previous tasks (Rostami and Galstyan, 2023b). Thus, careful selection of data points that effectively represent the distribution of previous data becomes necessary. In this work, we propose the use of experience replay for continual TKG completion. Specifically, we maintain a memory buffer B which, at time t, contains a subset of events sampled from $\mathcal{D}^{train}_1, \mathcal{D}^{train}_2, \ldots, \mathcal{D}^{train}_{t-1}$. When Task Tt is presented to the model, it is trained on the data points in $\mathcal{D}^{train}_t \cup B$. After training, a random subset of events in the memory buffer, of size |B|/t, is discarded and replaced with a new subset of events sampled from $\mathcal{D}^{train}_t$. In this way, at time t, where t tasks have been observed, equal portions of memory with size |B|/t are dedicated to each task. A naive approach for selecting a subset of events from a task's training set at time t would be to uniformly sample |B|/t events from $\mathcal{D}^{train}_t$. However, we propose a clustering-based sampling method that offers a more careful selection algorithm, which is detailed in the following section.

## 4.1.1 Clustering-Based Sampling

When dealing with complex data, it is likely that various subspaces exist within the data that must be represented in the memory buffer. To address this issue, clustering methods are employed to diversify the memory buffer by grouping data points into distinct clusters. The centroids of these clusters can be utilized as instances themselves or as representatives of parts of the memory buffer (Shi et al., 2018; Hayes et al., 2019; Korycki and Krawczyk, 2021). In this study, clustering is applied to the representation of events in the training set in order to uncover the underlying structure of the data and select data points that effectively cover the data distribution. The Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) algorithm (McInnes et al., 2017) is utilized for this purpose. HDBSCAN is a hierarchical, non-parametric, density-based clustering method that groups points that are closely packed together while identifying points in low-density regions as outliers. The use of HDBSCAN over other clustering methods is advantageous due to its minimal requirements for hyperparameters. Many clustering algorithms necessitate and are sensitive to the number of clusters as a hyperparameter. However, HDBSCAN can determine the appropriate number of clusters by identifying and merging dense space regions. Additionally, many clustering algorithms are limited to finding only spherical clusters. HDBSCAN, on the other hand, is capable of uncovering more complex underlying structures in the data.

Algorithm 1: Cluster Experience Replay
Input: Ct = C^1_t, C^2_t, . . . , C^m_t (clusters generated with HDBSCAN from $\mathcal{D}^{train}_t$, sorted in decreasing order of their size); $\mathcal{D}^{train}_t$ (training set at time t); s (sample size); FindExemplars(C^i, k) (takes a cluster and returns the k points closest to the cluster exemplars).

    def SelectPoints(Ct, Dtrain_t, s):
        Q ← ∅
        for i ← 1 to m do
            r ← ⌈ (|C^i| / Σ_j |C^j|) × s ⌉
            X ← FindExemplars(C^i, r)
            Q ← Q ∪ {(X, r)}
        S ← ∅
        while Q ≠ ∅ and |S| < s do
            (X, r) ← Q.pop()
            S ← S ∪ {X[0]}
            Q ← Q ∪ {(X[1:], r − 1)}
        return S
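A possible Python rendering of Algorithm 1 is sketched below, assuming the `hdbscan` package and the event vectors described in the next paragraph; the per-cluster quota, the ranking by distance to the nearest exemplar, and the round-robin pass mirror the pseudocode, but the function name and the details are our reading of it, not the authors' released code.

```python
import numpy as np
import hdbscan  # assumed dependency: pip install hdbscan

def select_points(X, sample_size, min_cluster_size=5):
    """Return `sample_size` row indices of X for the replay buffer, preferring
    points that lie closest to each HDBSCAN cluster's exemplars (Algorithm 1)."""
    clusterer = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit(X)
    labels = clusterer.labels_                                # -1 marks noise points
    cluster_ids = [c for c in np.unique(labels) if c != -1]
    cluster_ids.sort(key=lambda c: -(labels == c).sum())      # larger clusters first
    total = sum((labels == c).sum() for c in cluster_ids)

    per_cluster = []                                          # candidate lists, best first
    for c in cluster_ids:
        idx = np.where(labels == c)[0]
        quota = int(np.ceil(len(idx) / total * sample_size))
        ex = clusterer.exemplars_[c]                          # exemplar points of cluster c
        # distance of every member to its nearest exemplar
        d = np.linalg.norm(X[idx][:, None, :] - ex[None, :, :], axis=-1).min(axis=1)
        per_cluster.append(list(idx[np.argsort(d)][:quota]))

    selected = []                                             # round-robin over clusters
    while per_cluster and len(selected) < sample_size:
        for cand in list(per_cluster):
            if cand:
                selected.append(cand.pop(0))
            else:
                per_cluster.remove(cand)
            if len(selected) >= sample_size:
                break
    return np.array(selected)
```

If `exemplars_` is unavailable under a given HDBSCAN configuration, the cluster mean can stand in for the exemplar set with the same overall behaviour.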
As a result of its ability to identify clusters with off-shaped structures, HDBSCAN generates a set of exemplar points for each cluster rather than a single point as the cluster centroid. We represent each event (s, r, o, τ) ∈ $\mathcal{D}^{train}_t$ as a vector [es : eo] ∈ $\mathbb{R}^{2d}$, where es and eo represent the d-dimensional embeddings of s and o at time t, respectively. The notation [:] denotes concatenation, creating a $|\mathcal{D}^{train}_t| \times 2d$ matrix that represents the training data at time t. In our initial experiments, we found that data representations such as [es : er], where er is the relation embedding, did not significantly affect the results. Moreover, representing the data as [es : er : eo] led to a bias towards the relation representation, causing data points with identical relation types to cluster together. We obtained clusters C^1, C^2, . . . , C^m by running HDBSCAN. Our algorithm then selects |B|/t events from these clusters by prioritizing the data points closest to the exemplars and giving precedence to larger clusters. If |B|/t < m, data points are chosen only from the first |B|/t clusters. Conversely, if |B|/t > m, the number of points selected from each cluster depends on the cluster size, with a minimum of one data point chosen from each cluster. The specifics of this procedure are detailed further in Algorithm 1.

## 4.2 Regularization

Regularization-based approaches for CL incorporate a regularization term in the objective function to discourage changes in the weights that are crucial for previous tasks, while encouraging the utilization of other weights. One such approach, Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), estimates the importance of weights using the Fisher Information Matrix. Given a model with parameter set θ previously trained on task A, and a new task B, EWC optimizes the following loss function:

$${\mathcal{L}}(\theta)={\mathcal{L}}_{B}(\theta)+\sum_{i}{\frac{\lambda}{2}}F_{i}(\theta_{i}-\theta_{A,i}^{*})^{2}\qquad(2)$$

where $\mathcal{L}_B$ is the loss over task B only and λ determines the importance of the previous task compared to task B. We extend this loss function to continual TKG completion. Given a stream of tasks ⟨T1, T2, . . . , Tt⟩ and incrementally obtained parameter sets ⟨θ1, θ2, . . . , θt⟩, we define the temporal EWC loss function as follows:

$${\mathcal{L}}(\theta_{t})={\mathcal{L}}_{{\mathcal{T}}_{t}}(\theta_{t})+\sum_{\tau=1}^{t-1}\sum_{i}{\frac{\lambda}{2}}F_{\tau}(\theta_{i}-\theta_{\tau,i}^{*})^{2}\qquad(3)$$

where $\mathcal{L}_{\mathcal{T}_t}$ is the model loss calculated only using $\mathcal{M}_t$ and $\mathcal{D}^{train}_t$, $F_\tau$ is the Fisher Information Matrix estimated for $\mathcal{M}_\tau$ and $\mathcal{T}_\tau$, and $\theta_{\tau,i}$ is the i-th parameter of $\mathcal{M}_\tau$.
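The consolidation term of Eq. (3) can be sketched in PyTorch as follows; the diagonal Fisher approximation (squared gradients averaged over a task's batches), the `past_tasks` bookkeeping, and the function names are illustrative assumptions rather than the paper's exact code, and setting α = 1 recovers Eq. (3) while α < 1 gives the decayed variant introduced next.

```python
import torch

def diagonal_fisher(model, batches, loss_fn):
    """Diagonal Fisher estimate F_tau for a task: average squared gradients of the
    task loss over its training batches (a common approximation)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for batch in batches:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(batches), 1) for n, f in fisher.items()}

def ewc_penalty(model, past_tasks, lam, alpha, t):
    """Sum over past tasks tau of (lambda_tau / 2) * F_tau * (theta - theta_tau*)^2,
    with lambda_tau = lam * alpha**(t - tau); alpha = 1 reduces to Eq. (3)."""
    device = next(model.parameters()).device
    penalty = torch.zeros((), device=device)
    for tau, (fisher, old_params) in past_tasks.items():
        weight = lam * (alpha ** (t - tau))
        for n, p in model.named_parameters():
            penalty = penalty + (weight / 2.0) * (fisher[n] * (p - old_params[n]) ** 2).sum()
    return penalty
```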
The λ parameter in Equation 3 assigns equal importance to all the tasks from previous time steps; however, in practice, and depending on the application, different tasks might have a different effect on the current task, which makes an adaptive λτ plausible:

$${\cal L}(\theta_{t})={\cal L}_{{\cal T}_{t}}(\theta_{t})+\sum_{\tau=1}^{t-1}\sum_{i}\frac{\lambda_{\tau}}{2}F_{\tau}(\theta_{i}-\theta_{\tau,i}^{*})^{2},\tag{4}$$

where $\lambda_{\tau}=\lambda\alpha^{t-\tau}$, $\lambda$ is the overall EWC loss importance, and α < 1 is the decay parameter.

## 4.3 Training And Loss Function

The final loss function of our framework, when trained with experience replay and EWC, can be summarized as follows:

$$\mathcal{L}(\theta_{t})=\mathcal{L}_{expr}(\theta_{t})+\lambda\mathcal{L}_{ewc}(\theta_{t}),\quad\mathcal{L}_{expr}(\theta_{t})=\mathcal{L}_{\mathcal{T}_{t}\cup\mathcal{B}}(\theta_{t}),\quad\mathcal{L}_{ewc}=\sum_{\tau=1}^{t-1}\sum_{i}\frac{\alpha^{t-\tau}}{2}F_{\tau}(\theta_{i}-\theta_{\tau,i}^{*})^{2}\tag{5}$$

| Dataset | #tasks | task period | split ratio | avg #quads (train/test) |
|----------|--------|-------------|-------------|-------------------------|
| ICEWS-M | 13 | 1 month | 50/25/25 | 27k/13k |
| ICEWS-2M | 13 | 2 month | 50/25/25 | 50k/25k |
| GDELT | 21 | 3 days | 60/20/20 | 38k/13k |

Table 1: Dataset statistics

The replay loss $\mathcal{L}_{expr}$ is the model loss computed over both the current task's training set $\mathcal{D}^{train}_t$ and the data points in the memory buffer B. For training in batches, the number of data points selected from $\mathcal{D}^{train}_t$ and B is in proportion to their size.

## 5 Experiments

In this section, we explain the evaluation protocol for quantitatively measuring the model's catastrophic forgetting. From known TKG datasets, we create two benchmarks for TKG continual learning. We evaluate our proposed training method on these benchmarks, compare it with various baselines, and show the effectiveness of our approach in alleviating catastrophic forgetting. Finally, we conduct ablation studies on different components of our training method to validate our model.

## 5.1 Datasets

We use two datasets: the Integrated Crisis Early Warning System (ICEWS) and the Global Database of Events, Language, and Tone (GDELT). Both datasets contain interactions between geopolitical actors, with daily event dates in the ICEWS dataset and 15-minute intervals in the GDELT dataset. To create benchmarks, we use a one-year period of the ICEWS dataset starting from 01-01-2015 and consider each month as a separate graph snapshot (ICEWS-M). We also use a two-year period from 01-01-2015 to 02-01-2017, dividing it into 13 graph snapshots with 2-month windows (ICEWS-2M). We split the events in each snapshot into train, validation, and test sets with a 50/25/25 percent ratio. For GDELT, we use a 20-day period, dividing it into 3-day windows, and split the data into train/test/validation sets with a 60/20/20 percent ratio. Table 1 includes statistics for each benchmark. We assume that all relations and entities are known at all times during training, and no new entities or relations are presented to the model.

![6_image_0.png](6_image_0.png)

## 5.2 Evaluation Setup

We start by training M over $\mathcal{D}^{train}_1$ and use $\mathcal{D}^{val}_1$ for hyper-parameter tuning. The model $\mathcal{M}_t$ with parameter set θt at time step t is first initialized with parameters from the previous time step θt−1. Then the $\mathcal{M}_t$ parameters are updated by training the model over $\mathcal{D}^{train}_t$.
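Putting the pieces together, one incremental update $\mathcal{M}_{t-1} \rightarrow \mathcal{M}_t$ as described in Sections 4.3 and 5.2 could look roughly like the sketch below; it reuses the `diagonal_fisher` and `ewc_penalty` helpers sketched after Eq. (3), and the optimizer choice, the generic `loss_fn` argument, and the default coefficients (which loosely follow Appendix A: regularization coefficient 10, decay 0.9, learning rate 10^-4 for later steps) are assumptions, not the released training script.

```python
import random
import torch

def continual_step(model, t, train_batches, buffer_batches, past_tasks, loss_fn,
                   lam=10.0, alpha=0.9, lr=1e-4):
    """One update M_{t-1} -> M_t: minimize Eq. (5) on D_t^train together with the
    replay buffer, then store (F_t, theta_t*) for future consolidation penalties."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mixed = list(train_batches) + list(buffer_batches)   # batches from D_t^train ∪ B
    random.shuffle(mixed)
    for batch in mixed:
        opt.zero_grad()
        loss = loss_fn(model, batch) + ewc_penalty(model, past_tasks, lam, alpha, t)
        loss.backward()
        opt.step()

    # consolidate: Fisher estimate on the new task plus a frozen copy of theta_t
    past_tasks[t] = (
        diagonal_fisher(model, list(train_batches), loss_fn),
        {n: p.detach().clone() for n, p in model.named_parameters()},
    )
    return model, past_tasks
```

After this step, the buffer itself is refreshed by dropping a random |B|/t slice and inserting |B|/t events from the new task, chosen with the clustering-based sampler of Section 4.1.1.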
The training step can be simple fine-tuning, or it can be augmented with data points for experience replay or with the temporal EWC loss. In order to assess the forgetting effect, at time t, we report the average $\mathcal{M}_t$ performance over the current and all the previous test sets $\mathcal{D}^{test}_1, \mathcal{D}^{test}_2, \ldots, \mathcal{D}^{test}_t$. Precisely, we report the performance at time t as $P_t = \frac{1}{t}\sum_{j=1}^{t} p_{t,j}$, where $p_{t,j}$ is the performance of $\mathcal{M}_t$ measured by either MRR or Hit@10 over $\mathcal{D}^{test}_j$.

## 5.3 Comparative Study

To evaluate the performance of our incremental training framework, we conduct a comparative analysis with several baseline strategies. These include:

- FT: This strategy fine-tunes the model using the original loss function and the newly added data points.
- ER: This method applies experience replay (Rolnick et al., 2019) with randomly chosen points. It then fine-tunes the model with both newly added events and events stored in the memory buffer.
- EWC (Kirkpatrick et al., 2017): In this strategy, the model is trained with a loss function augmented by an EWC (Elastic Weight Consolidation) loss, as defined in Equation 3.
- TIE (Wu et al., 2021): Drawing from TIE's methodology, we incorporated L2 regularization into our objective function and utilized their implementation of frequency-based experience replay.
- **Full**: Our comprehensive model is trained using a clustering-based experience replay mechanism, supplemented with a decayed EWC loss.

![7_image_0.png](7_image_0.png)

Additionally, we train an upper-bound model, denoted as UPP. During the t-th step of training, this model has access to all training data from all preceding time steps, 1, . . . , t. Detailed information about hyperparameter selection and implementation is provided in Appendix A. The results of this experiment, summarized in Fig. 2, demonstrate that our full training framework outperforms all other incremental training strategies in alleviating catastrophic forgetting. The L2 regularization used with TIE proves to be overly restrictive, leading to an even greater performance drop than that observed
Our results demonstrate that the individual components of our model play a role in enhancing the overall performance, with clustering-based experience replay showing superior performance compared to random experience replay. Additionally, the decayed EWC technique proves to be more effective than the traditional EWC when tasks are assigned equal importance coefficients. For a more in-depth understanding, the detailed results for all datasets used in the ablation study are provided in the Appendix B. ![7_image_1.png](7_image_1.png) In order to demonstrate the effectiveness of the EWC loss with weight decay (as outlined in Equation 4), we are comparing it against three other variations of the EWC loss. We will train the RENET method incrementally, using each variation of the EWC loss separately. The results of this comparison can be seen in Fig. 4, which shows the average MRR score for a model trained incrementally with each loss variation, using the ICEWS-M dataset. The other variations of the EWC loss that we are comparing against include: (i) only using the parameters of the previous task for regularization, and only computing the Fisher Information Matrix for the previous task; (ii) using all previous task parameters for regularization, but giving all tasks the same importance coefficient value λ, and computing the Fisher Information Matrix for each task separately (as outlined in Equation 3); and (iii) a variation similar to the second one, but with the decayed λi values of Equation 4 being assigned to each task randomly. The results in Fig. 4 indicate that using only the parameters of the previous task ![8_image_1.png](8_image_1.png) Figure 4: Comparison of EWC loss variations on model performance. Blue line represents using only the previous task in EWC loss, showing a significant reduction compared to considering all tasks. for regularization performs the worst. Using the same λ value for all tasks has a smoothing effect on the Fisher Information Matrix, and this is why the decayed, permuted λ values perform better. Our proposed loss ultimately outperforms all variations, highlighting the importance of more recent tasks compared to older tasks. As a potential next step, we could investigate learning λ values based on task similarities. ## 5.6 Memory Size And Experience Replay This experiment compares the effectiveness of clustering-based sampling and uniform sampling for experience replay when memory is limited. We use ICEWS-M and run RE-NET with two types of experience replay: (i) random (uniform) sampling (RER) and (ii) clustering-based sampling (CER) using buffer sizes from 2000 to 11000 data points. We evaluated the model performance for M4, M8, and M12 which were trained incrementally with experience replay up to time 4, 8, and 12, respectively. We measure the performance of the model by taking the average MRR score over the first 4, 8, 12 test sets for M4,M8,M12 respectively. Finally, we compare the performance of RER and CER methods by subtracting the RER model performance from the CER model performance, and the results are shown in Fig. 5. The results, shown in Fig. 5, indicate that when memory is very small or very large, there is no significant difference between RER and CER methods; when memory is too small, there is not enough information for the model to have a significant impact on performance, and when memory is too large, important data points are likely to be selected at random. 
How- ![8_image_0.png](8_image_0.png) ## 6 Conclusion We propose a framework for incrementally training a TKG completion model that consolidates the previously learned knowledge while capturing new patterns in the data. Our incremental learning framework employs regularization and experience replay techniques to alleviate the forgetting problem. Our regularization method is based on temporal elastic weight consolidation that assigns higher importance to the parameters of the more recent tasks. Our selective experience replay method uses clustering over the representation of the data points and selects the data points that best represent the underlying data structure. Our experimental results demonstrate the effectiveness of our proposed approach in alleviating the catastrophic forgetting for the event-centric temporal knowledge graphs. This work is the first step towards incremental learning for event-centric knowledge graphs. Potential future work might involve exploring, and taking into consideration the effect of time on task similarities which might differ for various applications. ## 7 Limitations In this section, we examine the limitations of our approach. Even though our training methodology runs faster and uses less memory than retraining, there remains potential for further scalability optimization. One potential avenue for improvement could involve optimizing the estimation of the Fisher Information Matrix. Furthermore, optimizing the parameters related to the incremental training such as buffer size and regularization coefficient is dependent on the entire time steps rather than the current time steps. Devising a time-efficient way for hyperparameter optimization could be extremely beneficial for this task. Additionally, while our full model has demonstrated some mitigation of the problem of catastrophic forgetting, a significant gap remains between the upper performance bound and the performance of our approach. Further research is necessary to bridge this gap and improve overall performance. Finally, our current focus on continual learning is limited to the emergence of new events and does not currently consider the possibility of new relations or entities. This limitation is in part due to the base model (RENET) not being inductive and is a problem that is inherent to the model itself. Future research in the field of continual learning may aim to address this limitation by considering new relations and entities, even in the context of base models that do not support these features. ## References Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. *Advances in neural information processing systems*, 26. E Boschee, J Lautenschlager, S O'Brien, S Shellman, J Starz, and M Ward. 2015. Integrated crisis early warning system (icews) coded event data. *URL:* https://dataverse. harvard. edu/dataverse/icews. Angel Daruna, Mehul Gupta, Mohan Sridharan, and Sonia Chernova. 2021. Continual learning of knowledge graph embeddings. *IEEE Robotics and Automation Letters*, 6:1128–1135. Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. 2018. Hyte: Hyperplane-based temporally aware knowledge graph embedding. In *Proceedings of EMNLP*, pages 2001–2011. Alberto García-Durán, Sebastijan Dumanciˇ c, and Math- ´ ias Niepert. 2018. Learning sequence encoders for temporal knowledge graph completion. *arXiv* preprint arXiv:1809.03202. 
Sankalp Garg, Navodita Sharma, Woojeong Jin, and Xiang Ren. 2020. Temporal attribute prediction via joint modeling of multi-relational structure evolution. arXiv preprint arXiv:2003.03919. Rishab Goel, Seyed Mehran Kazemi, Marcus Brubaker, and Pascal Poupart. 2020. Diachronic embedding for temporal knowledge graph completion. In *Proceedings of AAAI*, volume 34, pages 3988–3995. Zhen Han, Peng Chen, Yunpu Ma, and Volker Tresp. 2020a. Dyernie: Dynamic evolution of riemannian manifold embeddings for temporal knowledge graph completion. *arXiv preprint arXiv:2011.03984*. Zhen Han, Peng Chen, Yunpu Ma, and Volker Tresp. 2020b. xerte: Explainable reasoning on temporal knowledge graphs for forecasting future links. *arXiv* preprint arXiv:2012.15537. Zhen Han, Yunpu Ma, Yuyi Wang, Stephan Günnemann, and Volker Tresp. 2020c. Graph hawkes neural network for forecasting on temporal knowledge graphs. In *Automated Knowledge Base Construction*. Tyler L Hayes, Nathan D Cahill, and Christopher Kanan. 2019. Memory efficient experience replay for streaming learning. In 2019 International Conference on Robotics and Automation (ICRA), pages 9769–9776. IEEE. Zijie Huang, Zheng Li, Haoming Jiang, Tianyu Cao, Hanqing Lu, Bing Yin, Karthik Subbian, Yizhou Sun, and Wei Wang. 2022. Multilingual knowledge graph completion with self-supervised adaptive graph alignment. *arXiv preprint arXiv:2203.14987*. Prachi Jain, Sushant Rathi, Soumen Chakrabarti, et al. 2020. Temporal knowledge base completion: New algorithms and evaluation protocols. *arXiv preprint* arXiv:2005.05035. Woojeong Jin, Meng Qu, Xisen Jin, and Xiang Ren. 2020. Recurrent event network: Autoregressive structure inference over temporal knowledge graphs. In Proceedings of EMNLP, pages 6669–6683. Gjergji Kasneci, Maya Ramanath, Fabian Suchanek, and Gerhard Weikum. 2009. The yago-naga approach to knowledge discovery. *ACM SIGMOD* Record, 37(4):41–47. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521–3526. Lukasz Korycki and Bartosz Krawczyk. 2021. Classincremental experience replay for continual learning under concept drift. In *Proceedings of the IEEE/CVF* Conference on Computer Vision and Pattern Recognition, pages 3649–3658. Julien Leblay and Melisachew Wudage Chekol. 2018. Deriving validity time in knowledge graph. In *Companion Proc. of the The Web Conference*, pages 1771– 1776. Zhizhong Li and Derek Hoiem. 2018. Learning without forgetting. *IEEE Transactions on Pattern Analysis* and Machine Intelligence, 40(12):2935–2947. Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, Huawei Shen, Yuanzhuo Wang, and Xueqi Cheng. 2021. Virtual event, canada 1,2. 2021. temporal knowl-edge graph reasoning based on evolutional representation learning. Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '21), July 11–15, 2021, Virtual Event, Canada, 1:10. Ke Liang, Lingyuan Meng, Meng Liu, Yue Liu, Wenxuan Tu, Siwei Wang, Sihang Zhou, Xinwang Liu, and Fuchun Sun. 2022. Reasoning over different types of knowledge graphs: Static, temporal and multi-modal. arXiv preprint arXiv:2212.05767. Leland McInnes, John Healy, and Steve Astels. 2017. hdbscan: Hierarchical density based clustering. J. Open Source Softw., 2(11):205. 
Johannes Messner, Ralph Abboud, and Ismail Ilkan Ceylan. 2022. Temporal knowledge graph completion using box embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 36:7779–7787. Mehrnoosh Mirtaheri, Sami Abu-El-Haija, Tozammel Hossain, et al. 2019. Tensor-based method fortemporal geopolitical event forecasting. In ICML Workshop on Learning and Reasoning with Graph-Structured Data. Mehrnoosh Mirtaheri, Mohammad Rostami, Xiang Ren, Fred Morstatter, and Aram Galstyan. 2021. One-shot learning for temporal knowledge graphs. In *3rd Conference on Automated Knowledge Base Construction*. David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. 2019. Experience replay for continual learning. *Advances in Neural* Information Processing Systems, 32. Mohammad Rostami and Aram Galstyan. 2023a. Cognitively inspired learning of incremental drifting concepts. In *Proceedings of the International Joint Conference on Artificial Intelligence*. Mohammad Rostami and Aram Galstyan. 2023b. Overcoming concept shift in domain-aware settings through consolidated internal distributions. In *Proceedings of the AAAI Conference on Artificial Intelligence*. Mohammad Rostami, Soheil Kolouri, Praveen Pilly, and James McClelland. 2020. Generative continual concept learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 5545–5552. Ali Sadeghian, Mohammadreza Armandpour, Anthony Colas, and Daisy Zhe Wang. 2021. Chronor: Rotation based temporal knowledge graph embedding. Proceedings of the AAAI Conference on Artificial Intelligence, 35:6471–6479. Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. 2016. Prioritized experience replay. In *IJCLR*. Tong Shen, Fu Zhang, and Jingwei Cheng. 2022. A comprehensive overview of knowledge graph completion. *Knowledge-Based Systems*, page 109597. Haobin Shi, Shike Yang, Kao-Shing Hwang, Jialin Chen, Mengkai Hu, and Hengsheng Zhang. 2018. A sample aggregation approach to experiences replay of dyna-q learning. *IEEE Access*, 6:37173–37184. Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. 2017. Continual learning with deep generative replay. In *NeurIPS*, pages 2990–2999. Hyun Je Song and Seong Bae Park. 2018. Enriching translation-based knowledge graph embeddings through continual learning. *IEEE Access*, 6:60489– 60497. Binh Tang and David S Matteson. 2021. Graph-based continual learning. In International Conference on Learning Representations. Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. 2017. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In International Conference on Machine Learning, pages 3462–3471. PMLR. Junshan Wang, Guojie Song, Yi Wu, and Liang Wang. 2020. Streaming graph neural networks via continual learning. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 1515–1524. Liang Wang, Wei Zhao, Zhuoyu Wei, and Jingming Liu. 2022. Simkgc: Simple contrastive knowledge graph completion with pre-trained language models. arXiv preprint arXiv:2203.02167. Zhihao Wang and Xin Li. 2019. Hybrid-te: Hybrid translation-based temporal knowledge graph embedding. In *IEEE ICTAI*, pages 1446–1451. IEEE. Jiapeng Wu, Meng Cao, Jackie Chi Kit Cheung, and William L Hamilton. 2020. Temp: Temporal message passing for temporal knowledge graph completion. *arXiv preprint arXiv:2010.03526*. Jiapeng Wu, Yishi Xu, Yingxue Zhang, Chen Ma, Mark Coates, and Jackie Chi Kit Cheung. 2021. 
Tie: A framework for embedding-based incremental temporal knowledge graph completion. *SIGIR 2021 -* Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 428–437. Chengjin Xu, Mojtaba Nayyeri, Fouad Alkhoury, Hamed Shariat Yazdi, and Jens Lehmann. 2019. Temporal knowledge graph embedding model based on additive time series decomposition. *arXiv preprint* arXiv:1911.07893. Friedemann Zenke, Wulfram Gerstner, and Surya Ganguli. 2017. The temporal paradox of hebbian learning and homeostatic plasticity. *Curr. opinion in neuro.*, 43:166–176. Fan Zhou and Chengtai Cao. 2021. Overcoming catastrophic forgetting in graph neural networks with experience replay. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 4714–4722. Cunchao Zhu, Muhao Chen, Changjun Fan, Guangquan Cheng, and Yan Zhan. 2020. Learning from history: Modeling temporal knowledge graphs with sequential copy-generation networks. arXiv preprint arXiv:2012.08492. ## A Implementation Detail & Hyperparameters We implemented our models using PyTorch, utilizing the RE-NET implementation from their GitHub repository1as a base. We modified the training pipeline of RE-NET and added experience replay and regularization loss. The RE-NET model utilized a mean pooling layer for the neighborhood encoder, with a dropout of 0.5 and an embedding dimension of 100 for relations and entities. For the model variation that employed only EWC loss, we set the learning rate to 10−3. The regularization coefficient for EWC is set to 10 and the weight decay to 0.9 for all the datasets. For variations that included experience replay buffer or fine-tuning, we began training with a learning rate of 10−3 and decreased it to 10−4for subsequent time steps. The buffer size was set to 3000 for ICEWS-M and GDELT and 5000 for ICEWS-2M, and the batch size was 256 for ICEWS-M and GDELT and 512 for ICEWS-2M. We selected the best model using the validation set at each time step. We ran each experiment once for each set of hyperparameters as the RE-NET performance did not vary significantly between runs. The min cluster size for HDBSCAN is set to 5 for all three datasets. We run all the experiments on machines with NVIDIA GeForce RTX 2080 Ti GPUs. ## B Extended Ablation Study In this section, we present the results of the ablation study conducted in Section 5.4 to evaluate the effectiveness of our method. Fig. 6 illustrates various variations of our model, which were trained incrementally over ICEWS-M, ICEWS-2M and GDELT using the hyperparameters reported in the previous section. The model variations include (1) Random Experience Replay (RER), where points are randomly sampled uniformly; (2) Clustering-based Experience Replay (CER), where points are sampled using the method described in Section 4.1.1; (3) Regular EWC outlined in Equation 3 (EWC); (4) Decayed Elastic Weight Consolidation (DEWC), using the decayed λ value outlined in Equation 4; and (5) DEWC + CER, which represents our full model. The results indicate that clustering-based experience replay outperforms random experience replay, and that the DEWC approach is more effective for the ICEWS datasets compared to GDELT. 1https://github.com/INK-USC/RE-Net.git This may be due to the fact that the data distribution for ICEWS datasets changes more significantly over the course of a year compared to GDELT, which only includes 21 days of data. 
It is also visible from the plots that the GDELT dataset exhibits less forgetting compared to both ICEWS datasets. Finally, the full model (DEWC + CER) always outperforms the other model variations, demonstrating the effectiveness of our methodology. ![13_image_0.png](13_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✓ A4. Have you used AI writing assistants when working on this paper? 2 / 2 In the preparation of this paper, we utilized the capabilities of ChatGPT to enhance the clarity and grammatical correctness of the manuscript. Specifically, Sections 1, 2, and 6 of the paper were polished using this methodology. The process entailed providing ChatGPT with paragraphs crafted by the authors, accompanied by a distinct prompt: "Rewrite the following paragraph, making it grammatically correct and clear." ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5 ✓ B1. Did you cite the creators of artifacts you used? Section 3.3, Section 5 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 5.1, Appendix Section 1. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix Section 1. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix Section 1. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5. C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
feng-lu-2023-multi
Multi-Agent Language Learning: Symbolic Mapping
https://aclanthology.org/2023.findings-acl.491
The study of emergent communication has long been devoted to coaxing neural network agents to learn a language sharing similar properties with human language. In this paper, we try to find a 'natural' way to help agents learn a compositional and symmetric language in complex settings like dialog games. Inspired by the theory that human language originated from simple interactions, we hypothesize that language may evolve from simple tasks to difficult tasks. We propose a curriculum learning method called task transfer, and propose a novel architecture called symbolic mapping. We find that task transfer distinctly helps language learning in difficult tasks, and symbolic mapping promotes the effect. Further, we explore vocabulary expansion, and show that with the help of symbolic mapping, agents can easily learn to use new symbols when the environment becomes more complex. All in all, we find that a process from simplicity to complexity can serve as a natural way to help multi-agent language learning, and the proposed symbolic mapping is effective for this process.
# Multi-Agent Language Learning: Symbolic Mapping Yicheng Feng School of Computer Science Peking University fyc813@pku.ecu.cn ## Abstract The study of emergent communication has long been devoted to coax neural network agents to learn a language sharing similar properties with human language. In this paper, we try to find a 'natural' way to help agents learn a compositional and *symmetric* language in complex settings like dialog games. Inspired by the theory that human language was originated from simple interactions, we hypothesize that language may evolve from simple tasks to difficult tasks. We propose a curriculum learning method called *task transfer*, and propose a novel architecture called *symbolic mapping*. We find that task transfer distinctly helps language learning in difficult tasks, and symbolic mapping promotes the effect. Further, we explore *vocabulary expansion*, and show that with the help of symbolic mapping, agents can easily learn to use new symbols when the environment becomes more complex. All in all, we find that a process from simplicity to complexity can serve as a natural way to help multi-agent language learning, and the proposed symbolic mapping is effective for this process. ## 1 Introduction Agent communication has been a popular research field in the context of multi-agent reinforcement learning (Foerster et al., 2016; Sukhbaatar et al., 2016; Jiang and Lu, 2018; Eccles et al., 2019). Recent work has focused on the emergence of language in cooperative tasks where neural network agents learn a communication protocol from scratch to solve problems together (Lazaridou et al., 2017; Das et al., 2017; Havrylov and Titov, 2017; Kottur et al., 2017; Li and Bowling, 2019; Ren et al., 2020). An array of work has empirically shown that agents can make use of their developed language to successfully complete the tasks. Beyond that, some work probed into the process of language emergence, and tried to figure out † Corresponding Author Zongqing Lu† Peking University BAAI zongqing.lu@pku.edu.cn whether the learned language could share similar properties with human language like *compositionality* (Mordatch and Abbeel, 2018; Resnick et al., 2020; Chaabouni et al., 2020; Choi et al., 2018) and *symmetry* (Graesser et al., 2019; Dubova and Moskvichev, 2020; Dubova et al., 2020). Most of these studies on emergent communication are based on *referential games* (Lewis, 1969) and have shown that compositionality can be induced with suitable environmental pressures. Some have explored the influential factors on the symmetry of protocols among a group of agents. However, tasks in these studies are often simple, and some of these methods are hard to implement in complex settings like dialog games. Kottur et al. (2017) found that in a two-agent multi-round dialog game, language with compositionality does not naturally emerge, unless strict conditions are imposed to agents, such as deprivation of memory. Language emergence only in simple games is obviously not satisfactory. In this paper, we tend to find a new way to make compositional and symmetric language emerge 'naturally' in complex settings. Psychological studies suggest that human language was originated from simple gestures like pointing and pantomiming (Tomasello, 2010). This may explain why referential games are suitable for emergent language studies: these games are similar to 'pointing' in pragmatic process. 
However, from another perspective, the theory may also imply that communication protocols like human language cannot be formed *directly* from complex interactions. Instead, a natural process is probably that a language is first formed in simple tasks, and then applied in more complex tasks, meanwhile it evolves to become more complicated and complete, similar to the concept of *curriculum learning* (Bengio et al., 2009). Hence, we propose a method called *task transfer* to implement this process on emergent communication between neural network agents, and explore whether the process could help language learning in complex settings through empirical experiments. We also design two tasks for the experiment, including a two-player referential game and a multi-round dialog game involving a group of agents. Straightforward task transfer may not work well, since agents, even if using a same language all the time, can have different speaking policies across tasks. So instead of transferring the policies directly, we tend to make agents learn a common function for communication. We propose a novel architecture called *symbolic mapping*, which maps the input to related symbols, as a basic component of communication system of agent. The intuition is that when presented with the same input, we always associate it with the same pile of words and concepts, and this kind of association is consistent across tasks, so can be transferred. Our experiments show that agents with symbolic mapping perform better in task transfer. As we explore the learning process of agents from simple tasks to difficult tasks, we are also curious about how the language evolves if old conventions are not enough in new environments. Language learning should not be accomplished overnight. In a more natural scene, agents should first learn a simple language in a initial environment, and after entering a more complicated environment, they will learn something new and the language develops. We conduct the experiment about *vocabulary expansion*, also in a curriculum learning manner. We find that through vocabulary expansion, agents can accomplish tasks in complex environments where they would fail if they are asked to learn a language directly. This result reveals again that a process from simplicity to complexity is crucial for multi-agent language learning. And we also find that symbolic mapping agents perform better in vocabulary expansion. ## 2 Related Work Cooperative games. Different kinds of cooperative games have been proposed in emergent communication literature. A popular one is referential game (Lewis, 1969), where one agent, often noted as the speaker, has to send a message describing a target (*e.g.*, an image) which it has just observed to the other agent. Then the other agent, often noted as the listener, must select the target from several candidates containing the target and some distractors, after receiving the message (Lazaridou et al., 2017; Havrylov and Titov, 2017). We use a variant of referential game to serve as the simple task in our experiments, similar to the game in Chaabouni et al. (2020) where the listener should reconstruct the target. The difference is that we train the listener model by reinforcement learning, while they use the cross-entropy loss. Our difficult task is inspired by the *Task & Talk* game proposed by Kottur et al. (2017), which is a multi-round dialog game. In the Task & Talk game, there are two agents, one always asks questions while the other answers these questions. 
However, our task involves a group of homogeneous agents who do not play specific roles.Besides, our task has unfixed number of rounds, making the game more realistic while more complex. Other studies (Mordatch and Abbeel, 2018; Graesser et al., 2019; Fitzgerald, 2019) also concern emergent language in a group of agents, and Evtimova et al. (2018) proposed a multi-step referential game. Properties of communication protocols. A mainstream research direction in emergent communication is to find out whether neural network agents can produce communication protocols which exhibit some properties of human language. The most extensively studied property is compositionality. Many studies (Lazaridou et al., 2018; Li and Bowling, 2019; Ren et al., 2020; Resnick et al., 2020) have found that in referential games, once given appropriate environmental pressures, like changing learning environments, communication capacities or agents' model capacities, compositionality could be improved. Kottur et al. (2017) found that compositionality does not emerge naturally in dialog games, which is also verified by our experiments. In the studies where a group of agents learn their languages together, another important property is symmetry. That means an agent community should converge on a shared communication protocol. Dubova et al. (2020) investigated the impact of different social network structures on language symmetry, while Dubova and Moskvichev (2020) explored some other factors including supervision, population size and self-play. In this paper, we focus on improving the two properties through a process from simplicity to complexity. Evolution of communication. Recent studies, inspired by linguistic theories, have brought evolution into the research of emergent communication. Cogswell et al. (2019) investigated the benefit from cultural transmission, while Dagan et al. (2021) integrated both cultural evolution and genetic evolution. Ren et al. (2020) proposed a neural iterated learning algorithm, where agents in a new generation are partially exposed to the language emerged from the previous one. Li and Bowling (2019) let the speaker interact with new listeners periodically, while Graesser et al. (2019) analyzed how the language evolves when different linguistic communities come in contact with each other. Most similar to our approach, Korbak et al. (2019) explored language learning across games of varying complexity by template transfer. Different from their work where a hard task is decomposed into several parts and the transferred agent is the listener, we explore language transfer from simple interactions to different tasks involving more complex communication forms, and the speaker is not reinitialized so that the language evolution is consistent. And we also explore the expansion of vocabulary. Symbolic representation. Previous studies have explored symbolic representation in the deep reinforcement learning (RL) framework (Garnelo et al., 2016; Garnelo and Shanahan, 2019), and found that a compositionally structured representation could help address several shortcomings inherent in the deep RL systems. Symbolic mapping can be seen as a kind of symbolic representation in its function. However, unlike prior work, symbolic mapping is learned and constructed through emergent communication instead of representation learning techniques and is trained end-to-end by RL. That means agents form the symbolic representation when learning to communicate. 
## 3 Method 3.1 Task Transfer Our main hypothesis is that multi agent language learning should benefit from a process from simplicity to complexity, which brings us to curriculum learning. So to prove this, we propose to make agents learn language in a simple task first, and then continue learning in the difficult task, which is a two-stage curriculum. We call this method *task* transfer, since we hope the learned language can be transferred across tasks. We focus on multi-round dialog games as target tasks in this paper. One question is how to choose the starting point for task transfer. From results in psychological studies, language should first be originated from simple interactions like pointing and pantomiming. Then referential game becomes ![2_image_0.png](2_image_0.png) a reasonable option, since referring to objects is similar to these interactions in pragmatic process. We use description game, a variant of referential game, as the starting point in our experiments. ## 3.2 Symbolic Mapping Curriculum learning is usually helpful in machine learning literature, but language transfer across different tasks is not expected to be a natural outcome. Actually, we think straightforward task transfer may not work that well. Curriculum learning helps policy generalization to similar tasks, but what we explore is language learning across different kinds of games where agents need different policies. So instead of directly transferring the policies, we tend to design a fundamental component of communication system in the architecture which can be shared all the time. Therefore, we propose an architecture called *symbolic mapping*, which maps input to related symbols. Before thinking about which symbols to communicate, we first think about which symbols are relevant, and this kind of association is consistent across tasks. The illustration of symbolic mapping is shown in Figure 1. Concretely, it is realized by a linear layer followed by a sigmoid function which maps the input object to a vector with dimension |V |, which is the vocabulary size, and each element of the vector corresponds to the degree of relevance between a symbol and the object. Several symbols are sampled using the Bernoulli distribution for each element of the vector according to the probability given by the output of the sigmoid function, and then stored as the agent's *word bank*. The number of sampled symbols, namely the size of the word bank, is not predefined or limited, so the mapping process is not restricted but learned with freedom. Then we propose an architecture that implements symbolic mapping with LSTM based agents so that agents can communicate making use of it. Now that the number of symbols in the word bank is unfixed, we use a speaking network to estimate whether ![3_image_0.png](3_image_0.png) each relevant symbol is useful at each time step. The speaking network is realized by a 2-layer MLP, and takes the concatenation of each symbol and the hidden state of LSTM as input, then outputs a score for each symbol. Note that all symbols in the word bank get scores by a shared speaking network. Then we pass all scores through a softmax function to get a probability distribution over the word bank. At training time we sample a symbol from it, while at test time we select symbols using argmax. An illustration can be found in Figure 2 (left). ## 3.3 Game Settings Here we describe the two tasks used in our experiments. Discrimination game is the difficult task, a multi-round dialog game, illustrated in Figure 3b. 
The simple task, description game, is a variant of referential game, as depicted in Figure 3c. Discrimination game. Discrimination game involves two datasets, object dataset D and pair dataset P. Each object in D comprises n attributes. For each attribute a ∈ {1, 2*, . . . , n*}, there are m(a) possible values. For a given n and a tuple of value numbers m = (m(1), m(2)*, . . . , m*(n)), we note the corresponding object dataset as Dn,m, and the number of different objects will be |Dn,m| = Qn a=1 m(a). Given an object dataset D, the pair dataset P, as illustrated in Figure 3a, is then constructed as for each pair (oi, oj ) where oi, oj ∈ D, oi = oj or oi and oj have only one different attribute. If the objects are selected from Dn,m, we note the pair dataset as Pn,m. Note that different orders of oi and oj mean different pairs, since oi will be observed by agent i who will speak first in a game episode. Moreover, each pair p = (oi, oj ) ∈ P has a label lp. If oi = oj , then lp = 0; otherwise lp = a where a is the different attribute between oi and oj . In discrimination game there is a group of homogeneous agents which we call a community. Each game episode involves two agents i and j which are randomly sampled from the community. A pair p = (oi, oj ) is sampled from P, and the two agents are presented with object oi and oj respectively. Then they start the dialog. At each time step t, the speaking agent should choose a symbol st from a shared vocabulary V and send it to the other agent. Any agent, after receiving a symbol, can choose to continue or terminate the dialog. If the choice is to continue, then the receiving agent becomes the speaker at the next time step, and the players take turns to speak until the dialog is terminated. Suppose agent j chooses to end the dialog, then it must answer whether oi and oj are the same; if not, then which attribute is the different one. In other words, it must pick the true label lp for the pair (oi, oj ). If the answer is correct, then both agents succeed and get a reward r = 1. Otherwise, they fail and get no reward (r = 0). If the number of dialog rounds reaches the upper limit Tmax, the agents also fail. Description game. Description game proceeds as follows. First, an agent i observes an input object oi from Dn,m. Next, it chooses a fixed-length (n) sequence of symbols from vocabulary V to describe oi, and sends it to listener j. Then j consumes all symbols and outputs oˆi. If oi = ˆoi, the agents succeed. The reward for speaker i is according to the game result, namely r = 1 if they succeed or r = 0 if they fail. The listener j gets rewards according to its reconstruction of each attribute. In our setting, the listener has separate reconstruction models for each attribute, and each of them gets r = 1 if its corresponding attribute is reconstructed correctly and gets r = 0 otherwise. ## 4 Experimental Setting For each attribute a, we represent it as a Nadimension one-hot vector, where Na = m(a). An input object o from Dn,m is then represented by the concatenation of all its attributes. Symbolic mapping map(·) chooses symbols for o and gets the word bank W. The hidden state of the LSTM ht serves as the memory. When speaking at time step t, one-hot encodings of symbols in W are concatenated to the hidden state ht and passed to speaking network gsp(·) to get the probability distribution πsp(·) to produce the symbol. 
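A minimal PyTorch sketch of the mapping and speaking step just described may help. It handles a single (non-batched) object, uses a ReLU between the two MLP layers, and leaves the empty-word-bank corner case unhandled; these are illustrative assumptions rather than details taken from the released code.

```python
import torch
import torch.nn as nn

class SymbolicMapping(nn.Module):
    """Maps an input object to a variable-size word bank of relevant symbols."""
    def __init__(self, obj_dim, vocab_size):
        super().__init__()
        self.linear = nn.Linear(obj_dim, vocab_size)

    def forward(self, obj):                       # obj: (obj_dim,) concatenated one-hots
        probs = torch.sigmoid(self.linear(obj))   # relevance of each symbol to the object
        return torch.bernoulli(probs).nonzero(as_tuple=True)[0]   # sampled symbol ids


class SpeakingNetwork(nn.Module):
    """Scores every symbol in the word bank given the current LSTM hidden state."""
    def __init__(self, vocab_size, hidden_dim=50):
        super().__init__()
        self.vocab_size = vocab_size
        d = vocab_size + hidden_dim               # hidden layer size equals input size
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, bank, h):                   # bank: (|W|,) symbol ids, h: (hidden_dim,)
        onehots = torch.eye(self.vocab_size)[bank]
        scores = self.mlp(torch.cat([onehots, h.expand(len(bank), -1)], dim=-1))
        probs = torch.softmax(scores.squeeze(-1), dim=0)   # distribution over the word bank
        pick = torch.multinomial(probs, 1)[0] if self.training else probs.argmax()
        return bank[pick]                          # symbol to transmit
```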
In discrimination game, we initialize the hidden state h0 as a zero vector, and each time a symbol s is transmitted in the dialog, s is fed into LSTM f(·). Symbols transmitted in the dialog are encoded as one-hot embeddings. To differentiate the speaker of each symbol, we concatenate a flag to ![4_image_0.png](4_image_0.png) 1 3**triangle, green** reconstruction 1 3**triangle, green** reconstruction ![4_image_1.png](4_image_1.png) style the embeddings. If the speaker is the agent itself, the flag is zero; otherwise the flag is one. Note that the agent does not know the *identity* of its partner. Whenever receiving a symbol from the partner at time step t, the concatenation of hidden state ht and input object o is fed to the decision network π j dec(·), realized by a 2-layer MLP followed by a softmax activation, which outputs an action vt. The action means continuing the dialog or an answer. In description game, the speaking process is the same. We fix the message length to n, corresponding to one symbol per attribute. To do this, after the speaker produces a symbol st at time step t, the symbol is fed into its LSTM f(·), and the next symbol st+1 is sampled at time step t + 1. This process proceeds until the fixed message length is reached. The listener is instantiated by n linear layers, which are called reconstruction networks. The message sent by the speaker is represented by the bag-of-words model and consumed by the listener. Then each of its reconstruction network outputs an action to predict the value of each attribute. We use REINFORCE (Williams, 1992) to train each agent. We apply entropy regularization in the loss function to encourage exploration, and use the Adam optimizer with a learning rate of 0.001 in all settings. We run all our experiments three times with different random seeds and present the mean and standard deviation of the results. ## 5 Metrics Compositionality. In our setting, the evaluation criterion of compositionality is whether agents can communicate different attributes independently. Note that compositionalty in natural language has more complicated forms, but we only consider the juxtaposition of independent symbols to represent an overall meaning because we hypothesize that compositionality was rather simple when language was formed in the early stage and thus the proposed form is adequate for our research. Inspired by *positional disentanglement* in Chaabouni et al. (2020), we propose a metric called **referential disentanglement** (*refdis*), which measures whether a specific symbol refers to a specific attribute. We ignore the positional information because we need a language suitable for different kinds of interactions, and if symbols' positions are informative, the language is hard to transfer to dialogs. For each symbol s, we denote a s1 the attribute that has the lowest conditional entropy given s : a s1 = arg minaH(a|s). Similarly, we denote a s2 = arg mina̸=a s 1H(a|s). Then we define *refdis* as: $$r e f d i s=\sum_{s}\left(\frac{\mathcal{H}(a_{2}^{s}|s)}{\mathcal{H}(a_{2}^{s})}-\frac{\mathcal{H}(a_{1}^{s}|s)}{\mathcal{H}(a_{1}^{s})}\right)\cdot k(s),$$ where k(s) is the frequency of occurrence of symbol s. The intuition of *refdis* is that each symbol should only be informative about one attribute. The best case is when one attribute is determined but all other attributes are totally uncertain given any specific symbol, with *refdis* being 1, and in the worst case the *refdis* is 0. *Context-independence* (CI) proposed in Bogin et al. 
(2018) shares similar concept with *refdis*, but *refdis* evaluates compositionality according to symbols while CI focuses on the alignment between symbols and concepts. Symmetry. We evaluate the symmetry of the learned language by computing the JensenShannon divergence between pairs of agents' distributions of different values of attributes, given a specific symbol. For a pair of agents i and j, we define **referential divergence** (*refdiv*) as: refdiv = $$\frac{1}{|V|\cdot n}\sum_{s}\sum_{a}\mathrm{JSD}\left(p(m_{i}^{a}|a,s)||p(m_{j}^{a}|a,s)\right),$$ where p(ma i|*a, s*) is the value distribution of attribute a of agent i given symbol s. The value of refdiv is also between 0 and 1, and a perfectly symmetric communication protocol will get *refdiv* = 0. | Training (%) | Testing (%) | refdis ↑ | refdiv ↓ | | |----------------|---------------|-------------|------------|------------| | LSTM | 47.62(2.54) | 8.42(1.27) | 0.07(0.03) | 0.87(0.11) | | IL | 45.67(0.66) | 13.47(1.27) | 0.06(0.01) | 0.87(0.02) | Table 1: The performance of the agent community playing discrimination game on P3,(3,3,3). LSTM refers to vanilla LSTM-based agents, while IL refers to LSTM agents trained with iterated learning. The first and second column shows the success rate in training set and testing set respectively. Both methods get poor performance. ## 6 Experiments And Results 6.1 Language Learning In Discrimination Game We first examine the performance of neural network agents learning language in discrimination game. We test two methods: vanilla LSTM, which is aimed to show the performance of simple LSTMbased agents without particular training methods, and iterated learning (IL), which is a framework proposed by evolutionary linguists to simulate the language evolution process, and is believed to help compositional languages emerge (Kirby et al., 2014). To apply IL in our setup, we modify the neural iterated learning algorithm (NIL) proposed by Ren et al. (2020). The implementation details of LSTM and IL can be found in appendix. We use dataset P3,(3,3,3), where objects have three attributes and each attribute has three values, and split the dataset into the training set and the testing set to explore the generalization ability of the learned languages to unseen objects, which can also reflect compositionality. We set agent number to 3, and the vocabulary size is set to 9. The upper limit for the number of dialog rounds is Tmax = 3 (each agent has three turns to speak). Table 1 shows the results, where *refdiv* is averaged over all pairs of agents. Both two methods get poor performance. The success rates reveal that agents encounter difficulties in learning a good policy to accomplish the game, and their learned communication protocols are overfitting the training set, which implies that the language is not compositional. The low *refdis* also verifies this. The results of *refdiv* show that the agents do not converge on symmetric communication protocols. These results confirm that the multi-round dialog game is challenging for a good language to emerge. Methods like iterated learning may also not work well in complex settings, though the IL agents achieve relatively higher testing success rate. We conjecture that the difficulty may come from the following reasons. For compositionality, the instability of dialogs may push the agents to convey more information each time (*e.g.*, using one symbol to express both two attributes), ending up in a non-compositional communication protocol. 
For language symmetry, in an agent group, different partners may decode a same message in different ways, and as a result the training will be unstable and hard to converge on a shared communication protocol. Therefore, learning language directly in discrimination game is hard. ## 6.2 From Simple Tasks To Difficult Tasks In this section, we want to verify our hypothesis that language can evolve from simple tasks to difficult tasks, and this process, which we call as *task* transfer, helps language learning in difficult tasks. To do this, we first carry out description game on the agent community, and then train the learned speakers to play discrimination game. And we want to investigate whether our proposed symbolic mapping architecture can indeed promote task transfer, so we use LSTM and IL introduced in the previous section to serve as our baselines. ## 6.2.1 Language Learning In Description Game To conduct a speaker-listener game in an agent community, most studies make each agent both speaker and listener to simulate a human community (Dubova and Moskvichev, 2020; Dubova et al., 2020). However, since neural agents' speaking and listening policies are not tied together like humans, this setting can be seen as multiple speakers speaking to multiple listeners, making the learning unstable. The multi-listener problem is inevitable in dialog games, but can be avoided in referential games to encourage language symmetry. And through task transfer, the emerged symmetry may be maintained, which becomes a natural way to form symmetric language in dialog games. Therefore, instead of giving each agent a listening model to interact with all other agents, we choose to use a *shared listener* to simplify and stabilize the language learning and encourage the emergence of language symmetry. We use dataset D3,(3,3,3), and set agent number in the community to 3 and vocabulary size to 9, the same as in Section 6.1, and we introduce another agent to play the shared listener role. The message | Success Rate (%) | refdis ↑ | refdiv ↓ | | | |--------------------|--------------|--------------|------------|------------| | LSTM | 100.00(0.00) | 0.48(0.07) | 0.06(0.04) | | | IL | 100.00(0.00) | 0.71(0.09) | 0.19(0.03) | | | SM | protocol | 100.00(0.00) | 0.89(0.06) | 0.12(0.03) | | mapping | 0.71(0.20) | 0.04(0.04) | | | length is set to 3. The results are shown in Table 2. SM refers to agents with the proposed architecture in Section 3.2, and for SM agents we calculate the two metrics on both symbolic mapping (which symbols are stored into word bank) and the actual communication protocol (which words are sent to another agent) to explore their relationship. All methods can learn to accomplish the game perfectly, and results of *refdiv* show that agents can converge on symmetric languages more easily now. Besides, the languages that emerge in this game present much higher compositionality compared with language learned in discrimination game, confirming that simple tasks are more suitable for agents to learn language with good properties. Among the three methods, LSTM agents achieve relatively poor compositionality, showing that agents cannot learn compositionality so well without any environmental pressure, in line with conclusions in other studies. IL agents perform much better in terms of compositionalty, so the method can indeed help in this simpler game. 
The relatively poor symmetry may be caused by the supervised learning phase in iterated learning, where each new agent learns language from different agents in the past generation. Languages learned by SM agents present best compositionality. This may be because that the symbolic mapping naturally promotes compositionality, since the association between input and symbols can be easily disentangled. High refdis and low *refdiv* calculated on symbolic mapping also indicate that after language learning, the mapping can encode good language properties. ## 6.2.2 Task Transfer After the agents have successfully learned to accomplish description game, we then train the speakers to play discrimination game. For LSTM agents, | Training(%) Testing(%) | refdis ↑ | refdiv ↓ | | |--------------------------|-----------------------|------------------------------------|-----------------------| | LSTM | 85.80(2.82) | 51.01(10.14) 0.34(0.05) 0.28(0.08) | | | IL | 51.13(4.87) | 15.66(5.82) | 0.05(0.03) 0.75(0.09) | | SM | protocol 94.17(4.98) | 85.35(8.27) | 0.62(0.08) 0.18(0.06) | | mapping | 0.37(0.09) 0.06(0.01) | | | we use the learned model directly in the new task. For IL agents, we use the learned model to perform task transfer in the first generation. For SM agents, we load the learned symbolic mapping to reinitialized models without fixing the symbolic mapping so that it can continue to evolve. The experiment settings are the same as in Section 6.1. The results are shown in Table 3. The performance improvement of LSTM and IL compared with that in Table 1 proves the effectiveness of task transfer. Further, the best performance of SM agents confirms the benefit of our proposed architecture. In different kinds of games, agents need different speaking policies, so LSTM and IL agents, who transfer the speaking policies directly, cannot generalize so well to the new game. IL agents perform relatively bad in task transfer probably because in the last few generations when training in the simple game, they reinforce the successful policy again and again, and they learn the policy for the simple game so firmly that the generalization to a new task becomes more difficult. In contrast, SM agents learn a new speaking policy from scratch in the new game, while the symbolic mapping provides knowledge about the learned language implicitly. The results show that this architecture greatly promotes the effect of task transfer. ## 6.3 Vocabulary Expansion We have empirically shown that agents' language can evolve in task transfer, and in this section we explore a curriculum on another dimension. In natural language, it is common that vocabulary changes continually over time and new words are created endlessly, so we hope language emerged by agents can also develop. Besides, the emergence of language should not be achieved overnight, and a natural process is to form the language step by step. So we explore the curriculum where the number of objects' attributes increases in a same task. And Success Rate (%) refdis ↑ *refdiv* ↓ LSTM 1.56(0.00) 0.00(0.00) 1.00(0.00) SM protocol 2.77(0.60)0.09(0.06) 0.75(0.07) mapping 0.03(0.01) 0.18(0.07) Table 4: The performance of the agent community playing with a shared listener in description game on D3,(4,4,4). through this experiment we also want to find out whether symbolic mapping is still useful when the task is the same but the difficulty changes. We conduct the experiment called *vocabulary* expansion. 
We first carry out description game using LSTM and SM agents on dataset D3,(4,4,4) which contains 64 objects. We set agent number to 3 and vocabulary size to 12. The results are shown in Table 4. It is surprising that in this bigger dataset, both methods fail in the simple task. LSTM agents learn only to speak a single word all the time, while the symbolic mappings learned by SM agents are nearly random. The reason is probably that the chance to succeed in this environment is very small at the beginning (1/64 here), so the reward is too sparse for reinforcement agents. Now we try to make agents learn the language from a simpler start. We first train the agents on a smaller dataset D2,(4,4), and then we introduce a new attribute into the environment and train them on D3,(4,4,4) with four new symbols available. We also try to reinitialize the speaker network and the LSTM network of SM agents, only retaining the symbolic mapping, to investigate the effect of symbolic mapping in vocabulary expansion. The details of the implementation of the experiments can be found in Appendix B. Table 5 and Table 6 show the results of the two experiments. While agents can learn good language in the small environment, they can also achieve good performance in the bigger environment now via vocabulary expansion. This demonstrates that language can evolve to become more complicated as the environment develops, and again confirms | Success Rate (%) refdis ↑ | refdiv ↓ | | | |-----------------------------|-----------------------|-----------------------|-----------------------| | LSTM | 100.00(0.00) | 0.64(0.12) 0.11(0.06) | | | SM | protocol | 100.00(0.00) | 0.84(0.06) 0.12(0.02) | | mapping | 0.59(0.18) 0.05(0.01) | | | our hypothesis that the process from simplicity to complexity is crucial for agents to learn language in complex environments. The results also reveal that SM agents are better at vocabulary expansion, as they can not only express new attributes with the help of new symbols, thus achieving higher success rate, but also use the symbols more compositionally. Note that the reinitialized model performs close to the not reinitialized model, showing that symbolic mapping plays a deterministic role for SM agents in vocabulary expansion. We present an example of the frequencies of different attribute values observed by LSTM and SM agents corresponding to four new symbols in Figure 4. SM agents mainly use the new symbols to express values of the new attribute, showing good compositionality. In contrast, LSTM agents fail to use the new symbols to express accurate meanings after vocabulary expansion. From this perspective, in the curriculum where the task is not changed, the proposed architecture is still helpful. ## 7 Conclusion | Success Rate (%) | refdis ↑ | refdiv ↓ | | | |--------------------|--------------|--------------|------------|------------| | LSTM | 83.85(22.65) | 0.47(0.25) | 0.14(0.05) | | | SM | protocol | 100.00(0.00) | 0.91(0.03) | 0.11(0.02) | | mapping | 0.73(0.10) | 0.05(0.01) | | | | SM-RE | protocol | 100.00(0.00) | 0.91(0.01) | 0.12(0.04) | | mapping | 0.72(0.04) | 0.06(0.02) | | | In this paper, we hypothesize that a process from simplicity to complexity is a natural way to help multi-agent language learning. We propose a curriculum learning method called *task transfer*, which uses referential games as the starting point of language learning. We propose *symbolic mapping* and implemented it in LSTM-based agents. 
This architecture can be applied in different kinds of interactions, so that it can help realize language transfer across different tasks. We also explore another curriculum *vocabulary expansion*. Our results show that learning from simplicity to complexity indeed helps, while symbolic mapping greatly promotes the effect of both task transfer and vocabulary expansion. In summary, we verify our hypothesis ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) Figure 4: The frequencies of attribute values observed by LSTM and SM agents corresponding to four new symbols in the vocabulary expansion experiment. The four colors of bars correspond to four new symbols respectively. The x label is abbreviations of attribute values, and the last four are values of the new attribute. from two aspects, language transfer and language development, and our proposed architecture symbolic mapping shows remarkable effect. ## Limitations In this section, we discuss some limitations of our work. We conduct preliminary experiments to verify the influence of task transfer and vocabulary expansion on language learning in complex forms, and to explore the effectiveness of our proposed architecture, symbolic mapping, and we assume that language was formed through simple interactions in the early stage. Therefore, additional experiments involving more complex games or other input forms like real images have not been studied and are left for future work. Besides, more advanced language properties and syntax are temporarily not studied in this work. As for task transfer, we verify the effectiveness of a two-stage curriculum starting from referential games, while more advanced curriculum are left for future work, where more cognitive science findings should be involved. ## Ethics Statement We believe our work has no potential risks or negative social impacts now. ## Acknowledgements This work was supported in part by NSF China under grant 62250068. The authors would like to thank the anonymous reviewers for their valuable comments. ## References Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In ICML. Ben Bogin, Mor Geva, and Jonathan Berant. 2018. Emergence of communication in an interactive world with consistent speakers. *arXiv preprint* arXiv:1809.00549. Rahma Chaabouni, Eugene Kharitonov, Diane Bouchacourt, Emmanuel Dupoux, and Marco Baroni. 2020. Compositionality and generalization in emergent languages. In ACL. Edward Choi, Angeliki Lazaridou, and Nando de Freitas. 2018. Compositional obverter communication learning from raw visual input. In *ICLR*. Michael Cogswell, Jiasen Lu, Stefan Lee, Devi Parikh, and Dhruv Batra. 2019. Emergence of compositional language with deep generational transmission. arXiv preprint arXiv:1904.09067. Gautier Dagan, Dieuwke Hupkes, and Elia Bruni. 2021. Co-evolution of language and agents in referential games. In *EACL*. Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, and Dhruv Batra. 2017. Learning cooperative visual dialog agents with deep reinforcement learning. In *ICCV*. Marina Dubova and Arseny Moskvichev. 2020. Effects of supervision, population size, and self-play on multi-agent reinforcement learning to communicate. In *ALIFE*. Marina Dubova, Arseny Moskvichev, and Robert Goldstone. 2020. Reinforcement communication learning in different social network structures. *arXiv preprint* arXiv:2007.09820. Tom Eccles, Yoram Bachrach, Guy Lever, Angeliki Lazaridou, and Thore Graepel. 2019. 
Biases for emergent communication in multi-agent reinforcement learning. In *NeurIPS*. Katrina Evtimova, Andrew Drozdov, Douwe Kiela, and Kyunghyun Cho. 2018. Emergent communication in a multi-modal, multi-step referential game. In *ICLR*. Nicole Fitzgerald. 2019. To populate is to regulate. arXiv preprint arXiv:1911.04362. Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. 2016. Learning to communicate with deep multi-agent reinforcement learning. In *NeurIPS*. Marta Garnelo, Kai Arulkumaran, and Murray Shanahan. 2016. Towards deep symbolic reinforcement learning. *arXiv preprint arXiv:1609.05518*. Marta Garnelo and Murray Shanahan. 2019. Reconciling deep learning with symbolic artificial intelligence: representing objects and relations. Current Opinion in Behavioral Sciences. Laura Graesser, Kyunghyun Cho, and Douwe Kiela. 2019. Emergent linguistic phenomena in multi-agent communication games. In *EMNLP*. Serhii Havrylov and Ivan Titov. 2017. Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. In *NeurIPS*. Jiechuan Jiang and Zongqing Lu. 2018. Learning attentional communication for multi-agent cooperation. In *NeurIPS*. Simon Kirby, Tom Griffiths, and Kenny Smith. 2014. Iterated learning and the evolution of language. *Current opinion in neurobiology*, 28:108–114. Tomasz Korbak, Julian Zubek, Lukasz Kucinski, Piotr Milos, and Joanna Raczaszek-Leonardi. 2019. Developmentally motivated emergence of compositional communication via template transfer. *arXiv preprint* arXiv:1910.06079. Satwik Kottur, José Moura, Stefan Lee, and Dhruv Batra. 2017. Natural language does not emerge 'naturally'in multi-agent dialog. In *EMNLP*. Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. 2018. Emergence of linguistic communication from referential games with symbolic and pixel input. In *ICLR*. Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2017. Multi-agent cooperation and the emergence of (natural) language. In *ICLR*. David K. Lewis. 1969. *Convention: A Philosophical* Study. Wiley-Blackwell. Fushan Li and Michael Bowling. 2019. Ease-ofteaching and language structure from emergent communication. In *NeurIPS*. Igor Mordatch and Pieter Abbeel. 2018. Emergence of grounded compositional language in multi-agent populations. In *AAAI*. Yi Ren, Shangmin Guo, Matthieu Labeau, Shay B. Cohen, and Simon Kirby. 2020. Compositional languages emerge in a neural iterated learning model. In *ICLR*. Cinjon Resnick, Abhinav Gupta, Jakob N. Foerster, Andrew M. Dai, and Kyunghyun Cho. 2020. Capacity, bandwidth, and compositionality in emergent language learning. In *AAMAS*. Sainbayar Sukhbaatar, arthur szlam, and Rob Fergus. 2016. Learning multiagent communication with backpropagation. In *NeurIPS*. Michael Tomasello. 2010. *Origins of human communication*. MIT press. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. *Machine learning*, 8(3-4):229–256. ## A Training And Implementation Details In all of our experiments, each agent's LSTM has a hidden state of size 50, the dimensions of the hidden layers of all MLPs are the same as their input size, and the entropy regularization parameter λH is set to 0.05. We train LSTM and SM agents for 10000 epochs in description game and 20000 epochs in discrimination game, unless the agents achieve 100% success rate ahead of time. Our experiments are done using a single GPU GTX 1080 Ti. 
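The per-episode objective shared by all agents can be written compactly. The sketch below follows the REINFORCE-with-entropy-bonus formulation described above and assumes no reward baseline, which the paper does not specify.

```python
import torch

def reinforce_loss(log_probs, entropies, reward, lambda_h=0.05):
    """REINFORCE with an entropy bonus for one game episode.

    log_probs: log-probabilities of the symbols/actions chosen during the episode.
    entropies: entropies of the corresponding action distributions.
    reward:    episode return, r = 1 on success and r = 0 on failure.
    """
    policy_loss = -reward * torch.stack(list(log_probs)).sum()
    entropy_bonus = torch.stack(list(entropies)).sum()
    return policy_loss - lambda_h * entropy_bonus
```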
Most experiments can be done within several hours, while training of IL agents may take more time depending on the number of generations. The LSTM agents are implemented as LSTM networks with hidden states of size 50. When an LSTM agent observes an object, a linear layer maps the input embedding into the agent's initial hidden state h0. When speaking, we map the agent's hidden state into a probability distribution over the whole vocabulary with an MLP and a softmax function, and we randomly sample a symbol from the probability distribution. The generated symbol will then be fed back into the LSTM. The decision network is the same as SM agents. We modify the neural iterated learning algorithm to apply iterated learning in our setup. The IL agents' architecture are the same as LSTM agents. The algorithm runs for several generations, and there are three phases in each generation: learning phase, interacting phase and transmitting phase. At the beginning of each generation, all agents are randomly initialized. When training description game, in the learning phase, each agent in the community learns from data collected in the previous generation with cross-entropy, and the shared listener is pre-trained with REINFORCE by interacting with the pre-trained agent community. In the interacting phase, the agent community plays description game with the shared listener and they are trained the same way as LSTM agents. In the transmitting phase, all objects are fed to each speaking agent, and the corresponding messages generated are stored in a dataset for the next generation. When training discrimination game, in the learning phase, two agents are randomly sampled to learn dialogs with supervised learning from data collected in the previous generation, and the rest agent is pre-trained with REINFORCE by interacting with the pre-trained other two agents. In the interacting phase, the agent community plays discrimination game and they are trained the same way as LSTM agents. In the transmitting phase, two agents are randomly sampled, and the whole training set is fed to them to collect the generated dialogs into a dataset for the next generation. In description game training, we set generation number to 20, pre-train iteration number to 2000 for supervised learning and 3000 for reinforcement learning. We train agents for 2000 epochs in the interacting phase. In discrimination game training, we set generation number to 10, pre-train iteration number to 40000 for supervised learning and 100000 for reinforcement learning. We train agents for 4000 epochs in the interacting phase. We tried a set of hyperparameters and use the ones with the best performance. ## B Implementation Details Of Vocabulary Expansion When training the description game on D2,(4,4), we use zero-padding to object representations and symbol embeddings to encode the new attribute and new symbols, and we set message length to 2. The vocabulary size is set to 8 at first. For LSTM agents, the output number of the speaker network is set to 12, but we mask 4 of them in the first training. When training the three attribute game, the message length is added to 3, and the vocabulary size is expanded to 12. We use the learned model directly for LSTM agents. For SM agents, we reinitialize the agents' symbolic mapping as a linear layer with output dimension dim = 12 and set the weights to be zero. Then we load the parameters of the learned symbolic mapping into it. 
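One way to realize this expansion step is to copy the learned weights into the top-left block of a larger zero-initialised layer, as in the sketch below; the helper name and the treatment of the input dimension (which depends on how the zero-padding described above is implemented) are assumptions.

```python
import torch
import torch.nn as nn

def expand_symbolic_mapping(old_linear, new_obj_dim, new_vocab_size=12):
    """Grow the symbolic-mapping layer for vocabulary expansion.

    Learned weights are copied into the top-left block of a zero-initialised
    larger layer, so the new attribute dimensions initially contribute nothing
    and new symbols start from a relevance of sigmoid(0) = 0.5.
    """
    old_out, old_in = old_linear.weight.shape
    new_linear = nn.Linear(new_obj_dim, new_vocab_size)
    with torch.no_grad():
        new_linear.weight.zero_()
        new_linear.bias.zero_()
        new_linear.weight[:old_out, :old_in] = old_linear.weight
        new_linear.bias[:old_out] = old_linear.bias
    return new_linear
```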
We also try to reinitialize the speaker network and the LSTM network of SM agents, only retaining the symbolic mapping, to investigate the effect of symbolic mapping in vocabulary expansion. ## C Examples Of The Learned Symbolic Mapping And Communication Protocol To show what symbolic mapping learns and how it helps task transfer, we conduct the task transfer experiment on a smaller dataset D2,(3,3) and present here some examples. We refer to the attributes as color and *shape*, and each of them has 3 values (*i.e.*, red, green, blue, triangle, square, circle). The vocabulary size is set to 6, the message length is set to 2 in description game and the upper limit for the number of dialog rounds in discrimination game is Tmax = 2. Examples of the learned symbolic mapping in triangle 3,4 0,3,4 2,3,4 square 5 0,5 2,5 circle 1 0,1 1,2 triangle 3,4 0,3,4 2,3,4 square 5 0,5 2,5 circle 1,4 0,1 1,2 triangle 3,4 0,3,4 2,3,4 square 5 0,5 2,5 circle 1 0,1 1,2 red green blue red green blue red green blue Table 7: The learned symbolic mapping of the three agents in the community when playing with a shared listener in description game on D2,(3,3). red green blue triangle 3,4 0,3,4 2,3,4 square 4,5 0,3,4,5 2,3,5 circle 1,4 0,1,4 1,2 red green blue triangle 3,4 0,3,4 2,4 square 4,5 0,5 2,5 circle 1,4 0,1,4 1,2 red green blue triangle 3,4 0,3,4 2,3,4 square 3,4,5 0,3,5 2,3,5 circle 1,3,4 0,1,3 1,2,3 the agent community is shown in Table 7 and Table 8. They verify that symbolic mapping is not changed greatly across two tasks, so the learned language can be transferred. In both games, all agents associate symbol '0' with attribute 'green', '1' with 'circle', '2' with 'blue' and 5 with 'square', which presents good compositionality and symmetry. Symbol '3' and '4' have relatively ambiguous meanings, which is changed between two tasks, but they mainly cover the attributes 'red' and 'triangle' which cannot be expressed by other symbols. So red green blue triangle 3,4 0,3 2,3 square 5,5 5,0 5,2 circle 1,1 1,0 1,2 red green blue triangle 4,4 0,4 4,2 square 5,5 5,0 5,2 circle 1,1 1,0 1,2 red green blue triangle 4,4 0,4 2,4 square 5,5 5,0 5,2 circle 1,1 1,0 1,2 Table 9: The learned communication protocols of the three agents in the community when playing with a shared listener in description game on D2,(3,3). red green blue triangle 4 0 2 square 4,5 0 2,5 circle 1 0 1,2 red green blue triangle 3 0,4 2,4 square 4,5 0,5 2,5 circle 1,4 0,1 1,2 red green blue triangle 3 0 2 square 3,5 0,5 2,5 circle 1,3 0,1 1,2 agents can form compositional structure in symbolic mapping through emergent communication, and the properties like compositionality and symmetry shown in symbolic mapping can explain why symbolic mapping helps language learning through task transfer and why the learned language properties in simple tasks can be maintained in complex tasks by SM agents. We also present the corresponding communication protocols learned by the agents in the experiment in Table 9 and Table 10. As discrimination game can be terminated at any time, agents may not have chance to express complete information. So in Table 10 we only present all symbols that the agent has spoken in different games after observing a specific object in discrimination game. Compared with Table 7 and Table 8, the communication protocols make use of the compositional words in symbolic mapping faithfully in both games, so the language is indeed transferred across tasks. 
Besides, good compositionality and symmetry exhibited in description game are also transferred, which helps success rate in discrimination game. It may seem odd that the first agent only speaks symbol '0' after observing all green objects in discrimination game. We point out that it results from its game policy: it always expresses 'green' and wait the other agent to communicate about the shape. That may explain why we think speaking policy should not be transferred directly like LSTM agents: policies can be specific to tasks, while only more basic components like symbolic mapping can carry general information about a language. We should also point out that though the third agent associates symbol '3' with all objects in discrimination game in symbolic mapping, it only speaks it when presented with red objects. This may explain why *refdis* can be higher in protocol compared with mapping. ## D Fixed Random Mapping We compare the performance of the learned symbolic mapping with a fixed random mapping to show whether the benefit is provided by the reduction of dimensionality. We stop the gradient passed to the symbolic mapping when training so the mapping is randomly initialized and fixed. Since the symbolic mapping is fixed now, it cannot learn anything in the simple task, so the task transfer cannot be performed, and we only keep the mapping the same in the two tasks. We present the results of agents playing in the description game and the discrimination game respectively in Table 11 and Table 12. We run five seeds for each experiment. Surprisingly, the performance of the fixed random mapping in the simple task is very poor, while the success rate in the difficult task is higher than LSTM agents. From the metrics of the mapping we can find that the random mapping does not show any good properties as the learned symbolic mapping, so it cannot help the policy learning. The | Success Rate (%) | refdis ↑ | refdiv ↓ | | | |--------------------|------------|-------------|------------|------------| | SM-fix | protocol | 47.69(4.92) | 0.14(0.03) | 0.38(0.04) | | mapping | 0.03(0.02) | 0.24(0.11) | | | Table 11: The performance of the agent community with fixed random mapping playing with a shared listener in description game on D3,(3,3,3). | Training(%) Testing(%) refdis ↑ | refdiv ↓ | |-----------------------------------|-----------------------------------| | SM-fix protocol 80.73(2.66) | 50.10(4.72) 0.17(0.02) 0.53(0.06) | | mapping | 0.03(0.02) 0.23(0.12) | Table 12: The performance of the agent community with fixed random mapping playing discrimination game. poor success rate in description game then shows that dimensionality reduction does not ensure the performance improvement, though it really helps in discrimination game. The reason may be that a random mapping cannot make agents communicate about all attributes well, harmful to the performance in description game, but agents can find ways to accomplish discrimination game when the attributes that can be expressed are limited. However, from the metrics and the success rate in the testing set we can find that the learned language in discrimination game is not compositional, and agents cannot learn a symmetric language with fixed random mappings. So the reduction in dimensionality probably merely helps agents to overfit. So we can conclude that the performance of symbolic mapping does not benefit from the dimensionality reduction solely, and the learning process is crucial for language emergence with good properties. 
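For completeness, the refdis and refdiv values reported throughout can be recomputed from logged symbol-attribute co-occurrence statistics along the following lines; the nested-dictionary data layout and the estimation of the marginal entropies from the same logs are assumptions about the evaluation pipeline, not the authors' code.

```python
import numpy as np
from scipy.stats import entropy
from scipy.spatial.distance import jensenshannon

def refdis(cond_counts, symbol_freq):
    """cond_counts[s][a]: np.array of counts of attribute a's values seen with symbol s.
    symbol_freq[s]: relative frequency k(s) of symbol s."""
    attrs = list(next(iter(cond_counts.values())).keys())
    marg = {a: sum(cond_counts[s][a] for s in cond_counts) for a in attrs}
    marg_h = {a: entropy(marg[a] / marg[a].sum(), base=2) for a in attrs}
    score = 0.0
    for s, counts in cond_counts.items():
        cond_h = {a: entropy(c / c.sum(), base=2) for a, c in counts.items()}
        a1 = min(cond_h, key=cond_h.get)                              # a_1^s
        a2 = min((a for a in cond_h if a != a1), key=cond_h.get)      # a_2^s
        score += (cond_h[a2] / marg_h[a2] - cond_h[a1] / marg_h[a1]) * symbol_freq[s]
    return score

def refdiv(dists_i, dists_j, vocab_size, n_attrs):
    """dists_x[s][a]: agent x's value distribution of attribute a given symbol s."""
    total = 0.0
    for s in dists_i:
        for a in dists_i[s]:
            # scipy returns the JS distance; squaring gives the divergence in [0, 1]
            total += jensenshannon(dists_i[s][a], dists_j[s][a], base=2) ** 2
    return total / (vocab_size * n_attrs)
```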
## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations ✓ A2. Did you discuss any potential risks of your work? Section Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank. B1. Did you cite the creators of artifacts you used? Not applicable. We do not use any artifacts. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. We do not use any artifacts. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3.2, Section 3.3 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Our data does not involve these information or content. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. We describe our data in Section 3.3. It is very simple so there is no need for a documentation. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3.3, Section 6 ## C ✓ **Did You Run Computational Experiments?** Left Blank. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3.3, Section 6, Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 6, Section 4 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. We do not use existing packages. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
urbizu-etal-2023-scaling
Scaling Laws for {BERT} in Low-Resource Settings
https://aclanthology.org/2023.findings-acl.492
Large language models are very resource intensive, both financially and environmentally, and require an amount of training data which is simply unobtainable for the majority of NLP practitioners. Previous work has researched the scaling laws of such models, but the optimal ratios of model parameters, dataset size, and computation costs have focused on the large scale. In contrast, we analyze the effect those variables have on the performance of language models in constrained settings, by building three lightweight BERT models (16M/51M/124M parameters) trained over a set of small corpora (5M/25M/125M words). We experiment on four languages of different linguistic characteristics (Basque, Spanish, Swahili and Finnish), and evaluate the models on MLM and several NLU tasks. We conclude that the power laws for parameters, data and compute for low-resource settings differ from the optimal scaling laws previously inferred, and data requirements should be higher. Our insights are consistent across all the languages we study, as well as across the MLM and downstream tasks. Furthermore, we experimentally establish when the cost of using a Transformer-based approach is worth taking, instead of favouring other computationally lighter solutions.
# Scaling Laws For Bert In Low-Resource Settings Gorka Urbizu1Iñaki San Vicente1 **Xabier Saralegi**1 Rodrigo Agerri2 **Aitor Soroa**2 1**Orai NLP Technologies** [g.urbizu|i.sanvicente|x.saralegi]@orai.eus 2**HiTZ Center - Ixa, University of the Basque Country UPV/EHU** [rodrigo.agerri|a.soroa]@ehu.eus ## Abstract Large language models are very resource intensive, both financially and environmentally, and require an amount of training data which is simply unobtainable for the majority of NLP practitioners. Previous work has researched the scaling laws of such models, but optimal ratios of model parameters, dataset size, and computation costs focused on the large scale. In contrast, we analyze the effect those variables have on the performance of language models in constrained settings, by building three lightweight BERT models (16M/51M/124M parameters) trained over a set of small corpora (5M/25M/125M words). We experiment on four languages of different linguistic characteristics (Basque, Spanish, Swahili and Finnish), and evaluate the models on MLM and several NLU tasks. We conclude that the power laws for parameters, data and compute for lowresource settings differ from the optimal scaling laws previously inferred, and data requirements should be higher. Our insights are consistent across all the languages we study, as well as across the MLM and downstream tasks. Furthermore, we experimentally establish when the cost of using a Transformer-based approach is worth taking, instead of favouring other computationally lighter solutions. ## 1 Introduction Pre-trained neural language models based on the Transformer architecture have shown impressive results on many NLP tasks to the point that their use has become standard practice. The capabilities of these models improve as the complexity (in terms of parameters) of their architecture (Wei et al., 2022a) and the size of the corpora on which the pre-training is performed increase (Zhang et al., 2021). For this reason, there is now a tendency to build ever-larger models trained on ever-growing corpora. This trend has resulted in a never-ending increase of the computational requirements to perform model pre-training, but also for the subsequent fine-tuning and inference processes at production time. Moreover, building very large models require huge training corpora, which is only available for a handful of rich-resource languages. Kaplan et al. (2020) and Hoffmann et al. (2022) propose power-law formulas that relate model size, corpora size and computation power, and help find the optimal settings in advance given a fixed budget. However, their analysis is focused on autoregressive models of relatively big sizes, that require large corpora to train. In this paper, we analyze whether the conclusions drawn in these works also apply to discriminative (encoder-only) models in low-resource settings, where both the data size and budget are constrained. We analyze the performance of several combinations of model and data sizes using a simulated low-resource scenario in four linguistically diverse languages from different families (Basque, Spanish, Swahili and Finnish). Our study reveals that the data size and model size power law values provided by Kaplan et al. (2020) and Hoffmann et al. (2022) are not optimal in these scenarios. Instead, our experiments show that data size should be relatively bigger than what those scaling laws estimated when training small models (16-124M parameters) for optimal performance. 
Furthermore, given a fixed computational budget, it is better to train big models instead of 7771 computing more model updates in smaller models. Additionally, we establish the minimally required combinations of compute, model and dataset sizes of Transformer-based approaches that outperform other lighter neural baselines, taking CO2 emissions into consideration. ## 2 Related Work Since the emergence of the attention-based Transformer (Vaswani et al., 2017) architecture and the masking pre-training strategies introduced since BERT (Devlin et al., 2019), different pre-training strategies have been published. But aside from the improvements to the architecture or training procedures, the qualitative improvement in results is mainly the result of increasing the model size, alongside the amount of text corpora used to train them: Chinchilla (70B parameters) (Hoffmann et al., 2022), LaMDA (137B) (Thoppilan et al., 2022), GPT-3 (175B) (Brown et al., 2020), Gopher (280B) (Rae et al., 2021) and PaLM (540B) (Chowdhery et al., 2022). This fast-growing increase in model sizes and data is proven to surface new abilities in larger models, but not present in smaller models (Wei et al., 2022a). The relationship between the size of the pretraining corpus and the performance of the language model in NLU tasks has been addressed in the literature before. The performance improves when the amount of data is increased (Zhang et al., 2021; Hu et al., 2020; Micheli et al., 2020; Raffel et al., 2020), although, at a certain point, the increase in performance slows down when the model size is kept fixed (Inoue et al., 2021; Martin et al., 2020; Micheli et al., 2020; Raffel et al., 2020; Liu et al., 2021). Furthermore, it is more convenient to improve the diversity of training datasets, than to add more text from the same domains (Inoue et al., 2021; Martin et al., 2020; Liu et al., 2021). The correlation between model size and model performance on NLU tasks has also been analyzed. The performance of the model improves when scaling the model size (and FLOPs) (Turc et al., 2019; Raffel et al., 2020; Xia et al., 2022; Clark et al., 2022). However, all these works used very large pre-training datasets. They do not analyze if the increase in performance slows down bottlenecked by the pre-training dataset size and thus, the conclusion of scaling being always beneficial cannot be extended to low-data scenarios. The works of Kaplan et al. (2020) and Hoffmann et al. (2022), whose aim is aligned with this work, empirically study the optimal ratios of the training tokens, model parameters, and computation to train dense language models and infer scaling laws. Kaplan et al. (2020) train models of a size ranging from 768 to 1.5 billion parameters with datasets ranging from 22 million to 23 billion tokens and conclude that LM performance improves smoothly as we increase the model size, dataset size, and amount of computation. They show that all three factors must be scaled up in tandem, to avoid bottleneck issues. Furthermore, they note that larger models are more sample-efficient, and that convergence is inefficient, suggesting that it's better to under-train a bigger model than converge a smaller one on the same computing budget. Hoffmann et al. (2022) find the optimal model size and the number of training tokens for given a fixed FLOPs budget. For this purpose, they draw their own scaling laws, based on the losses of over 400 models, ranging from 70M to 16B parameters, and trained on 5B to 400B tokens. 
They state that model size and the number of training tokens should scale equally, based on three alternative approaches, while Kaplan et al. (2020) extrapolates that every time the model size is increased by 8, the data only needs to be increased by 5. Thus, after concluding that the performance of most of the current large language models is bottlenecked by the undersized corpora, they train Chinchilla. However, the scaling laws of Kaplan et al. (2020) and Hoffmann et al. (2022) are not useful for the low-resource settings we want to focus on. According to Kaplan et al. (2020) we need very small training corpora (e.g. 744K tokens for a BERT*BASE*, which is clearly not enough or optimal). Hoffmann et al. (2022) infers significantly bigger training corpora: e.g. 3M tokens for a BERT*MINI* or 86M tokens for a BERT*BASE*. Current models for low-resource languages are trained with corpora around that range (Joshi et al., 2020): 161M for Irish (Barry et al., 2021), 130M for Luxembourgish (Lothritz et al., 2022), 45M for Galician (Vilares et al., 2021), 16M for Swahili (Martin et al., 2022b) and 4.4M for Quechua (Zevallos et al., 2022). However, increasing pre-training data several orders of magnitude has been proved beneficial for base size models (Liu et al., 2019). Finally, those optimal scaling laws have been deduced from models trained over one epoch, while in low/medium-resource settings models are often trained over several epochs (Martin et al., 2020; Lothritz et al., 2022; Zevallos et al., 2022). Nonetheless, for certain NLU tasks (e.g. NeQA and Quote Repetition) scaling language models is detrimental (Perez and McKenzie, 2022), creating inverse scaling laws. However, Wei et al. (2022b) implies U-shaped scaling laws where even larger models might be able to solve those tasks that comprise a true and a distractor task. ## 3 Experimental Setup We aim to find the optimal combination of modelsize, dataset-size and computing in low-resource environments and assess whether they follow the scaling laws established in the literature. In addition, we seek to find the minimum requirements to overcome computationally lighter neural baselines. Therefore, we carry out experiments for 3 corpus sizes and 3 model sizes, in 4 languages, training a total of 36 different models. ## 3.1 Language Selection We conduct the experiments in four languages from different language families, selected among those that have enough monolingual data to train LMs, as well as enough available evaluation datasets for NLU tasks. Hence, the low-resource setting has been simulated in some cases. Among other languages that fulfil those criteria, we opted for Basque (eu), Spanish (es), Swahili (sw) and Finnish (fi). In addition to being part of disjoint language families, these languages are linguistically diverse with different complexities across morphology, syntax, verb system and vocabulary (Coloma, 2015) (see Appendix A). ## 3.2 Corpora For each language, we created three corpora comprising 125M, 25M and 5M words, respectively. We limited the number of corpora sizes to three in order to control the number of experiments, and thus the computational resources needed. Preliminary experiments showed a big fall in the results when reducing pre-training data to just 1M words, in consistency with Zhang et al. (2021). Since obtaining corpora of about 5M words is achievable by most languages that have annotated datasets (Joshi et al., 2020), we set the lower bound at 5M words. Zhang et al. 
(2021) shows that 10M to 100M words of pretraining data are enough for a language model to acquire the linguistic capacities | L | HH | INT | H | NEP | Param. | | |----------|------|-------|---------|-------|----------|-----| | BERT124M | 12 | 768 | 3072 12 | 86M | 124M | | | BERT51M | 8 | 512 | 2048 | 8 | 25M | 51M | | BERT16M | 4 | 256 | 1024 | 4 | 3M | 16M | Table 1: Model sizes in our experiments. L: layers. HH: hidden dimensions. INT: intermediate layer dimensions. H: attention heads. NEP: non-embedding parameters. of syntax and semantics. Thus, we set the other two corpora sizes at 25M and 125M words, keeping a constant increase rate among them. Regarding the nature of the texts, corpora for Basque and Spanish are a mix of 75% news and 25% text from Wikipedia. We selected the newspaper Berria1for Basque, and El Pais2for Spanish. Corpora for Swahili and Finnish were built by randomly selecting documents (longer than 10 sentences) from the web-crawled *cc100* corpus (Conneau et al., 2020; Wenzek et al., 2020). ## 3.3 Models In a similar fashion to (Turc et al., 2019), we employ three different variants of the BERT model, dubbed BERT124M 3, BERT51M and BERT16M. These models have 12, 8 and 4 layers respectively, also shrinking other parameters proportionally (hidden dimension, number of attention heads, etc.), since model shape does not affect performance significantly (Kaplan et al., 2020). Table 1 shows a detailed view of the parameters in each model. We also increased the vocabulary sizes from the original 30K subword tokens to 50K because it is beneficial for agglutinative languages (Agerri et al., 2020). We trained each model up to 500K steps with a batch of 256 and a sequence length of 512. For more pre-training details see Appendix B. ## 4 Evaluation Settings We evaluate all models intrinsically and extrinsically. For the intrinsic evaluation, we tested the models on masked language modeling; for the extrinsic evaluation, we selected four NLU downstream tasks with available datasets in all the selected languages: Name Entity Recognition and Classification (NERC), Topic Classification (Topic), Sentiment Analysis (SA) and Questionanswering NLI (QNLI). Our selection of tasks in- | EU | ES | SW | FI | | | | | | | | | | | |-------|-------|------|------|-------|-----|------|-------|-----|------|-------|-----|------|-----| | Task | train | dev | test | train | dev | test | train | dev | test | train | dev | test | M. | | NERC | 52K | 13K | 36K | 265K | 53K | 52K | 175K | 25K | 51K | 180K | 14K | 46K | F1 | | Topic | 9K | 2K | 2K | 9K | 1K | 4K | 10K | 3K | 7K | 10K | 10K | 10K | F1 | | SA | 6K | 1K | 1K | 5K | 2K | 1K | 6K | 782 | 1K | 4K | 633 | 1K | F1 | | QNLI | 2K | 230 | 238 | 30K | 4K | 4K | 4K | 624 | 1K | 7K | 1K | 1K | acc | | MLM | 1M | 1M | 1M | 1M | acc | | | | | | | | | cludes one sequence labeling task and three sequence classification tasks including sentiment analysis and QNLI, which are tasks that require a deeper NLU than the shallow linguistic tasks of NERC and Topic Classification (Zhang et al., 2021). Table 2 shows the details of each dataset. ## 4.1 Mlm Masked Language Modeling (MLM) is one of the default pre-training objective functions of BERT. We report both the loss and accuracy of MLM. For this purpose, we created test datasets, from news sources not used for pre-training the models. For Basque we gathered texts from Argia4 news magazine. For Spanish, we opted for texts from the newspaper El Mundo 5. 
For Swahili, as a data source not used in the pre-training, we randomly selected a sub-corpus from the pre-train data for SwahBERT model (Martin et al., 2022a), which is mostly made up of news (%80). For Finnish we opted for a subset of *cc100* not used in the pretraining, due to the lack of document-level news corpora available with an open license. ## 4.2 Nerc Named Entity Recognition and Classification (NERC) is a token classification task. For Basque, we used the *in-domain NERC* dataset from the BasqueGLUE benchmark (Urbizu et al., 2022). For Spanish, we opted for the *Conll2002* dataset (Sang, 2002). For Swahili, we selected *Masakhaner* (Adelani et al., 2021). And lastly, for Finnish, we used FiNER (Ruokolainen et al., 2019). Each dataset has 4 categories, and we use the F-score as the performance metric. ## 4.3 Topic Classification Topic classification is a sequence classification multi-class task. For Basque, we chose the 4www.argia.eus 5www.elmundo.es BHTCv2 dataset including 12 thematic classes (Urbizu et al., 2022). The Spanish counterpart is *MLdoc* (Schwenk and Li, 2018) which has 4 classes. For Swahili, we employed *Swahili: News Classification Dataset* (David, 2020) which has 4 thematic classes. Since a development dataset split was missing, we randomly selected the 20% of the training split to create it. Furthermore, since the fine-tuning dataset is bigger than the smallest of the pre-training dataset (5M), we downsampled this training to 10K examples. And for Finnish, we selected the 10% version of the *Yle corpus*6, which contains 10 thematic classes. Performance is measured with the F-score score. ## 4.4 Sa Sentiment Analysis (SA) is a sequence classification task. For Basque, we employed the dataset BEC2016eu (Urbizu et al., 2022), which has positive, negative and neutral classes. *InterTass2020* (Cumbreras et al., 2016) is the Spanish dataset selected for SA, which also has positive, negative and neutral classes. For Swahili, we utilized the dataset presented by Martin et al. (2022b). The dataset was mapped to polarity annotation following guidelines from the article: joy ([1]) = positive, disgust ([4]) = negative, neutral ([0]) and surprise ([5]) = neutral. Only examples with a single label were mapped. Original train/dev/test splits were maintained. And lastly, for Finnish, we chose *Finnish sentiment*7 which only contains positive and negative labels. We use F-score as the performance metric. ## 4.5 Qnli Question-answering NLI (QNLI) is a sequence classification task. For Basque, we employed QNLIeu (Urbizu et al., 2022). And for Spanish, Swahili and Finnish, we adapted already available 6www.github.com/spyysalo/yle-corpus 7www.huggingface.co/datasets/sepidmnorozy/ Finnish_sentiment ![4_image_1.png](4_image_1.png) ![4_image_3.png](4_image_3.png) ![4_image_0.png](4_image_0.png) ![4_image_2.png](4_image_2.png) conversational Question Answering (QA) datasets, into a sequence-pair binary classification task following the design of QNLI for English (Wang et al., 2019). The QA dataset selected were *SQAD*es (Carrino et al., 2020), *Tydiqa*sw and Tydiqaf i (Clark et al., 2020). Each QNLI dataset has a QuestionSequence pair for entailment8. *Tydiqa* only provides splits for train and development. Thus, we used that development split as our test split, and randomly select some examples from the training set to create our development set. We follow the English QNLI design and use accuracy as the evaluation metric. 
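The paper does not spell out the pairing scheme beyond "following the design of QNLI for English"; in that design, a question is paired with individual sentences from the source passage and labelled by whether the sentence contains the answer. The sketch below shows one such conversion with illustrative field names and a toy SQuAD-es-style example; it is an assumption about the construction, not the authors' released conversion script.

```python
from typing import Dict, List

def qa_to_qnli(question: str, context_sentences: List[str], answer: str) -> List[Dict]:
    """Pair the question with each passage sentence; label 1 (entailment)
    when the sentence contains the gold answer span, 0 otherwise."""
    return [
        {"question": question, "sentence": s, "label": int(answer in s)}
        for s in context_sentences
    ]

# Toy SQuAD-es-style example (illustrative, not taken from the actual dataset):
pairs = qa_to_qnli(
    question="¿En qué año se fundó la universidad?",
    context_sentences=[
        "La universidad fue fundada en 1209.",
        "Su campus se encuentra junto al río.",
    ],
    answer="1209",
)
print([p["label"] for p in pairs])  # [1, 0]
```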
## 4.6 Systems And Baselines For the extrinsic evaluation, we fine-tuned each of the 36 BERT models making use of the Transformers library (Wolf et al., 2020), with a lr of 3e−5, an effective batch size of 32 and training up to 10 epochs9, which are considered default values (Devlin et al., 2019; Mosbach et al., 2020). For each task and language, we report an average of 5 runs. In order to compare the performance of our models to other lighter approaches, we implement a competitive neural baseline based on contextual embeddings using Flair (Akbik et al., 2019). For sequence labeling tasks, embeddings are passed into a BiLSTM-CRF system based on the architecture proposed by (Huang et al., 2015). For text classification tasks, the computed Flair embeddings are fed into a BILSTM to produce a document-level embedding which is then used in a linear layer to make the class prediction. We pre-train our own contextual Flair embeddings using the 125M corpora for each language with the following hyperparameters: Hidden size of 2048, sequence length of 250, a mini-batch size of 100 and 10 epochs. The rest of the training parameters are left in their default setting10. In addition to LSTM-based neural baselines, we also include in the comparison a multilingual BERT 9Selecting the best-performing epoch on the development set. 10Each model took 80h to train on an Nvidia TitanV GPU. model, namely mBERT*base* (Devlin et al., 2019). We assume that this kind of multilingual language models will always be available and that they can somehow be an alternative in some low-resource settings, namely when limited resources refer to the computational capacity for pre-training and to the availability of enough text in the target language. In that line, we perform this comparison only for the Basque and Swahili languages which approximately include a training corpus size in mBERT no larger than those of the corpora used in our experiments (roughly 35M for eu and 11M for sw). ## 5 Results 5.1 Down-Scaling Laws Figures 1-4 11 show the relation between the FLOPs and the MLM loss in the development dataset for the models in the four languages. Different colours stand for different model sizes (16M/51M/125M), and different symbols represent the pre-training data (5M/25M/125M). Each line is formed by 5 checkpoints (every 100K steps). We can appreciate how the lowest loss is achieved by the biggest model with the most pre-training data, as expected from scaling laws, which dictate an improvement in performance when model-size, dataset-size and compute budget are increased simultaneously. Furthermore, the plot shows that increasing pre-training data is more beneficial than increasing the model size or amount of compute in this low-resource scenario; the gap between the loss obtained from increasing pre-training data is much bigger than the improvements obtained when increasing model-size or training steps. The figures also show that models trained with small datasets yield larger MLM losses in development with further training (× curves), which we attribute to overfitting, as the training MLM loss does shrink as training advances. Big models with medium datasets (red and black ⋄ curves) also show the same tendency, although to a lesser extent. 
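The compute values on the x-axis of these curves can be approximated from the training configuration alone. Section 5.3 reports FLOPs computed "following the same method as Hoffmann et al. (2022)"; assuming this is the common C ≈ 6·N·D rule of thumb (N parameters, D training tokens = steps × batch size × sequence length), a minimal sketch with the setup of Section 3.3 (500K steps, batch size 256, sequence length 512) matches the pre-training totals later reported in Table 8:

```python
# Approximate training compute as C ≈ 6 * N * D (forward + backward passes).
def train_flops(n_params: float, steps: int, batch_size: int = 256, seq_len: int = 512) -> float:
    tokens_processed = steps * batch_size * seq_len
    return 6.0 * n_params * tokens_processed

# Total parameter counts from Table 1.
for name, n_params in {"BERT124M": 124e6, "BERT51M": 51e6, "BERT16M": 16e6}.items():
    print(name, f"{train_flops(n_params, steps=500_000):.1e}")
# BERT124M 4.9e+19, BERT51M 2.0e+19, BERT16M 6.3e+18
```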
According to the figures, unless we use a dataset of at least 125M in the case of BERT51M and BERT124M, or a dataset of at least 25M in the case of the smallest BERT16, we should consider applying early stopping to our models to avoid overfitting, which corresponds with the first checkpoint of 100K steps we plotted in most cases. Nevertheless, over-fitting issues during pretraining, do not have a direct impact neither on MLM accuracy nor on downstream tasks (see Appendix D). Thus, for the evaluation of the models regarding MLM accuracy (analysis available in Appendix C) and NLU downstream tasks (Section 5.2), we employed the final checkpoints at 500K steps. Regarding languages, a comparison of the four figures (Figures 1-4) shows that the correlation between MLM loss and combinations of model-size, dataset-size and FLOPs is consistent across languages. MLM accuracies are also consistent across languages (Appendix C). Hoffmann et al. (2022) estimates that 3M, 25M and 86M tokens are optimal to train BERT16M BERT51 and BERT124 respectively, while Kaplan et al. (2020) estimates much lower values. Our results show that the amount of data needed to train an LM optimally is no less than 25M words for BERT16M and 125M for BERT51 and BERT124. We carry out an in-depth comparison of these results with additional data in Appendix H. ## 5.2 Evaluation On Nlu Tasks This section analyses the performance of our models on the NLU tasks listed in section 4, to measure the effect of model-size and pre-training data-size once finetuned on the downstream tasks. The results for the NERC task are shown in Table 3. As expected, there is a clear positive correlation between the evaluation metric and the model and corpora size, but corpora size has slightly more impact on the performance. The results for topic classification at Table 4 follow the same trends, albeit with smoother differences. Table 5 presents the results for SA, again repeating the trends, but with a few outliers. Lastly, for QNLI (see Table 6), we observe there is a general trend of improving results while increasing dataset and model sizes. However, many results present large standard deviations, leading to several outliers12 that stand out from the general trend. The models trained obtain competitive results, as shown by the results for NERC, topic classification and SA for Swahili, which are new SotA for those datasets to the best of our knowledge. Besides, some of the results obtained with the BERT124M and 125M words are comparable with SotA models trained over huge datasets (See appendix F). 12outliers marked in red 11Zoomed in for BERT16M in the Appendix G. | NERCeu | 5M | 25M | 125M | |----------|-----------|-----------|-----------| | BERT16M | 63.90±0.5 | 72.23±0.6 | 74.12±0.3 | | BERT51M | 70.14±0.4 | 79.07±0.4 | 82.98±0.1 | | BERT124M | 73.14±0.5 | 79.09±0.8 | 84.58±0.2 | | NERCes | 5M | 25M | 125M | | BERT16M | 76.57±0.3 | 81.56±0.5 | 81.70±0.5 | | BERT51M | 80.43±0.4 | 85.11±0.8 | 86.34±0.7 | | BERT124M | 81.75±0.4 | 84.99±0.8 | 87.28±0.3 | | NERCsw | 5M | 25M | 125M | | BERT16M | 86.36±0.2 | 88.62±0.2 | 88.63±0.4 | | BERT51M | 88.74±0.2 | 90.68±0.2 | 91.63±0.1 | | BERT124M | 88.93±0.4 | 90.97±0.2 | 92.09±0.2 | | NERCf i | 5M | 25M | 125M | | BERT16M | 76.82±0.3 | 81.48±0.4 | 81.83±0.3 | | BERT51M | 79.73±0.2 | 85.27±0.4 | 87.02±0.2 | | BERT124M | 80.56±0.7 | 85.77±0.3 | 88.99±0.2 | Table 3: Results for the 9 models on NERC (F1) for Basque, Spanish, Swahili and Finnish. 
| Topiceu | 5M | 25M | 125M | |-----------|-----------|-----------|-----------| | BERT16M | 68.00±0.6 | 71.81±0.3 | 72.49±0.4 | | BERT51M | 69.98±0.6 | 73.16±0.6 | 74.87±0.4 | | BERT124M | 71.70±0.7 | 74.61±0.3 | 76.06±0.4 | | Topices | 5M | 25M | 125M | | BERT16M | 94.54±0.3 | 95.86±0.3 | 95.42±0.4 | | BERT51M | 94.89±0.3 | 95.45±0.2 | 95.91±0.4 | | BERT124M | 95.32±0.4 | 95.82±0.3 | 96.27±0.3 | | Topicsw | 5M | 25M | 125M | | BERT16M | 91.64±0.3 | 91.96±0.3 | 92.45±0.2 | | BERT51M | 92.12±0.2 | 92.39±0.1 | 92.88±0.2 | | BERT124M | 91.95±0.4 | 92.69±0.1 | 93.07±0.2 | | Topicf i | 5M | 25M | 125M | | BERT16M | 88.15±0.1 | 88.94±0.2 | 89.16±0.2 | | BERT51M | 88.53±0.3 | 89.40±0.3 | 89.61±0.3 | | BERT124M | 88.41±0.2 | 89.72±0.2 | 90.14±0.1 | ## 5.2.1 Evaluation Vs Baseline Systems Table 7 contains the results for the models trained with the corpora of 125M words, compared to the BiLSTM-CRF Flair baseline (trained with the same 125M corpora) and mBERT (for the languages with a comparable target-language pre-training corpus size). BERT models outperform the Flair neural baseline, but, depending on the evaluation dataset (task and language), the baseline is outperformed only by the BASE124M model or by all three model sizes. Furthermore, for some datasets, even the Table 5: Results for the 9 models on sentiment analysis (F1) for Basque, Spanish, Swahili and Finnish. | QNLIeu | 5M | 25M | 125M | |----------|-----------|-----------|-----------| | BERT16M | 68.19±2.3 | 68.95±2.0 | 71.22±5.0 | | BERT51M | 65.06±0.8 | 76.37±2.5 | 74.18±1.6 | | BERT124M | 67.43±2.7 | 72.66±2.0 | 74.09±1.7 | | QNLIes | 5M | 25M | 125M | | BERT16M | 65.01±0.6 | 70.89±1.4 | 72.72±0.7 | | BERT51M | 67.00±1.8 | 74.11±1.1 | 78.00±0.0 | | BERT124M | 67.39±0.5 | 73.07±1.3 | 81.10±0.7 | | QNLIsw | 5M | 25M | 125M | | BERT16M | 62.80±1.1 | 62.45±1.2 | 63.42±1.0 | | BERT51M | 62.27±1.7 | 63.83±1.4 | 63.87±1.5 | | BERT124M | 64.08±1.1 | 62.68±1.2 | 63.34±1.4 | | QNLIf i | 5M | 25M | 125M | | BERT16M | 51.49±0.9 | 50.96±0.7 | 58.89±3.7 | | BERT51M | 54.28±2.5 | 54.58±3.2 | 57.30±4.3 | | BERT124M | 54.07±2.6 | 59.89±4.5 | 58.56±1.1 | | SAeu | 5M | 25M | 125M | |----------|-----------|-----------|-----------| | BERT16M | 67.80±0.5 | 68.63±1.0 | 67.59±0.5 | | BERT51M | 67.00±1.0 | 68.54±0.5 | 69.40±0.9 | | BERT124M | 67.22±0.7 | 68.79±0.7 | 68.91±0.5 | | SAes | 5M | 25M | 125M | | BERT16M | 37.67±1.5 | 37.62±0.7 | 37.51±1.4 | | BERT51M | 36.05±2.1 | 37.57±0.5 | 39.89±0.0 | | BERT124M | 36.37±2.2 | 37.17±1.1 | 43.27±1.1 | | SAsw | 5M | 25M | 125M | | BERT16M | 71.52±0.6 | 75.56±0.3 | 74.84±0.6 | | BERT51M | 70.49±0.7 | 75.39±1.4 | 77.07±0.0 | | BERT124M | 69.60±1.3 | 75.54±0.9 | 79.04±0.7 | | SAf i | 5M | 25M | 125M | | BERT16M | 89.69±0.2 | 90.96±0.2 | 91.14±0.4 | | BERT51M | 89.61±0.6 | 91.86±0.3 | 92.58±0.0 | | BERT124M | 90.32±0.5 | 91.55±0.3 | 94.38±0.3 | BERT16M models trained with the smallest corpus (5M), not included in this table, outperform the Flair baseline (trained over a corpus of 125M words). Finally, computational costs (analysed in Section 5.3) ought to be a factor to consider and decide if the gain in performance is worth the increase in computational requirements. 
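The downstream numbers above all come from the fine-tuning recipe of Section 4.6 (learning rate 3e-5, effective batch size 32, up to 10 epochs, best epoch selected on the development set). A minimal sketch of that recipe with the Transformers Trainer follows; the checkpoint name and toy data are placeholders, not the authors' actual models or the task datasets of Table 2.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

CKPT = "bert-base-cased"  # placeholder; the paper fine-tunes its own BERT16M/51M/124M models
tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT, num_labels=3)

# Toy stand-in for the task datasets of Table 2.
toy = Dataset.from_dict({"text": ["oso ona", "txarra", "normala", "ona"],
                         "label": [0, 1, 2, 0]})
toy = toy.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)
train, dev = toy, toy

args = TrainingArguments(
    output_dir="ft-out",
    learning_rate=3e-5,               # Section 4.6
    per_device_train_batch_size=32,   # effective batch size of 32
    num_train_epochs=10,              # trained for up to 10 epochs
    evaluation_strategy="epoch",      # evaluate after every epoch ...
    save_strategy="epoch",
    load_best_model_at_end=True,      # ... and keep the best epoch on the dev set
)
Trainer(model=model, args=args, train_dataset=train,
        eval_dataset=dev, tokenizer=tokenizer).train()
```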
The boost in performance when increasing the model size is larger in downstream tasks than in the MLM intrinsic task, particularly when shifting from the smallest BERT16M to the intermediate BERT16M BERT51M BERT124M Flair mBERT ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) eu NERC 74.12±0.3 82.98±0.1 **84.58**±0.2 82.13±0.4 79.39±1.0 Topic 72.49±0.4 74.87±0.4 **76.06**±0.4 67.89±0.3 70.57±0.5 SA 67.59±0.5 **69.40**±0.9 68.91±0.5 68.17±0.3 67.34±0.7 QNLI 71.22±5.0 74.18±1.6 74.09±1.7 48.66±5.2 **78.48**±1.9 es NERC 81.70±0.5 86.34±0.7 **87.28**±0.3 87.09±0.3 — Topic 95.42±0.4 95.91±0.4 **96.27**±0.3 94.08±0.4 — SA 37.51±1.4 39.89±0.0 **43.27**±1.1 34.73±3.0 — QNLI 72.72±0.7 78.00±0.0 **81.10**±0.7 56.42±0.6 — sw NERC 88.63±0.4 91.63±0.1 **92.09**±0.2 92.04±0.1 91.17±0.1 Topic 92.45±0.2 92.88±0.2 **93.07**±0.2 91.83±0.2 91.52±0.2 SA 74.84±0.6 77.07±0.0 **79.04**±0.7 73.60±0.5 69.17±1.2 QNLI 63.42±1.0 **63.87**±1.5 63.34±1.4 52.82±2.1 63.48±1.1 fi NERC 81.83±0.3 87.02±0.2 **88.99**±0.2 84.76±0.4 — Topic 89.16±0.2 89.61±0.3 **90.14**±0.1 86.58±0.7 — SA 91.14±0.4 92.58±0.0 **94.38**±0.3 89.74±0.5 — QNLI **58.89**±3.7 57.30±4.3 58.56±1.1 51.54±1.2 — BERT51M. This indicates that a larger model is better suited for fine-tuning, as the number of trainable parameters is also higher. The results and scaling trends across languages are very consistent. The results and the trends we obtained are also consistent across different tasks, with the exception of QNLI, where results are volatile13 and have many outliers. ## 5.3 Flops And Co2 **Emissions** Table 8 shows the computational costs and CO2 emissions for each system for training, finetuning14 and inference. We calculated the FLOPs following the same method as Hoffmann et al. (2022). For non-transformer baselines, FLOPs were computed following (Zhang et al., 2018). CO2 emissions were estimated with *Machine-Learning Impact calculator*15 (Lacoste et al., 2019). The neural baseline based on Bi-LSTMs is lighter FLOP-wise, on pre-training, fine-tuning and inference time, even against the smallest BERT16M model. Still, the Flair baseline has higher CO2 emissions for finetuning, due to its inability to parallelize from the recurrent nature of the LSTMs. If we revisit the results on MLM and NLU tasks (Sections 5.1 and 5.2) with computational costs in mind, we can say that if we only have a tiny corpus 13with an average standard deviation of 1.8 14Finetuning values are computed for a single run at Spanish topic classification. 15https://mlco2.github.io/impact\#compute (5M token) available, the results obtained with a ![7_image_2.png](7_image_2.png) small model (BERT16M) are on par with its bigger siblings at MLM, Topic, SA and QNLI, but not in NERC, where increasing the model size (up to BERT51M) is needed to get competent results. In a scenario with a small dataset (25M), BERT16M would only obtain comparable results at topic classification and SA, but a BERT51M model obtains results as good as, or even better than BERT124M. Thus, we can opt for the BERT51M and use only half of the compute. However, if we are working with a pre-training dataset bigger than 125M, BERT124M obtains the best results by far, indicating that it is worth investing the compute needed to train such a model. However, here we are comparing models of different sizes, trained for the same amount of steps. What would happen if we want the best model for a fixed computational budget? 
We answer that in Appendix E, where we compare the BERT51M and BERT124M, pre-trained on a comparable amount of computation. In line with (Kaplan et al., 2020), we conclude that it is better to undertrain a BERT124M than overtraining a BERT51M with the same amount of computation. ## 6 Conclusions We present a study of the performance of language models in constrained settings, to analyze if the same scaling laws studied for large-language mod- | Pre-training | Fine-tuning | Inference | | | | | |----------------|---------------|-------------|---------|-------|---------|---------| | Model | FLOPs | CO2eq | FLOPs | CO2eq | FLOPs | CO2eq | | BERT124M | 4.9e+19 | 98 kg | 3.4e+16 | 47 g | 1.3e+11 | 0.18 mg | | BERT51M | 2.0e+19 | 41 kg | 1.4e+16 | 23 g | 5.3e+10 | 0.07 mg | | BERT16M | 6.3e+18 | 13 kg | 4.4e+15 | 11 g | 1.6e+10 | 0.02 mg | | Flair | 1.4e+17 | 4 kg | 5.3e+14 | 334 g | 5.3e+09 | 0.01 mg | els apply to low-resource scenarios. We find out that the estimated values for optimal balance of model size and corpora size do not hold in these scenarios, and that pre-training tokens should be higher than the amount of model parameters. From our experiments, we conclude that it is preferable to train big models on as much data as possible rather than using the computational power to further train smaller models. We see a clear trend where bigger models tend to quickly overfit when pre-trained for many epochs with small corpora. Still, even when they overfit in the pretraining stage, bigger models consistently outperform smaller models in downstream applications which require fine-tuning. The experimental results are consistent among languages. Additionally, we empirically establish when the computational cost of using a Transformer-based approach is worth taking. All the pre-training corpora, models and datasets created in this work are publicly available16. ## Limitations First of all, our study is limited to languages that use the Latin script. Still, the 4 languages are from different language families and are typologically diverse. Secondly, the low-resource scenario is simulated. As mentioned in 3, in order to carry out the experiments the languages involved were required to have enough monolingual data to train LMs, as well as available evaluation datasets for NLU tasks. The source of the pre-training corpora for Swahili and Finnish (*cc100*) is not completely comparable with the corpora used for Basque and Spanish (75% news, 25% Wikipedia), due to the unavailability of a large curated corpus for Swahili, and the lack of big news corpora for Finnish with an open license that allowed us to share freely the pre-training data. 16https://github.com/orai-nlp/low-scaling-laws Our study is limited to 3 language model sizes and 3 pre-training corpora sizes. Including other model sizes like a BERT-Large or a model between 51M and 16M (where there is a big gap in results), and adding more pre-training corpora sizes (let's say 625M and 1M words) were out of the scope of this work. In addition, we use the default hyperparameters that are commonly used for BERT-base (BERT124M) for the pre-training and fine-tuning of the BERT51M and BERT16M models without any hyperparameter tuning. ## Acknowledgements This work has been partially funded by the Basque Government (ICL4LANG project, grant no. KK-2023/00094). 
It has also received funding from the following MCIN/AEI/10.13039/501100011033 projects: (i) DeepKnowledge (PID2021-127777OB-C21) and ERDF A way of making Europe and, (ii) DeepR3 (TED2021-130295B-C31) and European Union NextGeneration EU/PRTR. Rodrigo Agerri currently holds the RYC-2017-23647 fellowship (MCIN/AEI/10.13039/501100011033 and 63.34ESF Investing in your future). We also acknowledge the support of Google's TFRC program. ## References David Ifeoluwa Adelani, Jade Abbott, Graham Neubig, Daniel D'souza, Julia Kreutzer, Constantine Lignos, Chester Palen-Michel, Happy Buzaaba, Shruti Rijhwani, Sebastian Ruder, et al. 2021. Masakhaner: Named entity recognition for african languages. Transactions of the Association for Computational Linguistics, 9:1116–1131. Rodrigo Agerri, Iñaki San Vicente, Jon Ander Campos, Ander Barrena, Xabier Saralegi, Aitor Soroa, and Eneko Agirre. 2020. Give your text representation models some love: the case for basque. In *Proceedings of the 12th Language Resources and Evaluation* Conference, pages 4781–4788. Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In NAACL 2019, 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics, page 724–728. Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de Viñaspre, and Aitor Soroa. 2022. Does corpus quality really matter for low-resource languages? arXiv preprint arXiv:2203.08111. James Barry, Joachim Wagner, Lauren Cassidy, Alan Cowap, Teresa Lynn, Abigail Walsh, Mícheál J Ó Meachair, and Jennifer Foster. 2021. gabert–an irish language model. *arXiv preprint arXiv:2107.12930*. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Casimiro Pio Carrino, Marta R Costa-jussà, and José AR Fonollosa. 2020. Automatic spanish translation of squad dataset for multi-lingual question answering. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5515–5523. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311. Aidan Clark, Diego de Las Casas, Aurelia Guy, Arthur Mensch, Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake Hechtman, Trevor Cai, Sebastian Borgeaud, et al. 2022. Unified scaling laws for routed language models. In International Conference on Machine Learning, pages 4057–4086. PMLR. Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics. Germán Coloma. 2015. Efectos de compensación entre indicadores de la complejidad de los idiomas. Technical report, Serie Documentos de Trabajo. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. 
Association for Computational Linguistics. Miguel Ángel García Cumbreras, Eugenio Martínez Cámara, Julio Villena Román, and Janine García Morera. 2016. Tass 2015–the evolution of the spanish opinion mining systems. Procesamiento del Lenguaje Natural, I(56):33–40. Davis David. 2020. Swahili : News classification dataset. The news version contains both train and test sets. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Asier Gutiérrez Fandiño, Jordi Armengol Estapé, Marc Pàmies, Joan Llop Palao, Joaquin Silveira Ocampo, Casimiro Pio Carrino, Carme Armentano Oller, Carlos Rodriguez Penagos, Aitor Gonzalez Agirre, and Marta Villegas. 2022. Maria: Spanish language models. *Procesamiento del Lenguaje Natural*, 68. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556. Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 1725–1744. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. *arXiv* preprint arXiv:1508.01991. Go Inoue, Bashar Alhafni, Nurpeiis Baimukan, Houda Bouamor, and Nizar Habash. 2021. The interplay of variant, size, and task type in arabic pre-trained language models. In *Proceedings of the Sixth Arabic* Natural Language Processing Workshop, pages 92– 104. Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the nlp world. *arXiv preprint arXiv:2004.09095*. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In *Proceedings of the 56th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 66–75. Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Zeyu Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, and Noah A Smith. 2021. Probing across time: What does roberta know and when? In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 820–842. Cedric Lothritz, Bertrand Lebichot, Kevin Allix, Lisa Veiber, Tegawendé François D Assise Bissyande, Jacques Klein, Andrey Boytsov, Anne Goujon, and Clément Lefebvre. 2022. 
Luxembert: Simple and practical data augmentation in language model pretraining for luxembourgish. In Proceedings of the Language Resources and Evaluation Conference, 2022, pages 5080–5089. Gati Martin, Medard Edmund Mswahili, Young-Seob Jeong, and Jiyoung Woo. 2022a. SwahBERT: Language model of Swahili. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 303–313, Seattle, United States. Association for Computational Linguistics. Gati Martin, Medard Edmund Mswahili, Young-Seob Jeong, and Jeong Young-Seob. 2022b. Swahbert: Language model of swahili. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 303–313. Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte De La Clergerie, Djamé Seddah, and Benoît Sagot. 2020. Camembert: a tasty french language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203–7219. Vincent Micheli, Martin d'Hoffschmidt, and François Fleuret. 2020. On the importance of pre-training data volume for compact language models. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 7853–7858. Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. In *International Conference on Learning Representations*. Ethan Perez and Ian McKenzie. 2022. Inverse scaling prize: Round 1 winners. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Teemu Ruokolainen, Pekka Kauppinen, Miikka Silfverberg, and Krister Lindén. 2019. A finnish news corpus for named entity recognition. *Language Resources and Evaluation*, pages 1–26. Erik F Sang. 2002. Tjong kim (2002)."introduction to the conll-2002 shared task: Language-independent named entity recognition". In *COLING-02: The 6th* Conference on Natural Language Learning. Holger Schwenk and Xian Li. 2018. A corpus for multilingual document classification in eight languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. *arXiv preprint arXiv:2201.08239*. Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962. Gorka Urbizu, Iñaki San Vicente, Xabier Saralegi, Rodrigo Agerri, and Aitor Soroa. 2022. BasqueGLUE: A Natural Language Understanding Benchmark for Basque. In *Proceedings of the Language Resources* and Evaluation Conference, pages 1603–1612, Marseille, France. European Language Resources Association. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Manuel García Vega, Manuel Carlos DíazGaliano, Miguel Ángel García Cumbreras, Flor Miriam Plaza del Arco, Arturo Montejo-Ráez, Salud María Jiménez Zafra, Eugenio Martínez Cámara, César Antonio Aguilar, Marco Antonio Sobrevilla Cabezudo, Luis Chiruzzo, et al. 2020. In *IberLEF@* SEPLN. David Vilares, Marcos Garcia, and Carlos GómezRodríguez. 2021. Bertinho: Galician bert representations. *arXiv preprint arXiv:2103.13799*. Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: Bert for finnish. *arXiv preprint arXiv:1912.07076*. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022a. Emergent abilities of large language models. *Transactions* on Machine Learning Research. Survey Certification. Jason Wei, Yi Tay, and Quoc V Le. 2022b. Inverse scaling can become u-shaped. arXiv preprint arXiv:2211.02011. Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 4003–4012, Marseille, France. European Language Resources Association. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke Zettlemoyer, and Ves Stoyanov. 2022. Training trajectories of language models across scales. Rodolfo Zevallos, John Ortega, William Chen, Richard Castro, Nuria Bel, Cesar Toshio, Renzo Venturas, Hilario Aradiel, and Nelsi Melgarejo. 2022. Introducing qubert: A large monolingual corpus and bert model for southern quechua. In Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 1–13. Minjia Zhang, Wenhan Wang, Xiaodong Liu, Jianfeng Gao, and Yuxiong He. 2018. Navigating with graph representations for fast and scalable decoding of neural language models. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc. Yian Zhang, Alex Warstadt, Xiaocheng Li, and Samuel Bowman. 2021. When do you need billions of words of pretraining data? 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1112–1125. ## A Linguistic Characteristics Of Selected Languages. Table 9 shows the linguistic characteristics of the languages we selected for our experiments, which are Basque (eu), Spanish (es), Swahili (sw) and Finnish (fi). On one hand, we have the language families they belong to, and on the other hand, their complexity in morphology, syntax, verb system and vocabulary according to Coloma (2015). ## B Pre-Training Details We use a cased sub-word vocabulary containing 50K tokens trained with the unigram language model based sub-word segmentation algorithm proposed by Kudo (2018). The vocabularies are learned from each training corpus with a character coverage of 99.95%, to ignore rare characters. Thus, we obtain 3 vocabularies for each language, one for each size of the pre-training corpora (5M, 25M, 125M), which are shared among LMs of different sizes throughout our experiments. We apply several Maskings to the same sentences, to create different examples from the same text17, which is a common practice during the preprocessing of the pre-training data. We applied 10 different random maskings to each text and we employed whole-word masking, where whole words are masked instead of the sub-word units. All models were trained on TPUv3-8 machines using the same set of default hyperparameters (Devlin et al., 2019) in all model sizes: a learning rate 1e−4, β1 = 0.9, β2 = 0.999, L2 weight decay of 0.01, a learning rate warmup of 10K steps, and training the models for a total of 500K steps with a batch size of 256 and a sequence length of 512. This means that we will be doing many epochs (500/2500/12500K) over the same corpus, a common practice, when there is not an enormous pre-training corpus available, for instance, in the original publication of BERT (Devlin et al., 2019), the model is pre-trained for 40 epochs. Although the models are trained for the same amount of steps and batch size, the time needed for training each of them is different, with larger models taking more time. We trained all our models using TPUv3-8 machines; in which we trained BERT124M models to 500K steps in 76 hours, BERT51M models in 32 hours and BERT16M models in 10 hours. 17I love cats: I love [MASK]; I [MASK] cats; [MASK] love cats ## C Mlm Evaluation Table 10 shows the accuracies obtained in the MLM task for each language (Basque, Spanish, Swahili and Finnish), for each model size and corpus size combination. As expected, larger models trained with the biggest corpora yield the best results, and a positive correlation exists between model/corpora size and accuracy in every language we compare. Moreover, results show that in overall, in these low-resource settings, it is preferable to increase the pre-training data over model size. Increasing pre-training data improve results on MLM for all languages, with the exception of the BERT16M model trained with the 125M token dataset. The gain obtained with the smallest BERT16M models as we keep adding training data diminishes, which suggests that performance is reaching a plateau in these models. On the other hand, increasing the model size only helps once we reach a certain amount of pre-training data. 
Increasing model size from BERT16M to BERT51M does not improve MLM accuracy for a 5M corpus, suggesting that 3M nonembedding parameters are enough to absorb the knowledge of such a small dataset. However, increasing model size from BERT51M to BERT124M for the same 5M corpus does improve the overall performance for all languages except for Finnish, This might be due to larger language models being more sample-efficient (Kaplan et al., 2020). Surprisingly, BERT51M outperforms BERT124M consistently across all languages when pre-trained with a 25M corpus; this goes against the intuition of larger models being more sample-efficient. Furthermore, the table shows that a slightly smaller model with more data can outperform a larger model with smaller corpora; every BERT51M model trained with 125M token corpora outperforms BERT124M model trained with 25M tokens. ## D Does Overfitting At Pre-Training Propagate To Finetuning At Downstream Tasks? The loss curves in Section 5.1 suggested that some model-dataset size ratios, which have the least data and more model parameters, have been trained for too long, to a degree in which the loss starts to increase significantly. To analyze if those overfitting issues from when we keep pre-training over and over again on the same training data propagates to the downstream | Language | Language family | Morphology | Syntax | Verb System | Vocabulary | |------------|-------------------------|--------------|----------|---------------|--------------| | Basque | Language isolate | 0.73 | 0.58 | 0.77 | 0.62 | | Spanish | Romance (Indo-European) | 0.64 | 0.42 | 0.62 | 0.69 | | Swahili | Bantu (Niger-Congo) | 0.64 | 0.42 | 0.54 | 0.31 | | Finnish | Uralic | 0.82 | 0.42 | 0.46 | 0.31 | Table 9: The four selected languages and their complexity in morphology, syntax, verb system and vocabulary. MLMeu 5M 25M 125M BERT16M 32.08 38.68 41.56 BERT51M 32.42 44.29 50.07 BERT124M 34.50 43.46 **53.19** MLMes 5M 25M 125M BERT16M 39.09 49.06 48.31 BERT51M 39.24 53.49 59.04 BERT124M 42.45 52.58 **62.00** MLMsw 5M 25M 125M BERT16M 38.03 45.71 44.98 BERT51M 38.08 50.12 55.27 BERT124M 40.43 49.06 **58.82** MLMf i 5M 25M 125M BERT16M 29.43 37.03 37.73 BERT51M 28.18 42.07 45.86 BERT124M 29.30 41.75 **49.88** tasks once finetuned, here we compare the checkpoints of the models at 100K steps, with the last checkpoint of our models at 500K steps. We did this comparison with 2 models, BERT16 and BERT51, both of them trained on the smallest corpora (5M words), which are among those with the most pronounced increasing loss curves. The results for each checkpoint for both models after finetuning on the tasks are shown in Table 12. All in all, the 500K step checkpoints are on a par with the 100K step counterparts, without a clear winner, but definitely equalizing the gap that there is at MLM loss. Thus, since the decline in loss when kept pretraining does not spread to the downstream tasks, we decide to employ the last checkpoints (500K steps), to evaluate and compare the models at Appendix C and Section 5.2, to avoid adding another variable to the evaluation. ## E Optimizing For A Fixed Budget We have shown that increasing the amount of pretraining data and model size improves their performance. Thus, the conclusion regarding data in low-resourced settings is to use all the data there is available, independently of the model size. 
With respect to the model size, however, even if the available corpus size suggests that increasing it improves the performance, there is usually a limited computational budget constraining this. Thus, we need to choose the best model size within our budget. Kaplan et al. (2020) concludes that convergence is inefficient, which means that we obtain optimal performance by training larger models and stopping significantly short of convergence when working with a fixed compute budget. For this purpose, we compare the BERT51 and BERT124, pre-trained on a comparable amount of compute. We employed the pre-training corpus of 125M words, and pre-trained BERT51 for 500K steps, and BERT124 for 200K steps. The results obtained are shown in Table 11. BERT124M, the model with the most parameters outperforms BERT51M in most of the tasks: 4/4 for Spanish, 3/4 for Swahili, 2/4 for Basque and 2/4 for Finnish. These results agree with the claim of Kaplan et al. (2020) that *convergence being inefficient*. However, since there is not a big gap in the results, other factors might be also considered. For example, an undertrained BERT124M model has more room for improvement with further pretraining, while BERT51M is cheaper and faster to finetune and deploy. ## F **Comparison With Sota On Downstream** Tasks In Table 13 we compare the results of our BERT124M trained over the corpora of 125M words, the baselines of Flair, and mBERT with the current state-of-the-art results on each language and task. We improve SotA results for NERC, topic ![14_image_1.png](14_image_1.png) classification18 and sentiment analysis for Swahili, and obtain similar results for topic classification for Finnish. ## G Mlm Loss Plots Zoomed In For ![14_Image_4.Png](14_Image_4.Png) Bert16M Since the loss curve lines for BERT16M for the corpora of 25M and 125M tokens are hard to see in Figures 1, 2, 3 and 4, we zoomed in on them in the figures 5, 6, 7 and 8 respectively. 18Our model is finetuned with a subset of the dataset ![14_image_0.png](14_image_0.png) ![14_image_2.png](14_image_2.png) ![14_image_3.png](14_image_3.png) ## H Takeaways From Scaling Laws For Low-Resource Settings In Tables 14-17 we compare our results to the predictions of previous scaling laws from Kaplan et al. (2020) 19 and Hoffmann et al. (2022) 20. Tables 14-17 show the estimates of Kaplan et al. (2020) do not hold in this low-resource setting, by several magnitudes of order. 
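One common way to estimate training-compute figures of the kind compared in Tables 14-17 is sketched below. This is a rough back-of-the-envelope helper, assuming the widely used C ≈ 6·N·D approximation with N the non-embedding parameter count and D the tokens seen (steps × batch size × sequence length, i.e. 256 × 512 per update from Appendix B); only the ~3M non-embedding figure for BERT16M is stated in the text, so the other two counts in the dictionary are illustrative guesses, not reported values.

```python
# Rough FLOPs bookkeeping for the fixed-budget discussion (sketch, not the
# authors' script). Assumes C ~= 6 * N * D, with N = non-embedding parameters
# and D = tokens seen = steps * batch_size * sequence_length.

BATCH_SIZE = 256                     # Appendix B
SEQ_LEN = 512                        # Appendix B
TOKENS_PER_STEP = BATCH_SIZE * SEQ_LEN   # 131,072 tokens per update

def train_flops(non_embedding_params: float, steps: float) -> float:
    """Approximate training compute C ~= 6 * N * D."""
    return 6.0 * non_embedding_params * steps * TOKENS_PER_STEP

def steps_for_budget(non_embedding_params: float, flops_budget: float) -> float:
    """Invert the approximation: how many updates fit in a FLOPs budget."""
    return flops_budget / (6.0 * non_embedding_params * TOKENS_PER_STEP)

# Non-embedding parameter counts: only the ~3M value for BERT16M appears in
# the text; the other two are assumed for illustration.
NON_EMB = {"BERT124M": 8.56e7, "BERT51M": 2.5e7, "BERT16M": 3.3e6}

for name, n in NON_EMB.items():
    print(f"{name}: 500K steps ~ {train_flops(n, 5e5):.2e} FLOPs")

# Under these assumptions, a budget of 3.37e+19 FLOPs corresponds to roughly
# steps_for_budget(8.56e7, 3.37e19) ~ 5e5 updates for the largest model.
```

The same conversion can be used to read the per-dataset FLOPs recommendations in the takeaways as an approximate number of update steps for a given model size.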
Table 15 shows how 19a = 0.73 and b = 0.27 20a = 0.5 and b = 0.5 | BERT16M | BERT51M | | | | | |-----------|-----------|-----------|-----------|-----------|-----------| | Lang | Task | 100K step | 500K step | 100K step | 500K step | | eu | MLM loss | 4.6724 | 5.0323 | 5.9511 | 8.4830 | | MLM acc | 31.56 | 32.08 | 33.60 | 32.42 | | | NERC | 64.75±0.6 | 63.90±0.5 | 70.13±0.4 | 70.14±0.4 | | | Topic | 68.56±0.4 | 68.00±0.6 | 70.18±0.6 | 69.98±0.6 | | | SA | 67.16±0.5 | 67.80±0.5 | 67.32±0.4 | 67.00±1.0 | | | QNLI | 68.20±1.2 | 68.10±2.3 | 64.64±1.6 | 65.06±0.8 | | | es | MLM loss | 3.9658 | 4.3367 | 5.8144 | 8.2425 | | MLM acc | 40.15 | 39.91 | 39.27 | 38.93 | | | NERC | 75.84±0.4 | 76.57±0.3 | 81.11±0.3 | 80.43±0.4 | | | Topic | 94.69±0.4 | 94.54±0.3 | 94.89±0.3 | 94.89±0.3 | | | SA | 37.26±0.8 | 37.67±1.5 | 35.68±3.9 | 36.05±2.1 | | | QNLI | 65.38±1.0 | 65.01±0.6 | 66.34±1.6 | 67.00±1.8 | | | sw | MLM loss | 4.2363 | 4.5952 | 5.9836 | 8.4719 | | MLM acc | 37.64 | 38.03 | 38.70 | 38.08 | | | NERC | 85.92±0.5 | 86.36±0.2 | 89.05±0.2 | 88.74±0.2 | | | Topic | 91.28±0.3 | 91.64±0.3 | 91.85±0.4 | 92.12±0.2 | | | SA | 71.31±0.5 | 71.52±0.6 | 71.01±0.6 | 70.49±0.7 | | | QNLI | 60.29±0.9 | 62.80±1.1 | 62.88±0.9 | 62.27±1.7 | | | fi | MLM loss | 5.2866 | 5.5322 | 6.5559 | 8.9016 | | MLM acc | 28.76 | 29.43 | 29.52 | 28.18 | | | NERC | 76.47±0.3 | 76.82±0.3 | 80.19±0.2 | 79.73±0.2 | | | Topic | 88.53±0.1 | 88.15±0.1 | 88.45±0.1 | 88.53±0.3 | | | SA | 89.95±0.1 | 89.69±0.2 | 90.50±0.2 | 89.61±0.6 | | | QNLI | 51.51±1.1 | 51.49±0.9 | 52.37±0.7 | 54.28±2.5 | | | BERT124M | Flair | mBERT | SotA | | | | |------------|-----------|-----------|-----------|-----------|-------------------------------------------|-------------------------------------------| | eu | NERC | 84.58±0.2 | 82.13±0.4 | 79.39±1.0 | 86.98±0.4 | roberta-euscrawl-l(Artetxe et al., 2022) | | Topic | 76.06±0.4 | 67.89±0.3 | 70.57±0.5 | 86.51±0.4 | ElhBERTeu (Urbizu et al., 2022) | | | SA | 68.91±0.5 | 68.17±0.3 | 67.34±0.7 | 70.87±0.5 | Berteus (Agerri et al., 2020) | | | QNLI | 74.09±1.7 | 48.66±5.2 | 78.48±1.9 | 76.04±1.5 | ElhBERTeu (Urbizu et al., 2022) | | | es | NERC | 87.28±0.3 | 87.09±0.3 | 87.21±0.4 | 88.51 | roBerta-b(Gutiérrez Fandiño et al., 2022) | | Topic | 96.27±0.3 | 94.08±0.4 | 95.92±0.6 | 97.14 | BETO(Gutiérrez Fandiño et al., 2022) | | | SA | 43.27±1.1 | 34.73±3.0 | 39.21±1.8 | 49.80 | Vega et al. (2020) | | | QNLI | 81.10±0.7 | 56.42±0.6 | 83.92±0.2 | 82.02 | roberta-l(Gutiérrez Fandiño et al., 2022) | | | sw | NERC | 92.09±0.2 | 92.04±0.1 | 91.17±0.1 | 88.60 | swahBERT (Martin et al., 2022b) | | Topic | 93.07±0.2 | 91.83±0.2 | 91.52±0.2 | 90.90 | swahBERT (Martin et al., 2022b) | | | SA | 79.04±0.7 | 73.60±0.5 | 69.17±1.2 | 71.12 | swahBERT (Martin et al., 2022b) | | | QNLI | 63.34±1.4 | 52.82±2.1 | 63.48±1.1 | 64.72±0.4 | swahBERT(our evaluation) | | | fi | NERC | 88.99±0.2 | 84.76±0.4 | 88.87±0.4 | 92.40±0.1 | finBERT(Virtanen et al., 2019) | | Topic | 90.14±0.1 | 86.58±0.7 | 88.16±0.3 | 90.57±0.2 | finBERT(Virtanen et al., 2019) | | | SA | 94.38±0.3 | 89.74±0.5 | 88.05±0.7 | 95.61±0.3 | finBERT(our evaluation) | | | QNLI | 58.56±1.1 | 51.54±1.2 | 52.79±3.0 | 57.18±2.6 | finBERT(our evaluation) | | ![16_image_0.png](16_image_0.png) ![16_image_1.png](16_image_1.png) Table 14: Optimal FLOPs for each dataset size. 
| Optimal model size | | | | |----------------------|----------|----------|-----------| | Datasets | Kaplan | Hoffman | Ours | | 125M | 7.79E+21 | 1.25E+08 | >8.56E+07 | | 25M | 1.00E+20 | 2.50E+07 | >8.56E+07 | | 5M | 1.29E+18 | 5.00E+06 | 8.56E+07 | Table 15: Optimal model size in parameters for each dataset. Table 16: Optimal FLOPs for each model size. | Optimal FLOPs | | | | |-----------------|----------|----------|----------| | Model | Kaplan | Hoffman | Ours | | BERT124M | 7.36E+10 | 7.33E+15 | 3.37E+19 | | BERT51M | 1.40E+10 | 6.49E+14 | 1.00E+19 | | BERT16M | 8.47E+08 | 1.08E+13 | 1.29E+18 | Table 17: Optimal dataset size in tokens for each model size. | Optimal dataset size | | | | |------------------------|----------|----------|-----------| | Model | Kaplan | Hoffman | Ours | | BERT124M | 8.59E+02 | 8.56E+07 | >1.25E+08 | | BERT51M | 5.48E+02 | 2.55E+07 | >1.25E+08 | | BERT16M | 2.57E+02 | 3.29E+06 | 1.25E+08 | Hoffmann estimations for optimal model size are not that far from our results, but Table 17 suggests that the optimal dataset size required to train small language models is around an order of magnitude higger than what the scaling laws of Hoffmann et al. predict. Furthermore, Tables 14 and 16 show that the optimal FLOPs needed for those models are a few orders of magnitude higger than predicted by Hoffmann et al., where models are trained for a single epoch, which is clearly not optimal in lowresource settings. All in all, we can underline the following takeaways for NLP practitioners working on LMs in low-resource settings: - Use as much text as available. - Pretraining for several (100s) epochs is clearly beneficial. - Given a fixed computational budget, it is better to train big models instead of using the computational power to compute more model updates in smaller models. - For a dataset of 125M words: BERT124M, trained for at least 3.37E+19 FLOPs21 is recommended. - For a 25M dataset: BERT124M, trained for 5.39E+18 FLOPs22 is recommended, but a BERT51M model trained for 3.01E+18 FLOPs23, obtains similar results, and it is lighter for finetuning and inference. - For a 5M dataset: BERT124M, trained for 1.35E+18 FLOPs24 or less is recommended, but a BERT51M model trained for 7.01E+17 FLOPs25, obtains similar results, which is lighter for finetuning and inference. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations (after conclusions) ✗ A2. Did you discuss any potential risks of your work? We are not aware of any potential risks of our work. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and introduction (1). ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 Experimental Setup B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3 Experimental Setup and 4 Evaluation Settings B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 5 Results And Appendix ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 3 Experimental Setup, 5 Results and Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3 Experimental Setup and Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 3 Experimental Setup and 5 Results ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3 Experimental Setup, 4.6 Systems and Baselines and 5 Results D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
xu-etal-2023-pre
Pre-trained Language Model with Prompts for Temporal Knowledge Graph Completion
https://aclanthology.org/2023.findings-acl.493
Temporal knowledge graph completion (TKGC) is a crucial task that involves reasoning at known timestamps to complete missing facts, and it has attracted increasing attention in recent years. Most existing methods focus on learning representations based on graph neural networks, but they extract information from timestamps imprecisely and under-utilize the information implied in relations. To address these problems, we propose a novel TKGC model, namely Pre-trained Language Model with Prompts for TKGC (PPT). We convert a series of sampled quadruples into pre-trained language model inputs and convert intervals between timestamps into different prompts, producing coherent sentences with implicit semantic information. We train our model with a masking strategy that turns the TKGC task into a masked token prediction task, which leverages the semantic knowledge in pre-trained language models. Experiments on three benchmark datasets and extensive analysis demonstrate that our model is highly competitive with other models across four metrics, and that it effectively incorporates information from temporal knowledge graphs into language models.
# Pre-Trained Language Model With Prompts For Temporal Knowledge Graph Completion Wenjie Xu1, Ben Liu1, Miao Peng1, Xu Jia1**, Min Peng**1∗ 1School of Computer Science, Wuhan University, China {vingerxu,liuben123,pengmiao,jia_xu,pengm}@whu.edu.cn ## Abstract Temporal Knowledge graph completion (TKGC) is a crucial task that involves reasoning at known timestamps to complete the missing part of facts and has attracted more and more attention in recent years. Most existing methods focus on learning representations based on graph neural networks while inaccurately extracting information from timestamps and insufficiently utilizing the implied information in relations. To address these problems, we propose a novel TKGC model, namely Pre-trained Language Model with Prompts for TKGC (PPT). We convert a series of sampled quadruples into pre-trained language model inputs and convert intervals between timestamps into different prompts to make coherent sentences with implicit semantic information. We train our model with a masking strategy to convert TKGC task into a masked token prediction task, which can leverage the semantic information in pre-trained language models. Experiments on three benchmark datasets and extensive analysis demonstrate that our model has great competitiveness compared to other models with four metrics. Our model can effectively incorporate information from temporal knowledge graphs into the language models. The code of PPT is available at https://github.com/JaySaligia/PPT. ## 1 Introduction In recent years, temporal knowledge graphs(TKGs) have attracted much attention. TKGs describe each fact in quadruple (subject, relation, object, timestamp). Compared to static knowledge graphs, TKGs need to consider the impact of timestamps on events. For example, (*Donald Trump, PresidentOf,* America, 2018) holds while (*Donald Trump, PresidentOf, America, 2022*) does not. There are missing entities or relations in the TKGs, therefore, ∗*Corresponding author ![0_image_0.png](0_image_0.png) temporal knowledge graph completion (TKGC) is one of the most important tasks of temporal knowledge graphs. TKGC task can be divided into two categories: interpolation setting and extrapolation setting(Jin et al., 2020). Interpolation setting aims to predict missing facts in the known timestamps while extrapolation setting attempts to infer future facts in the unknown ones. The latter is much more challenging, and in this work, we focus on the extrapolation setting. Some TKGC methods are developed from static knowledge graph completion (KGC). Such as adding time-aware score functions to KGC models(Jiang et al., 2016; Dasgupta et al., 2018), adding time-aware relational encoders to graph neural networks (Jin et al., 2020; He et al., 2021), adding a new time dimension to the tensor decomposition(Lacroix et al., 2020; Shao et al., 2022), etc. In addition to those KGC-based models, reinforcement learning(Sun et al., 2021), time-aware neural network modeling(Zhu et al., 2021), and other methods are also applied to TKGC. However, the methods mentioned above have some drawbacks, as follows: (1) **Insufficient temporal** information extraction from timestamps. Most existing TKGC methods model timestamps explicitly or implicitly. Explicit modeling utilizes lowdimensional vectors to represent timestamps. However, real-life timestamps are infinite, and explicit modeling cannot learn all timestamp representations and predict events with unseen timestamps. 
Implicit modeling does not represent timestamps directly but takes timestamps to connect multiple knowledge graphs by determining the sequential relationship of these knowledge graphs. This approach often requires modeling the knowledge graph one by one, requires a lot of computation, and timestamps are used only to determine before and after things happen. All the above methods do not give full play to the temporal information of timestamps. (2) **Insufficient information mining** of associations in relations in TKGC. Existing methods often focus on the structural information of the triples or quadruples when modeling KGs without enough consideration of the implied information in relations. This problem is particularly evident in TKGs because some relations contain information with potential temporal hints. As shown in Figure 1, between three different pairs of subject and object entities, after establishing relation *Discuss by telephone*, one day apart, they all establish relation *Consult*. If relation *Discuss by telephone* is established between the same pair of entities, there is a high probability that they will establish relation *Consult* within a short period. Among the entity pairs in ICEWS14, there are 10,887 types of relation pairs, out of which 2,652 exhibit obvious temporal correlations, where one relation in the pair high probably occurred before the other, and they have a stable time interval between them. To address these problems, we propose a novel temporal knowledge graph completion method based on pre-trained language models (PLMs) and prompts. TKGs contain timestamps, and events occurring at different occurrence times have sequential relationships with each other, which are well-suited as inputs to sequence models. Inspired by the successful application of pre-trained language models in static knowledge graph representation(Yao et al., 2019; Kim et al., 2020; Petroni et al., 2019; Lv et al., 2022), we apply PLMs to temporal knowledge graph completion to get implicit semantic information. However, simply splicing entities and relations in the input of PLMs generates incoherent sentences, resulting in the inability to use PLMs(Lv et al., 2022) fully. Therefore, We sample the quadruples in TKGs and construct prompts for each type of timestamps, which we call time-prompts. Then we train PLMs with a masking strategy. In this way, TKGC can be converted into a masked token prediction task. The contributions of our work can be summarized as follows: - To the best of our knowledge, we are the first to convert the temporal knowledge graph completion task into the pre-trained language model masked token prediction task. - We construct prompts for each type of interval between timestamps to better extract semantic information from timestamps. - We apply our experiments on a series of datasets of ICEWS and achieve satisfactory results compared to graph neural network learning methods. ## 2 Related Work 2.1 Static Kg Representation Static KG representation learning can roughly be divided into distance-based models, semantic matching models, graph neural network models, and PLM-based models. Distance-based models represent the relation of two entities into a translation vector, such as TransE(Bordes et al., 2013), RotatE(Sun et al., 2019), TransH(Wang et al., 2014). Semantic matching models measure the plausibility of facts using a triangular norm, such as RESCAL(Nickel et al., 2012), Distmult(Yang et al., 2015), ConvE(Dettmers et al., 2018), ComplEx(Trouillon et al., 2016). 
Graph neural network models use feed-forward or convolutional layers or extend Laplacian matrix to learn the representation of entities and relations, such as GCN(Kipf and Welling, 2017), GAT(Velickovic et al., 2018), RGCN(Schlichtkrull et al., 2018), SAGE(Hamilton et al., 2017). PLM-based models have also been considered for static KG representation in recent years due to the ability to capture context information. KGBERT(Yao et al., 2019) first introduces PLMs into static KG representation. Among PLM-based models, prompt-learning has attracted much attention in recent years and has been shown to be effective on many NLP tasks. LAMA(Petroni et al., 2019) first introduces prompt-based knowledge to PLM. Other prompt-based models based on LAMA are dedicated to improving the presentation of KGs by automatic prompt generation or by adding soft prompts(Shin et al., 2020; Zhong et al., 2021; Liu et al., 2021). PKGC(Lv et al., 2022) proposes a new prompt-learning method to accommodate the open-world assumption based on KG-BERT. ## 2.2 Temporal Kg Representation Temporal KG representation requires consideration of how the facts are modeled in time series. Some temporal KG representation models are extended from static models. TTransE(Jiang et al., 2016) incorporates temporal information into the scoring function based on TransE(Bordes et al., 2013), and HyTE(Dasgupta et al., 2018) extends TransH(Wang et al., 2014) similarly. TNTComplEx(Lacroix et al., 2020) extends ComplEx(Trouillon et al., 2016) inspired by the CP decomposition of order-4 tensor. These expanded approaches consider timestamps as an additional dimension but lack consideration from a temporal perspective. Some models attempt to combine message-passing and temporal information to solve the problem. RE-NET(Jin et al., 2020) applies R-GCN(Schlichtkrull et al., 2018) for message passing for each snapshot and then uses temporal aggregation across multiple snapshots. HIP Network(He et al., 2021) utilizes structural information passing and temporal information passing to model snapshots. RE-GCN(Li et al., 2021) uniformly encodes the evolutional representations representation of entities and relations corresponding to different timestamps to apply to the extrapolational TKGC task. Besides, some models use other strategies to model TKG. CyGNet(Zhu et al., 2021) is divided into a copy mode and a generative mode to predict missing entities using neural networks with a time dictionary. TITer(Sun et al., 2021) introduces reinforcement learning in TKG representation learning. ## 3 Preliminary Temporal Knowledge Graph G is a set of networks of entities and relations that contain timestamps. It can be defined as G = {E, R, T , Q}, where E is the set of entities, R is the set of relations and T is the set of timestamps. Q = {(s, r, o, t)*} ⊆ E × R × E × T* is the quadruple set, where s and o are the subject entity (head entity) and object entity (tail entity), r is the relation between them at timestamp t. Gt = {(*s, r, o*) ⊆ E × R × E} is called the TKG snapshot at t, and it can be taken as a static KG filtering the triple set from G at t. Temporal Knowledge Graph completion (TKGC) is the task of predicting the evolution of future KGs given KGs of a known period. Given a quadruple (s,r, ?, tn) or (?,r, o, tn), we have a set of known facts from TKG snapshots G(ti<tn) to predict the missing object entity or subject entity in the quadruple. 
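Before the probabilistic formalization that follows, the structures just defined (quadruples, snapshots Gt, and the history G<tn) can be summarized in a minimal sketch; the helper names below are hypothetical and not taken from the released code.

```python
# Minimal sketch of the TKG structures defined above (hypothetical names):
# a quadruple (s, r, o, t), the snapshot G_t obtained by filtering Q at t,
# and the history G_{<t_n} used for extrapolation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Quadruple:
    s: int   # subject (head) entity id
    r: int   # relation id
    o: int   # object (tail) entity id
    t: int   # timestamp

def snapshot(quads: List[Quadruple], t: int) -> List[Tuple[int, int, int]]:
    """G_t: the static triples (s, r, o) of all facts holding at timestamp t."""
    return [(q.s, q.r, q.o) for q in quads if q.t == t]

def history(quads: List[Quadruple], t_n: int) -> List[Quadruple]:
    """G_{<t_n}: all known facts strictly before the query timestamp."""
    return [q for q in quads if q.t < t_n]
```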
The probability of prediction of missing the entity o in quadruple (s,r, ?, tn) can be formalized as follows: $$p(o|{\mathcal{G}}_{<t_{n}},\mathbf{s},\mathbf{r},t_{n}).$$ p(o|G<tn,s,r, tn). (1) ## 4 Methodology In this paper, we propose PPT, a novel PLM-based model with prompts to solve TKGC task. The framework of our model is illustrated in Figure 2. We sample quadruples and convert them into pretrained language model inputs. The prediction of [MASK] token is the completed result. ## 4.1 Prompts We design different prompts for entities (entprompts), relations (rel-prompts), and timestamps (time-prompts) to convert quadruples into a form suitable for input to PLMs. We add a soft prompt [EVE] before the beginning of each fact tuple due to introducing soft prompts in the input sentences can improve the expressiveness of the sentences(Han et al., 2021). Ent-prompts. We convert each entity into a special token **[ENT-i]** according to its index. We use a special token instead of the name of an entity because, in the prediction task, we need to predict the whole entity but not a part of it. To maintain the semantic information from entities, we do average pooling of embedding for all words in each entity as the initial embedding of its token. Rel-prompts. For each relation, we convert it into its original phrase. It is worth noting that to maintain the coherence of sentences, we supplemented each relation with the preposition it was missing. For example, we supplement the relation Make a visit to *Make a visit to*. Time-prompts. We convert the time interval between two timestamps into a phrase that can describe the period. We construct a dictionary called interval-dictionary, which maps each period to a prompt. As shown in Figure 3, we convert each ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) ![3_image_2.png](3_image_2.png) timestamp interval into a prompt. Each prompt contains two parts. The front part is a soft prompt indicating the length of time, such as **[SHT]** for a short time (less than 60 days), **[MID]** for a medium time (from 60 days to 365 days), and **[LNG]** for a long time (above 365 days); the back part is a statement describing the interval. During our analysis, we observed that news reports frequently use distinctive time descriptors to indicate time intervals, which inspired us to develop these prompts. ## 4.2 Construction For Graphs Unlike sampling one fact tuple as input to a pretrained language model in some static knowledge graph models(Yao et al., 2019; Lv et al., 2022), we sample multiple fact tuples simultaneously because we need to model the temporal relationship between facts. We take the head/tail entity for each quadruple in the training dataset and randomly sample each quadruple from the entire training dataset while fixing the head/tail entity. The sampled quadruples are then arranged in chronological order. We demonstrate different sampling strategies in A.1. The sampled list is called *Temporal Specialization Graph* (TSG). TSG can be described as a time-ordered sequence *T SG* = [q0, q1 . . . , qn], qi = (si, ri, oi, ti) ∈ Q, ti ≤ ti+1. 
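Before the three TSG variants are defined below, the prompt construction of Section 4.1 can be illustrated with a minimal sketch that turns one sampled fact and its interval into PLM input text. The helper names and the interval-to-phrase mapping are stand-ins for the dictionary in Figure 3, not the released code; only the length thresholds ([SHT] below 60 days, [MID] from 60 to 365 days, [LNG] above 365 days) and the token layout are taken from the text.

```python
# Sketch of Section 4.1's prompt construction (stand-in helpers; the exact
# interval-to-phrase dictionary comes from Figure 3 and is not reproduced here).

def length_token(days: int) -> str:
    """Front part of a time-prompt: soft token for the interval length."""
    if days < 60:
        return "[SHT]"          # short: less than 60 days
    if days <= 365:
        return "[MID]"          # medium: 60 to 365 days
    return "[LNG]"              # long: above 365 days

def interval_phrase(days: int) -> str:
    """Back part of a time-prompt: a rough stand-in for the Figure 3 dictionary."""
    if days == 0:
        return "on the same day"
    if days < 30:
        return f"after {days} days"
    if days <= 365:
        return f"after {days // 30} months"
    return f"after {days // 365} years"

def render(subject: int, relation_phrase: str, obj: int, days_since_prev: int) -> str:
    """One TIG element as a PLM input segment: [EVE] + time-prompt + fact."""
    return (f"[EVE] {length_token(days_since_prev)} {interval_phrase(days_since_prev)}, "
            f"[ENT-{subject}] {relation_phrase} [ENT-{obj}]")

# e.g. render(49, "threaten", 18, 128)
# -> "[EVE] [MID] after 4 months, [ENT-49] threaten [ENT-18]"
```

A whole TIG is rendered by concatenating such segments in chronological order, which is the form fed to the masked language model in Section 4.3.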
We have a total of three types of TSG, which are T SGsobj , T SGrsub and T SGo rel: $$TSG^{s}_{obj}(n)=[q^{s}_{0},q^{s}_{1}\ldots,q^{s}_{n}],$$ $$q^{s}_{i}=(obj,r_{i},o_{i},t_{i})\in\mathcal{Q},t_{i}\leq t_{i+1},$$ $$TSG^{r}_{rel}(n)=[q^{r}_{0},q^{r}_{1}\ldots,q^{r}_{n}],$$ $$q^{r}_{i}=(s_{i},rel,o_{i},t_{i})\in\mathcal{Q},t_{i}\leq t_{i+1},$$ $$TSG^{o}_{sub}(n)=[q^{o}_{0},q^{o}_{1}\ldots,q^{o}_{n}],$$ $$q^{o}_{i}=(s_{i},r_{i},sub,t_{i})\in\mathcal{Q},t_{i}\leq t_{i+1},\tag{2}$$ where we fix object entity $obj$ to sample $TSG^{s}_{obj}$ fix subject entity sub to get sample T SGrsub, and fix relation rel to sample *T SG*o rel. We set a minimum sampling quadruple number Kmin and a maximum sampling quadruple number Kmax. The timestamps in TSGs are independent and cannot reflect the time relationship between events. We convert each TSG to a *Time Interval Graph* (TIG) by calculating the time interval of adjacent quadruples. We take the earliest time in TSG as the initial time τ0 and calculate the time interval between the timestamp in (si, ri, oi, ti) and the timestamp in (si−1, ri−1, oi−1, ti−1) as the new timestamp τi: $$\left\{\begin{array}{l}TIG_{*,*}=\{s,r,o\}=[p_{0}^{*},p_{1}^{*},\ldots,p_{n}^{*}],\\ p_{i}^{*}=(q_{i}^{*}(s,r,o),\tau_{i}),\\ \tau_{o}=0\\ \tau_{i}=t_{i}-t_{i-1}\end{array}\right.\tag{3}$$ where $q_{i}^{*}(s,r,o)$ means keeping the fact triple i (si, ri, oi) of q∗ i . ## 4.3 Training The algorithm of our training strategy can be summarized in Algorithm 1. We do not train each quadruple separately in the training set for each epoch because we believe that independent quadruples cannot provide temporal information in TKGs. We sample each entity multiple times by fixing it at the object entity position and the subject entity position, thus generating TSGs of entities. Similarly, we fix the relations in the quadruples and, for each relation generate the TSGs of the relations. Then we convert all the TSGs to TIGs. For each quadruple in a TIG, we convert the entities, relation, and time interval into PLM inputs with prompts described in Section.4.1. We use a pre-trained language model with the masking strategy (also known as a masked language model, MLM)(Devlin et al., 2019) to train our model. Masked language models aim to predict masked parts based on their surrounding context. When training, we mask 30% of tokens in an input sequence. Algorithm 1: Training for PPT Input: TKG G with training data, maximum number of epochs max_epochs, maximum number of sampling TSG of one entity or one relation B, minimum sampling sequence length Kmin, maximum sampling sequence length Kmax. 
epoch ← 1; S = {}; for b ← 1 to B do foreach ent ∈ E do // sample TSG for entities k = random(Kmin, Kmax); Sample a T SGent with length = k; Convert T SGent into T IGent; add T IGent to S; end foreach rel ∈ R do // sample TIG for relations k = random(Kmin, Kmax); Sample a T SGrel with length = k; Convert T SGrel into T IGrel; add T IGrel to S; end end foreach T IG ∈ S do // convert TIG into input with prompts seq = Prompt(T IG); // train in PLM with masking strategy MASK_TRAIN(P LM(seq)); end epoch ← epoch + 1; until epoch = max_epochs; ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) ## Repeat ; 4.4 Objective Optimization Discussion The distribution of all facts in Eq 1 can be considered as the joint distribution of facts on all timestamps: $$p(\mathcal{G}_{<t_{n}})=p(\mathcal{G}_{t_{0}},\mathcal{G}_{t_{1}},\cdots,\mathcal{G}_{t_{n-1}})$$ $$=\prod_{t}\prod_{(s_{t},r_{t},o_{t})\in\mathcal{G}_{t}}p\left(s_{t},r_{t},o_{t}\mid G_{<t_{n}}\right).\tag{4}$$ It is not realistic to focus on all quadruples in the TKG. When predicting the missing subject entities, we fix the object entities because relations in the neighborhood are of most interest to entities. Further, we simulate the original quadruple distribution by sampling, thus Eq 4 can be approximated as: $$\begin{aligned} p(\mathcal{G}_{<t_{n}}) &\approx \prod_{t} \prod_{(\mathbf{s}, r_{t}, o_{t}) \in \mathcal{G}_{t}} p\left(\mathbf{s}, r_{t}, o_{t} \mid G_{<t_{n}}\right) \\ &\approx \prod_{k=1}^{K} p\left(\mathbf{s}, r_{k}, o_{k} \mid G_{<t_{n}}\right) \\ &\approx \prod_{k=1}^{K} p\left(TSG_{\mathbf{s}}^{s}[k] \mid G_{<t_{n}}\right) \\ &\approx \prod_{k=1}^{K} p\left(TIG_{\mathbf{s}}^{s}[k] \mid G_{<t_{n}}\right), \end{aligned}$$ where $K$ is the number of sampling. We calculate the generation probability of the quadruples by the pre-trained language model's ability to predict unknown words. We use seqk to present the converted inputs with prompts of T IGs s[k]: seq = Prompt(*T IG*s s[k]). (6) For example, as illustrated in Figure 2, here are two quadruples in TSG:(49, 62, 12, 2) in timestamp t1 and (49, 38, 18, 130) in timestamp tn−1, the time interval between them is 128 days, ∆1 = tn−1 − t1. Then the quadruple (49, 38, 18, 128) in TIG can be converted into an input sentence with prompts: **[EVE] [MID]** *After four months*, **[ENT49]** *Threaten* **[ENT-18]**. The formalization of prediction can be defined as follows: $$\begin{array}{l}\prod_{k=1}^{K}p\left(TIG_{\mathrm{s}}^{s}[k]\mid G_{<t_{n}}\right)\\ \par K\\ =\prod_{k=1}^{K}p(PLM(seq_{k})),\end{array}$$ where *P LM*(·) means inputting a sequence into the pre-trained language model. Combining Eq 1 and Eq 7, we convert the TKGC task into an MLM prediction task: $$\begin{array}{l}{{p(o|{\mathcal G}_{<t_{n}},\mathbf s,\mathbf r,t_{n})}}\\ {{\approx\prod_{k=1}^{K}p(P L M(s e q_{k}))}}\\ {{\cdot p(P L M(\mathbf{Prompt}(\mathbf s,\mathbf r,t_{n}))),}}\end{array}$$ where Prompt(·) means converting entities, relations, and timestamps into input sequences for PLM. By Eq 8, the original knowledge-completion task can be equated to the pre-trained language model masked token prediction task. ## 5 Experiments 5.1 Experimental Setup Datasets. Intergrated Crisis Early Warning System (ICEWS)(Boschee et al., 2015) is a repository that contains coded interactions between sociopolitical actors with timestamps. 
We utilize three TKG datasets based on ICEWS named ICEWS0515((García-Durán et al., 2018); from 2005 to 2015), ICEWS14((García-Durán et al., 2018); from 1/1/2014 to 12/31/2014) and ICEWS18((Boschee et al., 2015); from 1/1/2018 to 10/31/2018) to perform evaluation. Statistics of these datasets are listed in Table 1. Evaluation Protocals. Following prior work(Li et al., 2021), we split each dataset into a training set, validation set, and testing set in chronological order following extrapolation setting. Thus, we guarantee that timestamps of train < timestamps of valid < *timestamps of test*. Some methods(Jin et al., 2020; Zhu et al., 2021; Wu et al., 2020) apply filter schema to evaluate the results by removing all the valid facts that appear in the training, validation, or test sets from the ranking list. Since TKGs are evolving in time, the same event can occur at different times(Li et al., 2021). Therefore, we apply raw schema to evaluate our experiments by removing nothing. We report the result of Mean Reciprocal Ranks(MRR) and Hits@1/3/10 (the proportion of correct test cases that are ranked within the top 1/3/10) of our approach and baselines following raw schema. Baselines. We compare our model with two categories of models: static KGC models and TKGC models. We select DistMult(Yang et al., 2015), ComplEx(Trouillon et al., 2016), RGCN(Schlichtkrull et al., 2018), ConvE(Dettmers et al., 2018), ConvTransE(Shang et al., 2019), RotatE(Sun et al., 2019) as static models. We select HyTE(Dasgupta et al., 2018), TTransE(Jiang et al., 2016), TA-DistMult(García-Durán et al., 2018), RGCRN(Seo et al., 2018), CyGNet(Zhu et al., 2021), RE-NET(Jin et al., 2020), RE-GCN(Li et al., 2021) as baselines of TKGC. Hyperparameters. We use bert-base-cased1as our pre-trained model. Bert-base-cased has been pre-trained on a large corpus of English data in a self-supervised fashion. Bert-base-cased has a parameter size of 110M with 12 layers and 16 attention heads, and its hidden embedding size is 1https://huggingface.co/bert-base-cased | Dataset | E | R | #Granularity | #Train | #Valid | #Test | |------------|-------|-----|----------------|----------|----------|---------| | ICEWS05-15 | 10094 | 251 | 24 (hours) | 368868 | 46302 | 46159 | | ICEWS14 | 6869 | 230 | 24 (hours) | 74845 | 8514 | 7371 | | ICEWS18 | 23033 | 256 | 24 (hours) | 373018 | 45995 | 49545 | Table 1: Statistics of the datasets we use. | dataset | seq_len | min_sample | max_sample | |------------|-----------|--------------|--------------| | ICEWS05-15 | 256 | 2 | 16 | | ICEWS14 | 256 | 2 | 12 | | ICEWS18 | 256 | 2 | 16 | Table 2: Parameters for datasets. 768. Without loss of generality, we also list other pre-trained models in A.3. The input sequence length, min sampling number, and max sampling number of each dataset are listed in Table 2. When training, we mask 30% tokens randomly, and we choose AdamW as our optimizer. The learning rate is set as 5e-5. We make a detailed analysis of the parameters in A.2. ## 5.2 Results We report the results of PPT and baselines in Table 3. It can be observed that PPT outperforms all static models much better. Compared with ConvTransE, which has the best results among static models, we achieve 28.3%, 21.97%, and 14.69% improvement with MRR metric in the three datasets, respectively. We believe temporal information matters in TKGC tasks, while static models do not utilize temporal information. As can be seen that PPT performs better than HyTE, TTransE, and TA-DistMult. 
These models are under the interpolation setting. For instance, we achieve 41.22%, 46.53%, and 62.18% improvements with MRR metric in the three datasets compared to TA-DistMult. We believe that HyTE and TA-DistMult only focus on independent graphs and do not establish the temporal correlation between graphs. TTransE embeds timestamps into the scoring function while not taking full advantage of them. With MRR, Hits@1, and Hits@3 metrics on ICEWS05-15 and ICEWS14, PPT achieves the best results compared to other TKGC models. For instance, PPT improves 6.5% over the second-best result with Hit@1 metric. On ICEWS18, PPT has a slight gap with the best model RE-GCN. We believe this is because ICEWS18 has more entities than other datasets. GNN-based models using the message-passing mechanism have better learning ability for such graphs with many nodes. Furthermore, RE-GCN adds additional edges to assist learning for the static parts of the graph. Besides the masking strategy for our model, we also attempt other forms of application for pretrained language models, which are illustrated in A.3. ## 5.3 Ablation Study To investigate the contribution of time-prompts in our model, we conduct ablation studies for our model by testing all datasets under the same parameter settings of different variants. The experiment results are shown in Table 4. PPT w/o prompts denotes PPT without timeprompts. In this variant, we set all timestamps as 0. To ensure that the sequence length does not affect the experiments, we replaced all the timeprompts with on the same day. *PPT w/o prompts* gets worse results than raw PPT with all metrics on three datasets except with Hits@10 on ICEWS14. ICEWS14 has a smaller number of entities and data size than the other two datasets, so it is possible to achieve better results in some metrics after removing the timestamps. PPT rand prompts denotes PPT with random timestamps set. We replace raw timestamps in quadruples with other timestamps randomly. Random timestamps should not affect the results if our model does not learn the timestamp information correctly. As shown in Table 4, the raw model shows better results than this variant on all metrics. These experiments demonstrate that applying time-prompts in our model can benefit the learning of temporal information between events. 
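For reference, the raw-setting MRR and Hits@k figures reported in the result tables that follow can be computed from the rank assigned to the ground-truth entity, as in the generic sketch below; this is a minimal illustration of the metric definitions in Section 5.1, not the authors' evaluation script.

```python
# Generic raw-setting metrics from gold-entity ranks (no filtering of other
# valid facts), following the metric definitions in Section 5.1.
from typing import Iterable

def mrr_and_hits(ranks: Iterable[int], ks=(1, 3, 10)):
    ranks = list(ranks)
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    hits = {k: sum(r <= k for r in ranks) / n for k in ks}
    return mrr, hits

# Example: ranks of the ground-truth entity for three test quadruples.
print(mrr_and_hits([1, 4, 12]))
# -> (0.444..., {1: 0.333..., 3: 0.333..., 10: 0.666...})
```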
ICEWS05-15 ICEWS14 ICEWS18 Method MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 DistMult 19.91 5.63 27.22 47.33 20.32 6.13 27.59 46.61 13.86 5.61 15.22 31.26 ComplEx 20.26 6.66 26.43 47.31 22.61 9.88 28.93 47.57 15.45 8.04 17.19 30.73 R-GCN 27.13 18.83 30.41 43.16 28.03 19.42 31.95 44.83 15.05 8.13 16.49 29.00 ConvE 31.40 21.56 35.70 50.96 30.30 21.30 34.42 47.89 22.81 13.63 25.83 41.43 ConvTransE 30.28 20.79 33.80 49.95 31.50 22.46 34.98 50.03 23.22 14.26 26.13 41.34 RotatE 19.01 10.42 21.35 36.92 25.71 16.41 29.01 45.16 14.53 6.47 15.78 31.86 HyTE 16.05 6.53 20.20 34.72 16.78 2.13 24.84 43.94 7.41 3.10 7.33 16.01 TTransE 16.53 5.51 20.77 39.26 12.86 3.14 15.72 33.65 8.44 1.85 8.95 22.38 TA-DistMult 27.51 17.57 31.46 47.32 26.22 16.83 29.72 45.23 16.42 8.60 18.13 32.51 RGCRN 35.93 26.23 40.02 54.63 33.31 24.08 36.55 51.54 23.46 14.24 26.62 41.96 CyGNet 35.46 25.44 40.20 54.47 35.45 26.05 39.91 53.20 26.46 16.62 30.57 45.58 RE-NET 36.86 26.24 41.85 57.60 35.77 25.99 40.10 54.87 26.17 16.43 29.89 44.37 RE-GCN 38.27 27.43 43.06 **59.93** 37.78 27.17 42.50 **58.84 27.51 17.82 31.17 46.55** PPT **38.85 28.57 43.35** 58.63 **38.42 28.94 42.5** 57.01 26.63 16.94 30.64 45.43 ![7_image_0.png](7_image_0.png) | Method | ICEWS05-15 | ICEWS14 | ICEWS18 | | | | | | | | | | |------------------|--------------|-----------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | PPT | 38.85 | 28.57 | 43.35 | 58.63 | 38.42 | 28.94 | 42.5 | 57.01 | 26.63 | 16.94 | 30.64 | 45.43 | | PPT w/o prompts | 38.44 | 28.09 | 43.09 | 58.46 | 38.24 | 28.52 | 42.4 | 57.31 | 25.44 | 15.68 | 29.26 | 44.88 | | PPT rand prompts | 37.43 | 27.05 | 42.16 | 57.49 | 36.84 | 26.89 | 41.41 | 55.73 | 24.22 | 14.31 | 28.09 | 44.32 | ## 5.4 Analysis 5.4.1 Attention Analysis To visually show that our model can learn from temporal knowledge graphs, as shown in Figure 4, we visualize attention patterns of PPT. We need to complete the missing tail entity in a test quadruple (263, 104, ?, 7536). As mentioned, we sample data from earlier than timestamp 7536 to form the input sequence and obtain the attention weights from the pre-trained model. In this example, the ground truth is **[ENT-262]**. We observe that in our model, the prediction of **[MASK]** is made by considering all the previous sampling samples together. PPT notes that the same relation physical assault to occurred a day earlier and captures the temporal information from token the, **next**, and day. Therefore, PPT can make correct predictions based on historical events and chronological relationships. ## 5.4.2 Time-Sensitive Relation Analysis Using ICEWS05-15 as an example, we analyze the time-sensitive relations present in the dataset. For different relations between the same pairs of entities, there is a clear order of occurrence among some of them. For example, the relation Obstruct passage, block is always followed by ones related to assistance, such as Appeal for aid, *Appeal for* humanitarian aid, and *Provide humanitarian aid*. Similarly, the relation *Acknowledge or claim responsibility* is always followed by those related to negotiation, such as *Express intent to cooperate* militarily, *Meet at a 'third' location*, and *Demand* material cooperation. We provide more examples in A.5. To verify the superiority of PPT in handling time-sensitive relations, a new test dataset named ICEWS05-filter is constructed from ICEWS05-15. 
Specifically, we select relations that have a clear chronological order within a predefined time window, resulting in a total of 139 relations. Only the quadruples containing these selected relations are retained to construct the new dataset. As demonstrated in Table 5, PPT achieves better performance when evaluated on the constructed test dataset, indicating its advantage in handling time-sensitive relations. | Dataset | MRR | Hits@1 | Hits@3 | Hits@10 | |----------------|-------|----------|----------|-----------| | ICEWS05-15 | 38.85 | 28.57 | 43.35 | 58.63 | | ICEWS05-filter | 39.4 | 29.02 | 43.91 | 59.31 | Table 5: Results of PPT on ICEWS05-15 and ICEWS05filter. ## 6 Conclusions This paper proposes a novel temporal knowledge graph completion model named pre-trained language model with prompts for TKGC (PPT). We use prompts to convert entities, relations, and timestamps into pre-trained model inputs and turn TKGC problem into a masked token prediction problem. This way, we can extract temporal information from timestamps accurately and sufficiently utilize implied information in relations. Our proposed method achieves promising results compared to other temporal graph representation learning methods on three benchmark TKG datasets. For future work, we plan to improve the sampling method in temporal knowledge graphs to get more timespecific inputs. We are also interested in combining GNNs and pre-trained language models in temporal knowledge graph representation learning. ## Limitations This paper proposes a pre-trained language model with prompts for temporal knowledge graph completion. However, there are some limitations in our method: 1) Our prompts in the temporal knowledge graphs, especially the time-prompts, are built manually. It needs to be reconstructed manually for different knowledge graphs. We are exploring a way to build prompts in temporal knowledge graphs automatically. 2) Our model uses a random sampling method, which suffers from the problem of few high-quality training samples and high sample noise. For future work, a more effective way to sample is worth exploring. ## Acknowledgements We would like to thank all the anonymous reviewers for their valuable and insightful comments. This work was supported by National Key Research and Development Program of China (No.2021ZD0113304), General Program of Natural Science Foundation of China (NSFC) (Grant No.62072346), Key R&D Project of Hubei Province (Grant NO.2021BBA099, NO.2021BBA029) and Application Foundation Frontier Project of Wuhan (Grant NO.2020010601012168). Our work was founded by Joint&Laboratory on Credit Technology. ## Ethics Statement All steps and data described in our paper follow the ACL Ethics Policy2. ## References Antoine Bordes, Nicolas Usunier, Alberto GarcíaDurán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In *NIPS*. Elizabeth Boschee, Jennifer Lautenschlager, Sean O'Brien, Steve Shellman, James Starz, and Michael Ward. 2015. ICEWS Coded Event Data. Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha P. Talukdar. 2018. Hyte: Hyperplane-based temporally aware knowledge graph embedding. In EMNLP. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In *AAAI*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*. 
Alberto García-Durán, Sebastijan Dumancic, and Mathias Niepert. 2018. Learning sequence encoders for temporal knowledge graph completion. In *EMNLP*. William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In *NIPS*. Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. PTR: prompt tuning with rules for text classification. *CoRR*, abs/2105.11259. 2https://www.aclweb.org/portal/content/ acl-code-ethics Yongquan He, Peng Zhang, Luchen Liu, Qi Liang, Wenyuan Zhang, and Chuang Zhang. 2021. HIP network: Historical information passing network for extrapolation reasoning on temporal knowledge graph. In *IJCAI*. Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Baobao Chang, Sujian Li, and Zhifang Sui. 2016. Towards time-aware knowledge graph completion. In *COLING*. Woojeong Jin, Meng Qu, Xisen Jin, and Xiang Ren. 2020. Recurrent event network: Autoregressive structure inferenceover temporal knowledge graphs. In EMNLP. Bosung Kim, Taesuk Hong, Youngjoong Ko, and Jungyun Seo. 2020. Multi-task learning for knowledge graph completion with pre-trained language models. In *COLING*. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In *ICLR*. Timothée Lacroix, Guillaume Obozinski, and Nicolas Usunier. 2020. Tensor decompositions for temporal knowledge base completion. In *ICLR*. Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, Huawei Shen, Yuanzhuo Wang, and Xueqi Cheng. 2021. Temporal knowledge graph reasoning based on evolutional representation learning. In SIGIR. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT understands, too. *CoRR*. Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, and Jie Zhou. 2022. Do pretrained models benefit knowledge graph completion? A reliable evaluation and a reasonable approach. In ACL(Findings). Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2012. Factorizing YAGO: scalable machine learning for linked data. In WWW. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In *EMNLP-IJCNLP*. Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In *ESWC*. Youngjoo Seo, Michaël Defferrard, Pierre Vandergheynst, and Xavier Bresson. 2018. Structured sequence modeling with graph convolutional recurrent networks. In *ICONIP*. Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. 2019. End-to-end structureaware convolutional networks for knowledge base completion. In *AAAI*. Pengpeng Shao, Dawei Zhang, Guohua Yang, Jianhua Tao, Feihu Che, and Tong Liu. 2022. Tucker decomposition-based temporal knowledge graph completion. *Knowl. Based Syst.* Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In *EMNLP*. Haohai Sun, Jialun Zhong, Yunpu Ma, Zhen Han, and Kun He. 2021. Timetraveler: Reinforcement learning for temporal knowledge graph forecasting. In EMNLP. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In *ICLR*. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. 
Complex embeddings for simple link prediction. In *ICML*. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In *ICLR*. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In *AAAI*. Jiapeng Wu, Meng Cao, Jackie Chi Kit Cheung, and William L. Hamilton. 2020. Temp: Temporal message passing for temporal knowledge graph completion. In *EMNLP*. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In ICLR. Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. KG-BERT: BERT for knowledge graph completion. CoRR, abs/1909.03193. Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [MASK]: learning vs. learning to recall. In *NAACL-HLT*. Cunchao Zhu, Muhao Chen, Changjun Fan, Guangquan Cheng, and Yan Zhang. 2021. Learning from history: Modeling temporal knowledge graphs with sequential copy-generation networks. In *AAAI*. ## A Appendix A.1 Sampling Analysis We design two sampling strategies, one is the uniform sampling strategy, and the other is the frequency-based sampling strategy. The uniform sampling strategy assigns equal sampling weights to each entity. The frequency-based sampling strategy assigns different weights to each entity based on the different frequencies of each entity appearing in the dataset, where entities with higher occurrences have a higher probability of being sampled. As shown in Table 6, the frequency-based sampling strategy has better results on ICEWS14. We believe this is because if an entity appears frequently, it is more likely to have relations with other entities and should get more attention. | Strategy | MRR | Hits@1 | Hits@3 | Hits@10 | |-----------------|-------|----------|----------|-----------| | uniform | 34.87 | 25.37 | 38.77 | 53.33 | | frequency-based | 38.42 | 28.94 | 42.5 | 57.01 | Table 6: Results of different sampling strategies of PPT on ICEWS14. ## A.2 Hyperparameter Analysis To test the effect of different sequence lengths and the maximum number of samples on the effect of the model, we analyze these hyperparameters on ICEWS14. Due to GPU performance limitations, we do not perform experiments on longer sequences. As shown in Table 7, we get the best results with setting seq_len = 256, max_*sample* = 12. We believe that the effect of sequence length is small while the number of samples matters. A larger number of samples can provide more semantic contextual information for the prediction but overly lengthy sampling can cause a decline in effectiveness by not focusing on the most effective information in learning. seq_len max_sample MRR Hits@1 Hits@3 Hits@10 128 2 35.33 25.71 39.56 53.83 128 4 37.21 27.59 41.08 56.3 128 8 37.67 28.16 41.73 56.22 256 8 37.67 27.78 42.31 56.72 256 12 **38.42 28.94 42.5 57.01** 256 16 37.72 27.74 42.1 56.91 Table 7: Results of different hyperparameters of PPT on ICEWS14. The best results are boldfaced and the second best ones are underlined. ## A.3 Variants In addition to the model we propose in the paper, we also try some variants, all experiments are done with seq_len = 256, max_*sample* = 12 on ICEWS14. 
As demonstrated in Table 8, PPT_CLS does not use the mask training strategy but takes **[CLS]** to do classification with a fully connected layer as the decoder; PPT_LSTM uses a bi-directional LSTM to encode all tokens, maxpool the out embeddings, and use a fully-connected layer as a decoder. These models do not get satisfactory results compared to our raw model. PPT_CLS only uses sequence embedding to predict the result is not enough because the sequence embedding is suitable for classification task which needs to be focused on the whole input sequence. However, in our task, we need to consider the impact of each token. For PPT_LSTM, we believe that the representation learned by the pre-trained language model is high-level semantic knowledge, especially when additional tokens (entities and relations) are added. Simple neural network models are unable to capture this high-level semantic knowledge and instead cause a decrease in effectiveness. | Variants | MRR | Hits@1 | Hits@3 | Hits@10 | |------------|-------|----------|----------|-----------| | PPT_CLS | 32.81 | 23.62 | 36.81 | 51.12 | | PPT_LSTM | 32.6 | 23.61 | 36.54 | 50.06 | | PPT | 38.42 | 28.94 | 42.5 | 57.01 | Table 8: Variants of PPT. ## A.4 Different Plms Besides *bert-base-cased*, we also attempt other pre-trained language models: bert-base-uncased3 and bert-large-cased4. As shown in Table 9. All experiments are done with setting seq_len = 128, min_*sample* = 2, max_*sample* = 8 on ICEWS14. We find that the experimental results with different PLMs are similar, indicating that our approach does not rely on a specific pre-trained language model and has the ability to generalize. Table 9: Experiments on different PLMs. | PLMs | MRR | Hits@1 | Hits@3 | Hits@10 | |-------------------|-------|----------|----------|-----------| | bert-base-cased | 37.67 | 28.16 | 41.73 | 56.22 | | bert-base-uncased | 37.75 | 28.06 | 41.74 | 56.84 | | bert-large-cased | 37.36 | 27.39 | 41.39 | 57.59 | | Pre-relation | Post-relation | |---------------------------------------|-----------------------------------------------------| | Demonstrate for policy change | fight with small arms and light weapons | | Demonstrate for policy change | Make optimistic comment | | Demonstrate for policy change | Conduct suicide, car, or other non-military bombing | | Obstruct passage, block | Appeal for aid | | Obstruct passage, block | Appeal for humanitarian aid | | Obstruct passage, block | Provide humanitarian aid | | Acknowledge or claim responsibility | Express intent to cooperate militarily | | Acknowledge or claim responsibility | Meet at a 'third' location | | Acknowledge or claim responsibility | Demand material cooperation | | Receive inspectors | Expel or deport individuals | | Receive inspectors | Express intent to provide material aid | | Receive inspectors | Return, release person(s) | | Demand release of persons or property | Use unconventional violence | | Demand release of persons or property | Demonstrate or rally | | Demand release of persons or property | Appeal for military aid | | Reject judicial cooperation | Appeal to others to settle dispute | | Reject judicial cooperation | Accuse of espionage, treason | | Reject judicial cooperation | Retreat or surrender militarily | Table 10: Examples of pre-relations and post-relations ## A.5 Pre-Relations And Post-Relations For one pair of entities, if relation *rel-A* always occurs before relation rel-B, *rel-A* is called a prerelation and *rel-B* is called a post-relation. 
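One way such pre-/post-relation pairs could be mined from the training quadruples is sketched below. The paper does not state its exact selection criteria (e.g., the minimum support or the time window used to build ICEWS05-filter), so the thresholds here are placeholders rather than the procedure actually used.

```python
# Illustrative mining of pre-/post-relation pairs from quadruples (s, r, o, t).
# Thresholds are placeholders; the paper does not state its exact criteria.
from collections import defaultdict
from itertools import combinations

def mine_relation_order(quads, min_support=5):
    """Count, for each ordered relation pair (rA, rB), how often rA precedes rB
    between the same (subject, object) entity pair."""
    by_pair = defaultdict(list)                 # (s, o) -> [(t, r), ...]
    for s, r, o, t in quads:
        by_pair[(s, o)].append((t, r))

    precedes = defaultdict(int)
    for events in by_pair.values():
        events.sort()                           # chronological order
        for (t1, r1), (t2, r2) in combinations(events, 2):
            if r1 != r2 and t1 < t2:
                precedes[(r1, r2)] += 1

    # Keep pairs where one direction clearly dominates the other.
    pairs = []
    for (ra, rb), n_ab in precedes.items():
        n_ba = precedes.get((rb, ra), 0)
        if n_ab >= min_support and n_ab > 2 * n_ba:
            pairs.append((ra, rb, n_ab, n_ba))
    return pairs
```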
Table 10 shows some of these relations. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? See Limitations. ✗ A2. Did you discuss any potential risks of your work? Our experiments are reproducible. ✓ A3. Do the abstract and introduction summarize the paper's main claims? See Abstract and Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** See 5.1 Experimental Setup ✓ B1. Did you cite the creators of artifacts you used? See 5.1 Experimental Setup ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? See 5.1 Experimental Setup ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? The tools we use are consensuses in the field like many other papers do. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The data we use are consensuses in the field like many other papers do. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? See 5.1 Experimental Setup ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. See Table 1 ## C ✓ **Did You Run Computational Experiments?** See 5.1 Experimental Setup ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? See 5.1 Experimental Setup The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? See 5.1 Experimental Setup and Appendix A.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? See 5.2 Results ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? See 5.1 Experimental Setup D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
ju-etal-2023-continuous
Is Continuous Prompt a Combination of Discrete Prompts? Towards a Novel View for Interpreting Continuous Prompts
https://aclanthology.org/2023.findings-acl.494
The broad adoption of continuous prompts has brought state-of-the-art results on a diverse array of downstream natural language processing (NLP) tasks. Nonetheless, little attention has been paid to the interpretability and transferability of continuous prompts. Faced with the challenges, we investigate the feasibility of interpreting continuous prompts as the weighting of discrete prompts by jointly optimizing prompt fidelity and downstream fidelity. Our experiments show that: (1) one can always find a combination of discrete prompts as the replacement of continuous prompts that performs well on downstream tasks; (2) our interpretable framework faithfully reflects the reasoning process of source prompts; (3) our interpretations provide effective readability and plausibility, which is helpful to understand the decision-making of continuous prompts and discover potential shortcuts. Moreover, through the bridge constructed between continuous prompts and discrete prompts using our interpretations, it is promising to implement the cross-model transfer of continuous prompts without extra training signals. We hope this work will lead to a novel perspective on the interpretations of continuous prompts.
# Is Continuous Prompt A Combination Of Discrete Prompts? Towards A Novel View For Interpreting Continuous Prompts Tianjie Ju, Yubin Zheng, Hanyi Wang, Haodong Zhao, Gongshen Liu∗ School of Electronic Information and Electrical Engineering Shanghai Jiao Tong University {jometeorie, zybhk21, why_820, zhaohaodong, lgshen}@sjtu.edu.cn ## Abstract The broad adoption of continuous prompts has brought state-of-the-art results on a diverse array of downstream natural language processing (NLP) tasks. Nonetheless, little attention has been paid to the interpretability and transferability of continuous prompts. Faced with the challenges, we investigate the feasibility of interpreting continuous prompts as the weighting of discrete prompts by jointly optimizing prompt fidelity and downstream fidelity. Our experiments show that: (1) one can always find a combination of discrete prompts as the replacement of continuous prompts that performs well on downstream tasks; (2) our interpretable framework faithfully reflects the reasoning process of source prompts; (3) our interpretations provide effective readability and plausibility, which is helpful to understand the decisionmaking of continuous prompts and discover potential shortcuts. Moreover, through the bridge constructed between continuous prompts and discrete prompts using our interpretations, it is promising to implement the cross-model transfer of continuous prompts without extra training signals. We hope this work will lead to a novel perspective on the interpretations of continuous prompts. ## 1 Introduction Continuous prompts for pre-trained language models (PLMs) have shown remarkable performance on almost every NLP field (Li and Liang, 2021; Lester et al., 2021; Liu et al., 2021b). However, trained continuous prompts tend to improve performance at the sacrifice of interpretability and transferability relative to discrete prompts (Liu et al., 2021a), which causes mistrust in people and makes crossmodel transfer challenging. Recent advancements spiked interest in understanding how prompts work and found the counterintuitive mechanism behind. (Webson and Pavlick, ∗*Corresponding author. ![0_image_0.png](0_image_0.png) 2022) conducted numerous experiments on various discrete prompts, finding the improvement in downstream tasks does not originate from the model understanding task instructions in a manner similar to how humans use them. (Kavumba et al., 2022) presented the first investigation of the exploitation of superficial cues by prompt-based models, finding the presence of superficial cues which prompt-based models exploit. Continuous prompts, on the other hand, are more complicated and incomprehensible. Recent attempts for interpreting continuous prompts came from (Khashabi et al., 2022), which introduced the *Prompt Waywardness Hypothesis* to prove the infeasibility of interpreting a learned continuous prompt with a single discrete prompt. To the best of our knowledge, no general post-hoc interpretable framework is proposed to translate continuous prompts into a comprehensible form. Towards filling this research gap, we propose the Combination Hypothesis, which argues the feasibility of utilizing combinations of discrete prompts as faithful interpretations for continuous prompts (§3.2). In other words, we treat the continuous prompt as an embedding lookup table with the one-hot restriction removed. 
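To make the "lookup table with the one-hot restriction removed" intuition concrete, here is a minimal PyTorch sketch (our own illustration, not the authors' code): a discrete prompt selects one row of the embedding matrix with a one-hot vector, while a continuous prompt corresponds to an arbitrary weighting r over all rows.

```python
import torch

v, d = 30522, 768                  # vocabulary size and hidden size (BERT-base)
E = torch.randn(v, d)              # embedding matrix of the PLM (rows = discrete prompts)

# Discrete prompt: a one-hot weighting picks exactly one token embedding.
one_hot = torch.zeros(v)
one_hot[42] = 1.0                  # token id 42, arbitrary example
discrete_prompt = one_hot @ E      # equals E[42]

# Continuous prompt: the one-hot restriction is dropped, so the prompt is
# an arbitrary (learned) combination of all token embeddings.
r = torch.rand(v)                  # soft weights over the vocabulary
continuous_prompt = r @ E          # shape (d,), lives in the span of E's rows
```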
For instance, a well-trained continuous prompt for sentiment classification should contain task-related tokens such as "drama" or auxiliary tokens such as "seem" and "look" to stimulate the PLM toward the desired outputs (Fig. 1). To find an effective interpretation, a joint optimization framework is proposed to ensure both prompt fidelity and downstream fidelity (§3.3). Comprehensive experiments are conducted to support our hypothesis and framework. We first directly optimize the parameters of the combination of discrete prompts as a replacement for continuous prompts. Results show that the combination of discrete prompts achieves competitive performance in most scenarios (especially in few-shot learning), which verifies the feasibility of the *Combination Hypothesis* in practice (§5). As a significant property of interpretations, faithfulness is comprehensively verified to check how accurately the interpretation reflects the true reasoning process of the model (Jacovi and Goldberg, 2020). We first verify the prompt fidelity and downstream fidelity of the interpretations, using both discrete prompts and continuous prompts as the content to be interpreted (§6.1); we then verify that the tokens selected from the interpretations better restore the performance of the source prompts on downstream tasks (§6.2). Beyond faithfulness, a high-quality interpretation should also exhibit plausibility, which refers to *how convincing the interpretation is to humans* (Jacovi and Goldberg, 2020). Through a visual comparison with the tokens nearest to the continuous prompts (§7.1), our interpretations are shown to be more convincing and allow us to identify several "shortcuts" in the model's decision-making (§7.2). Furthermore, inspired by the readability and transferability of discrete prompts, we investigate the feasibility of cross-model transfer for continuous prompts using our interpretations. We regard this as a breakthrough, since no previous work achieves cross-model transfer of continuous prompts without any training signals on the target PLM. Experiments show that even continuous prompts trained on a PLM with a simple structure in 100-shot settings can be transferred to large PLMs with our method and achieve competitive performance (§8).

## 2 Related Work

Prompt Engineering. Prompt engineering, as a crucial part of prompt learning, is the process of creating a prompt function that performs effectively on the downstream task (Liu et al., 2021a). It can generally be divided into discrete prompts and continuous prompts. Discrete prompts usually search for templates, i.e., natural language tokens in discrete spaces, as prompt functions. One line of work focuses on manually designed prompts (Petroni et al., 2019; Brown et al., 2020; Scao and Rush, 2021). These methods rely heavily on prior knowledge, and even experts have difficulty finding optimal templates (Jiang et al., 2020). Therefore, recent explorations have devoted much attention to automatically searching for templates in discrete spaces (Jiang et al., 2020; Shin et al., 2020; Gao et al., 2021; Haviv et al., 2021). Continuous prompts, on the other hand, relax the constraint that templates are natural language tokens (Li and Liang, 2021; Liu et al., 2021b; Lester et al., 2021; Zhong et al., 2021; Qin and Eisner, 2021; Zhang et al., 2022). These works effectively improve performance at the expense of interpretability. Khashabi et al.
(2022) demonstrated the disconnection between continuous prompts and discrete prompts. In this paper, we investigate the feasibility of using discrete prompts to interpret continuous prompts from a novel view.

Cross-model Transfer. Benefiting from the readability of discrete prompts, we can easily transfer manually designed prompts to any PLM (Perez et al., 2021). Nonetheless, since the embedding dimensions and semantic spaces of different PLMs are inconsistent, cross-model transfer of continuous prompts is tricky. Su et al. (2022) made the first attempt with *prompt projectors*, which are trained on another task to project continuous prompts into the semantic space of target PLMs. As a post-hoc interpretable framework, this paper investigates the feasibility of cross-model transfer without the help of additional task data.

## 3 Prompt Decoupling

## 3.1 Setup and Formulation

Given a sequence of n continuous prompts P = {p1, p2, ..., pn} trained on the dataset D = {x, y}, we analyze the feasibility of interpreting the continuous prompts as a combination of discrete prompts R = {r1, r2, ..., rn}, where pi ∈ R^d is a d-dimensional vector and ri ∈ R^v is a v-dimensional vector that decouples pi into v discrete prompts (Fig. 1). In this paper, we are interested in generating an interpretation R with both faithfulness and plausibility (Jacovi and Goldberg, 2020). In addition, as a side effect of the interpretation, we also expect to utilize the results for cross-model transfer of continuous prompts.

## 3.2 The Combination Hypothesis

Continuous prompts are essentially trained on a large corpus of natural language. These incomprehensible prompts occupy the place of discrete prompts that are composed of natural language tokens, but better motivate the PLM to output desired results. Consequently, they are intuitively more likely to be associated with natural language tokens than to be isolated from them. Considering the infeasibility of a one-to-one mapping (Khashabi et al., 2022), we propose the idea that a continuous prompt may be a combination of multiple discrete prompts. It is known that the essence of a discrete prompt e(x) is a function of the token x, parameterized by a one-hot embedding lookup table (Li et al., 2020a). If the one-hot restriction is removed, the continuous prompt can be seen as the output of a fully connected layer with all discrete prompts as input. We formalize the idea as the following hypothesis.

Hypothesis 1 (Combination Hypothesis). *For any continuous prompt p ∈ R^d and a discrete prompt matrix E ∈ R^{v×d} of a large pre-trained model, there exists a vector r ∈ R^v such that dist(r⊤E, p⊤) ≤ ∆, where dist(·) is the Euclidean distance function and ∆ is the shortest distance to p among all discrete prompts.*

In fact, it can almost be proved that the linear equation r⊤E = p⊤ has infinitely many solutions. For general PLMs, it always holds that v ≫ d (e.g., v = 30522, d = 768 in the BERTbase model (Devlin et al., 2019)). Thus, in most cases, R(E⊤) = R(E⊤, p) < v, where R denotes the rank of the matrix. Nonetheless, although v ≫ d, it is still not guaranteed that these discrete prompts constitute a set of bases of the vector space, which implies that an exact solution may not exist. Thus, we relax the restriction in our hypothesis, which only proves the existence of a more faithful interpretation than the nearest discrete prompt.

![2_image_0.png](2_image_0.png)

We consider the following two cases. 1.
E **constitutes a set of bases in the vector** space. In this case, all vectors in the vector space can be represented by this set of bases. Therefore, there exists a solution r such that $$\operatorname{dist}(\mathbf{r}^{\top}\mathbf{E},\mathbf{p}^{\top})=0\leq\Delta.$$ $\eqref{eq:walpha}$ ⊤) = 0 ≤ ∆. (1) 2. E **is not sufficient to constitute a set of bases** in the vector space. Let e0 be the nearest discrete prompt to p, V be the linear subspace composed of E. If p ∈ V, then there exists a linear combination of discrete prompts that satisfies Eq.1. If p ̸∈ V (Fig. 2), we make a projection of p onto V, denoted p⊥, then $$\operatorname{dist}(\mathbf{p}_{\perp}^{\mathsf{T}},\mathbf{p}^{\mathsf{T}})\leq\operatorname{dist}(\mathbf{e}_{0}^{\mathsf{T}},\mathbf{p}^{\mathsf{T}})=\Delta.$$ $$\left(2\right)$$ ⊤) = ∆. (2) Since p⊥ is in the linear subspace V, it can be represented as a linear combination of discrete prompts. Therefore, in this case, the hypothesis also holds, which implies the existence of a more faithful interpretation than the discrete prompt. Empirically, simply summing rather than concatenating prompts does not seem to make sense. Suppose we have two input vectors and their concatenation, denoted as x1, x2 ∈ R dand xconcat = [x⊤ 1 ⊕ x⊤ 2 ]⊤ ∈ R 2d. Then we apply linear embedding projection e to x*concat*: $$e(\mathbf{x}_{\mathrm{concat}})=\mathbf{W}\mathbf{x}_{\mathrm{concat}}$$ * [4]**V** **X**concat $$=[\textbf{W}_{1}\oplus\textbf{W}_{2}]\cdot[\textbf{x}_{1}^{\top}\oplus\textbf{x}_{2}^{\top}]^{\top}$$ (3) $$=\textbf{W}_{1}\textbf{x}_{1}+\textbf{W}_{2}\textbf{x}_{2}$$ $$=e(\textbf{x}_{1})+e(\textbf{x}_{2}),$$ where W1 ∈ R d×d,W2 ∈ R d×d,W ∈ R d×2dare parameters of the linear projection. This indicates that summing is somehow equivalent to concatenating, which also supports the rationality of decoupling continuous prompts into discrete prompts. ## 3.3 Finding Interpretations The hypothesis indicates the existence of R, but it does not consider how to find a solution that better represents the continuous prompt. In this section, we first introduce an optimization method to find the interpretations that both satisfy the hypothesis and ensure downstream fidelity, then we reduce the vocabulary size by traversing datasets and thus speed up the optimization. Our post-hoc interpretable framework is similar to probes, which focus on simple linguistic properties of interest (Conneau et al., 2018). Therefore, following the view of Hewitt and Liang (2019), a simple model with only one linear layer is designed in our paper for interpreting continuous prompts. Since negative results can be confusing or controversial, the softplus activation function (Dugas et al., 2000) is applied in the output layer. To satisfy the *Combination Hypothesis*, we minimize the distance between continuous prompts and the combination of discrete prompts: $$\ell_{1}(\mathbf{r};\mathbf{E},\mathbf{p})=\operatorname{dist}(\mathbf{r}^{\top}\mathbf{E},\mathbf{p}^{\top}).$$ ⊤). (4) It is not sufficient to find the most reasonable solution with the loss above. As a consequence, we introduce the following loss function to ensure downstream fidelity: $$\ell_{2}({\bf r};{\bf E},{\bf p},{\cal D})=\mathbb{E}_{x\sim{\cal D}}\mathbb{E}_{a\sim{\bf r},{\bf e}\sim{\bf E}}\qquad\qquad({\bf5})$$ $$[a D_{\rm KL}(M({\bf p}\oplus x),M({\bf e}\oplus x))],$$ where DKL(·) is the Kullback Leibler distance function, M(·) is the output of the PLM. 
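As a rough illustration of how the two objectives above could be implemented, the sketch below computes the prompt-fidelity loss ℓ1 of Eq. 4 and a Monte-Carlo estimate of the downstream-fidelity loss ℓ2 of Eq. 5. The helper `model_logits`, standing in for M(·), and the sampling scheme are our own assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def prompt_fidelity_loss(r, E, p):
    """l1 of Eq. 4: Euclidean distance between the combination r^T E
    and the continuous prompt p. r is the (non-negative) weight vector."""
    return torch.norm(r @ E - p, p=2)

def downstream_fidelity_loss(r, E, p, batch, model_logits, n_samples=8):
    """Monte-Carlo estimate of l2 of Eq. 5: discrete prompts drawn according to
    their weights should yield output distributions close to that of the
    continuous prompt. `model_logits(prompt, x)` stands in for M(prompt ⊕ x)
    and is assumed to return logits over the target tokens."""
    loss = 0.0
    probs = r / r.sum()                                    # sampling distribution over tokens
    for x in batch:
        target = F.log_softmax(model_logits(p, x), dim=-1)           # M(p ⊕ x)
        ids = torch.multinomial(probs, n_samples, replacement=True)
        for e_id in ids:
            out = F.log_softmax(model_logits(E[e_id], x), dim=-1)    # M(e ⊕ x)
            # the KL term is weighted by the coefficient a = r_e of that token
            loss = loss + r[e_id] * F.kl_div(out, target,
                                             log_target=True, reduction="sum")
    return loss / (len(batch) * n_samples)
```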
This loss function helps to find a more meaningful combination, i.e., discrete prompts with larger values should have outputs on downstream tasks that are as consistent as possible with the continuous prompt. We learn the interpretation r by jointly minimizing the loss ℓ1(·) for the *Combination Hypothesis* (Eq.4) and the loss ℓ2(·) for downstream fidelity (Eq.5): $$\ell^{\prime}\left(\mathbf{r};\mathbf{E},\mathbf{p},\mathcal{D}\right)=\ell_{1}(\mathbf{r};\mathbf{E},\mathbf{p})+\gamma\ell_{2}(\mathbf{r};\mathbf{E},\mathbf{p},\mathcal{D}),\tag{6}$$ $$\tilde{\mathbf{r}}=\operatorname*{arg\,min}_{\mathbf{r}\in\mathbb{R}^{v}}\ell^{\prime}\left(\mathbf{r};\mathbf{E},\mathbf{p},\mathcal{D}\right),\tag{7}$$ $$\mathbf{\Sigma}),$$ $$\mathbf{\Sigma}(6)$$ where γ is a hyperparameter. In this paper, we find γ = 0.09 to achieve a reasonable trade-off between prompt fidelity and downstream fidelity (see §9). Nonetheless, it is time-consuming since the second optimization requires traversing the vocabulary of the PLM. As a post-hoc interpretation, we argue that the decoupling result r should be *sparse*, i.e., most of the discrete prompts should correspond to 0. On the one hand, a dense interpretation is incomprehensible; on the other hand, as an effective prompt that motivates the PLM to output desired outputs, it should not have much useless token information. We propose a simple method that traverses the full downstream dataset and selects the v tokens with the highest frequency into our new vocabulary since it is intuitive that critical tokens contained in continuous prompts tend to appear in the dataset to be trained already. Moreover, since the parameters of the PLM are fixed, M(e ⊕ x) is invariant in different epochs. Thus, for a given discrete prompt e and sample x, we only need to compute the output once, which further speeds up the training. ## 4 **Studying P-Tuning: Experimental Setup** 4.1 Model And Training Details $\eqref{eq:walpha}$. P-tuning (Liu et al., 2021b), as a typical representative of continuous prompts, is used in this paper to study our proposed framework. For PLMs, we use the base version of BERT (Devlin et al., 2019), which is broadly adopted in the NLP field. We freeze the parameters of BERT and use the prompt template T = {x, [p1], [p2], y, [p3]}, where [p1], [p2], [p3] are the only trainable parameters with a two-layer LSTM (Hochreiter and Schmidhuber, 1997) head, respectively. We use a batch size of 8, initial learning rate of 0.00001, AdamW optimizier (Loshchilov and Hutter, 2019), and 15 training epochs for P-tuning; initial learning rate of 0.01, L1 loss coefficient of 0.01 and 4000 steps for training our interpretations with early stopping based on the validation set. Unless otherwise stated, all experiments are conducted in the 100shot scenario. ## 4.2 Studied Datasets Detailed experiments are conducted on the following 4 classification datasets: SST-2 (Socher et al., 2013), IMDB (Maas et al., 2011), Amazon Review Polarity (McAuley and Leskovec, 2013) and AGNews (Zhang et al., 2015). 
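Most experiments below use 50- or 100-shot subsets drawn with a fixed random seed (123, as noted in Appendix A). A tiny sketch of this subsampling step is shown here; the variable names are placeholders, and since the paper does not state whether k counts total examples or examples per class, the sketch simply draws k examples in total.

```python
import random

def sample_k_shot(train_examples, k=100, seed=123):
    """Draw a fixed k-shot training subset (a list of (text, label) pairs is assumed)."""
    rng = random.Random(seed)        # the same seed is reused for every task
    return rng.sample(train_examples, k)

# e.g. few_shot_sst2 = sample_k_shot(sst2_train, k=100)
```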
Statistics and target tokens for each dataset are attached in Appendix A | SST-2 | IMDB | Amazon | AGNews | | | | | | | | | | |---------------|----------|----------|----------|----------|-------|---------|----------|-------|---------|----------|-------|-------| | 50-shot | 100-shot | Full | 50-shot | 100-shot | Full | 50-shot | 100-shot | Full | 50-shot | 100-shot | Full | | | P-tuning | 71.11 | 78.36 | 86.91 | 68.08 | 71.50 | 87.21 | 72.26 | 78.08 | 92.35 | 84.49 | 86.46 | 90.36 | | Random | 49.92 | 49.92 | 49.92 | 51.75 | 51.75 | 51.75 | 50.43 | 50.43 | 50.43 | 39.67 | 39.67 | 39.67 | | Discrete-500 | 73.86 | 71.28 | 83.25 | 70.15 | 75.31 | 83.06 | 77.97 | 81.56 | 85.28 | 83.74 | 84.50 | 87.21 | | Discrete-768 | 77.70 | 78.42 | 85.06 | 67.32 | 72.45 | 83.98 | 77.65 | 80.88 | 86.38 | 84.74 | 84.74 | 87.54 | | Discrete-1000 | 69.69 | 75.62 | 82.92 | 71.72 | 72.16 | 83.12 | 79.17 | 80.52 | 86.25 | 83.64 | 84.38 | 86.96 | | Discrete-1500 | 75.56 | 76.55 | 83.80 | 70.19 | 78.16 | 83.60 | 76.25 | 77.72 | 86.51 | 83.92 | 84.99 | 87.56 | and B. Among all these datasets, test set accuracy is reported as our evaluation metric. ## 5 Hypothesis Verification The *Combination Hypothesis* argues the existence of combinations of discrete prompts in fairly small neighborhoods as an alternative to continuous prompts. Therefore, it should also be feasible to train combinations of discrete prompts individually for downstream tasks. The amount of loss is quantified as follows: ℓ(r; E, D) = Ex,y∼D[loss(M(r ⊤E⊕x), y)], (8) where loss(·) is the loss function on downstream task. We then minimize the loss function to obtain a replacement for continuous prompts. The optimized performance is provided in Table 1. Our method performs competitively, especially in fewshot scenarios. Furthermore, we find that v = 1500 is sufficient for the model to obtain good performance, while a larger vocabulary size is more likely to introduce noisy tokens, which is not conducive to optimization. Therefore, we set v = 1500 in the following research. Note that since the designed structure itself is difficult to optimize, we set the learning rate to 0.3 when trained on few-shot scenarios and 0.1 when trained on full datasets. Besides, the L1 loss function with a coefficient of 0.01 is added. This method does not aim to fully surpass P-tuning, but to verify the feasibility of the hypothesis that continuous prompt can be replaced by the full connection of discrete prompts without loss of precision and at the same time provide methods for faithfulness verification in §6. As an approximate alternative to continuous prompts, the loss of accuracy is unavoidable. For example, P-tuning is able to accurately find the simple connection between features and labels on full datasets, while it is more tricky for our method. | Prompt 1 | Prompt 2 | Prompt 3 | Downstream | | | | |------------|------------|------------|--------------|---------|-------|----------------------| | Token | PCT | Token | PCT | Token | PCT | ACC(p) ∆−→ ACC(r⊤E) | | What | 41.30 | exactly | 0.91 | things | 30.10 | 67.98 −4.22 −→ 63.76 | | feeling | 22.78 | drama | 4.53 | quality | 21.75 | 61.56 +2.47 −→ 64.03 | | cat | 1.32 | what | 25.68 | things | 28.63 | 62.77 −2.75 −→ 60.02 | ## 6 Faithfulness Verification 6.1 Do The Interpretations Faithfully Reflect The Source Prompts? 
In this section, we verify the prompt fidelity and downstream fidelity of the interpretations, i.e., the proximity of the weighted discrete prompts to the source prompts and the similarity in performance on downstream tasks. To obtain ground-truth labels, we first design three manual discrete templates on SST-2 and interpret them. The performance of prompt fidelity and downstream fidelity is shown in Table 2, where initial capitalization and plural forms are ignored. Most of the source tokens account for more than 20% of the weight among the 1500 tokens. We consider this a fairly high value, and the synonyms of the original tokens also achieve high values. However, several tokens like "exactly", "drama" and "cat" still achieve a low value. For tokens like "exactly" and "drama", the interpretations discover their synonyms and give them an extremely high percentage (>20%), such as "completely" for "exactly" and "film" for "drama". For tokens like "cat", since they do not help with downstream tasks, the model can only attempt to optimize for the first objective (Eq. 4), leading to a jumbled interpretation. As for downstream fidelity, the performance of the interpretations is similar to the source prompts in all 3 sets of experiments.

|           | SST-2      | IMDB       | Amazon     | AGNews     |
|-----------|------------|------------|------------|------------|
| Nearest-1 | 0.0026     | 0.0030     | 0.0036     | 0.0047     |
| Nearest-2 | 0.0027     | 0.0030     | 0.0037     | 0.0048     |
| Ours      | **0.0025** | **0.0027** | **0.0032** | **0.0043** |

Table 3: Performance of prompt fidelity on continuous prompts (average squared distance reported).

|           | SST-2 | IMDB  | Amazon | AGNews |
|-----------|-------|-------|--------|--------|
| Nearest-1 | 49.86 | 50.12 | 50.06  | 49.55  |
| Nearest-2 | 50.19 | 60.87 | 50.12  | 55.50  |
| Ours      | 75.18 | 66.77 | 69.72  | 74.21  |

Table 4: Performance of downstream fidelity on continuous prompts.

Furthermore, we verify the fidelity of the interpretations to continuous prompts. The performance of prompt fidelity and downstream fidelity is shown in Table 3 and Table 4, respectively. For comparison, the two nearest tokens in the Euclidean space are selected as interpretations of the continuous prompts. Across all tasks, the distance of our results from the source prompts is smaller than that of the nearest discrete token, indicating that our method restores the source prompts with higher fidelity. Moreover, simply taking the two nearest discrete tokens as a replacement for continuous prompts performs quite poorly on downstream tasks, often close to random predictions, while our method achieves performance comparable to the source prompts. In summary, our interpretations consistently maintain higher fidelity than the only existing method (selecting the nearest discrete prompts) and reflect the decision process of the source prompts well.

## 6.2 How Reductive Are the Interpretations on Downstream Tasks?

As described in §3.3, the interpretations are intended to be sparse, which means that the top few tokens of an interpretation are supposed to contain the majority of the information from the source prompt. In this section, we select the top five tokens of the interpretations as the vocabulary and train the weighting of these tokens using the optimization method in §5; a minimal sketch of this reduction test follows. Comparison with baselines is shown in Table 5.
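The reduction test can be sketched as follows, under our own naming (not the authors' code): keep the k most heavily weighted tokens of an interpretation, restrict the embedding matrix to those rows, and re-optimize a small weight vector on the downstream loss of Eq. 8.

```python
import torch

def build_reduced_prompt(r, E, k=5):
    """Select the top-k tokens of interpretation r and return their ids,
    the reduced embedding matrix, and a fresh trainable weighting."""
    _, top_ids = torch.topk(r, k)                       # k most heavily weighted tokens
    E_reduced = E[top_ids]                              # (k, d) sub-matrix
    w = torch.nn.Parameter(torch.full((k,), 1.0 / k))   # re-trained as in Section 5
    return top_ids, E_reduced, w

# Training then minimizes the downstream loss of Eq. 8 with the prompt w^T E_reduced,
# i.e. the same objective as for the full 1500-token vocabulary but with only k weights.
```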
In all scenarios, the tokens selected by our interpretations are more reductive than randomly selected tokens and the tokens nearest to the continuous prompts, implying that these tokens indeed carry more of the downstream-task-relevant information from the source continuous prompts. Moreover, for a more visual demonstration of the ability of the selected five tokens to restore performance, we show the test-set accuracy of several baselines under different training scenarios, including Manually, LM-BFF (Gao et al., 2021) and P-tuning. For Manually, we report the best performance among the five manually designed templates (see Appendix E). For LM-BFF, we only use it to automatically generate templates, without changing the target tokens or performing additional fine-tuning. The five tokens selected by our method outperform the templates selected by Manually and LM-BFF in all cases, and are even comparable to P-tuning in few-shot scenarios, while random selection and nearest-neighbor selection are not. This further shows that our selected tokens are reliable and faithful.

## 7 Plausibility Verification

## 7.1 What Do the Interpretations Look Like?

Still taking the 100-shot scenario as an example, we show our interpretations on different tasks in Table 6. For each prompt, the five tokens with the largest values are selected for display. As a comparison, the five tokens nearest to each prompt in the Euclidean space are also shown. As can be seen, our interpretations better reflect the decision-making of continuous prompts and output meaningful tokens compared to the *Nearest* baseline. For example, the continuous prompts in SST-2 induce the PLM to determine how great or terrible something in the input is, while the prompts in IMDB and Amazon induce the PLM to judge how well someone thinks of something. To our surprise, there are a large number of task-independent tokens that also induce the PLM to output the desired target tokens. For example, the interpretations on SST-2 contain tokens like "taste", "material" and "quality". These tokens are irrelevant to movie-review sentiment classification, but can prompt the PLM to output the target tokens "terrible" or "great". We consider that continuous prompts may sneak in *shortcuts* (Geirhos et al., 2020) during training, which will be briefly verified in §7.2.
Nonetheless, there still remain several noisy to- | SST-2 | IMDB | Amazon | AGNews | | | | | | | | | | |----------|---------|----------|----------|---------|----------|-------|---------|----------|-------|---------|----------|-------| | Manually | 50.80 | 59.01 | 58.83 | 68.76 | | | | | | | | | | Scenario | 50-shot | 100-shot | Full | 50-shot | 100-shot | Full | 50-shot | 100-shot | Full | 50-shot | 100-shot | Full | | LM-BFF | 64.85 | 64.85 | - | 57.91 | 57.91 | - | 59.05 | 59.05 | - | 55.37 | 55.37 | - | | P-tuning | 71.11 | 78.36 | 86.91 | 68.08 | 71.50 | 87.21 | 72.26 | 78.08 | 92.35 | 84.49 | 86.46 | 90.36 | | Random | 52.61 | 57.11 | 64.36 | 62.50 | 60.72 | 66.20 | 64.37 | 63.28 | 71.94 | 70.16 | 72.42 | 69.70 | | Nearest | 59.75 | 53.98 | 67.49 | 64.88 | 65.34 | 67.76 | 66.90 | 71.84 | 67.28 | 69.50 | 58.22 | 70.05 | | Ours | 75.01 | 74.79 | 74.90 | 72.43 | 70.17 | 73.45 | 78.96 | 79.44 | 80.76 | 78.49 | 79.01 | 79.80 | kens that are hard to understand for humans, especially in AGNews. These tokens seem irrelevant to the downstream task and it is difficult to spot potential shortcuts. We believe there are two reasons for the phenomenon. On the one hand, the tokens utilized by prompts are overcrowded in the semantic space, leading to the replacement of the interpreted tokens by irrelevant ones. On the other hand, the high complexity of the downstream task leads to a more difficult optimization of the interpretations. Future work will be conducted along these two directions. ## 7.2 Do Continuous Prompts Contain Shortcuts? As shown in Table 6, our interpretations reveal the possibility of continuous prompts using shortcuts, which perform well on benchmarks but may fail to transfer on the anomaly test set (Geirhos et al., 2020). Taking the interpretation of SST-2 as an example, It contains unexpected tokens like "something", "taste", etc. to induce the PLM for desired target labels "terrible" or "great". To test whether the model makes use of these shortcuts, we select several task-irrelevant texts containing shortcut tokens as suffixes to be added to the test set text and reverse the sentiment polarity of the added text to the test set labels on SST-2 (see Table 7.2). For example, "The food tastes delicious." is added if the ground-truth label is 0 (terrible), while "The food tastes unpalatable." is added if the ground-truth label is 1 (great). The significantly degraded performance suggests that the model utilizes a large number of shortcuts. To our surprise, these shortcuts do not disappear as the training data increases but are more fully exploited by the model, resulting in an accuracy of almost 0 after training on the full dataset. Obviously, continuous prompts of SST-2 are just baiting the PLM to output the target token terrible/great, not caring whether it is really a review of the movie or a review of food, cats, or something else. We present this phenomenon in the hope that it will attract more attention and research in the future. ## 8 Cross-Model Transfer Due to the inconsistent embedding dimensions and semantic spaces of different PLMs, cross-model transfer of continuous prompts is tricky. With our proposed interpretable framework that establishes connections between continuous and discrete prompts, it becomes feasible to transfer continuous prompts from source PLMs to target PLMs without extra training signals on target PLMs. 
Considering a scenario to transfer continuous prompts of the source PLM Ma to the target PLM Mb, we can first get the decoupling results r using the method presented in §3.3. Then the continuous prompts transferred to Mb are r⊤Eb, where Eb is the discrete prompt matrix of Mb. Following this idea, we investigate the feasibility of cross-model transfer from BERTbase (Devlin et al., 2019) to BERTlarge, RoBERTabase and RoBERTalarge (Liu et al., 2019) respectively in Table 8. Considering that only discrete templates are capable of cross-model transfer without extra training signals on target PLMs in existing studies, we choose (1) select the nearest tokens to continuous prompts; (2) manually designed templates that perform best on BERTbase; and (3) automatically generated templates using LM-BFF (Gao et al. (2021)) as the baselines. For LM-BFF, we automatically generate templates using T5base (Raffel et al., 2020) in the 100-shot scenario for cross-model transfer. Detailed results on the baseline (2) and (3) can be found in Appendix E. As can be seen, our method outperforms baselines in most scenarios, especially on tasks like AGNews where it is tricky to construct discrete | Prompt 1 | Prompt 2 | Prompt 3 | | | | | | | | |------------|------------|------------|-------------|--------------|------------|------------|------------|----------|-------| | Nearest | Ours | Nearest | Ours | Nearest | Ours | | | | | | the | something | 0.867 | of | dark | 0.245 | the | involving | 0.649 | | | his | those | 0.010 | the | taste | 0.168 | is | seem | 0.130 | | | of | horror | 0.004 | his | material | 0.047 | of | things | 0.046 | | | . | what | 0.004 | is | quality | 0.330 | was | touching | 0.033 | | | him | bad | 0.004 | him | drama | 0.275 | several | working | 0.018 | | | SST-2 | was | he | 0.334 | was | highly | 1.295 | was | anything | 0.345 | | . | during | 0.309 | . | how | 0.220 | were | ##ness | 0.229 | | | The | someone | 0.087 | were | particularly | 0.144 | the | atmosphere | 0.200 | | | were | having | 0.072 | the | himself | 0.051 | of | ##ful | 0.163 | | | the | obviously | 0.062 | The | acted | 0.048 | The | theater | 0.105 | | | IMDB | . | he | 0.856 | seemed | completely | 1.316 | seemed | became | 0.451 | | , | guy | 0.098 | performance | heat | 0.100 | was | down | 0.354 | | | the | having | 0.086 | seems | How | 0.097 | him | ##le | 0.238 | | | their | kid | 0.080 | him | nearly | 0.075 | became | seemed | 0.089 | | | of | terrible | 0.066 | became | totally | 0.059 | would | scene | 0.080 | | | Amazon | National | Free | 1.335 | National | future | 0.970 | National | should | 1.109 | | 2005 | than | 0.941 | 2005 | senior | 0.004 | 2004 | control | 0.046 | | | 2004 | toward | 0.223 | 2004 | Free | 0.004 | government | Department | 0.008 | | | 2006 | Chief | 0.031 | Central | ##ive | 0.003 | 2006 | might | 0.006 | | | senior | likely | 0.011 | national | Top | 0.002 | 2005 | Research | 0.004 | | | AGNews | | | | | | | | | | | 50-shot | 100-shot | Full | | |----------------------------------------|------------|--------|-------| | (Raw Test Set) | 71.11 | 78.36 | 86.91 | | The food tastes delicious/unpalatable. | 57.50 | 52.28 | 2.53 | | Those cats seem to be great/terrible. | 47.28 | 45.58 | 9.50 | | Something dark is of good/bad quality. | 43.71 | 40.03 | 3.84 | templates using prior knowledge. This enables zero-shot transfer of continuous prompts across arbitrary models without the restrictions of vector dimensionality and semantic space. 
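The transfer recipe described above (pb = r⊤Eb) amounts to re-mixing the target model's own embeddings with the weights learned on the source model. A rough sketch with Hugging Face models is given below; the subword-alignment step (averaging the target model's subword embeddings for each token in the shared 1500-token interpretation vocabulary) is our own simplifying assumption, not a detail specified in the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def transfer_prompt(r, vocab_tokens, target_name="roberta-base"):
    """Rebuild a continuous prompt on a target PLM from interpretation weights r.
    `vocab_tokens` is the shared interpretation vocabulary (e.g., 1500 strings);
    r holds one non-negative weight per token in that vocabulary."""
    tok = AutoTokenizer.from_pretrained(target_name)
    model = AutoModel.from_pretrained(target_name)
    E_b = model.get_input_embeddings().weight            # (V_b, d_b)

    # Map each interpretation token to an embedding row of the target model.
    # (Simplification: tokens that split into several subwords are averaged.)
    rows = []
    for t in vocab_tokens:
        ids = tok(t, add_special_tokens=False)["input_ids"]
        rows.append(E_b[ids].mean(dim=0))
    E_shared = torch.stack(rows)                          # (v, d_b)

    return r @ E_shared                                   # transferred prompt, shape (d_b,)
```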
For the poor performance on SST-2, we consider that the continuous prompts learned using BERTbase inherently contain a large number of shortcuts, which may no longer be applicable after being captured by the interpretations and transferred to larger PLMs. Therefore, the performance of cross-model transfer is affected by the robustness of the source prompts. If continuous prompts are trained on larger PLMs and datasets, better performance will be obtained using our interpretations and is expected to be applied to areas such as model compression. ## 9 Further Analysis Effect of Gamma. We analyze the effect of hyperparameter γ, i.e., the trade-off between prompt fidelity and downstream fidelity (Eq.6). Intuitively, as γ increases, the prompt fidelity decreases while the downstream fidelity goes up. When γ is 0, our method degenerates to use only prompt fidelity as the optimization objective. Fig. 3 shows the results of the grid search using the interpretations described in §3. As expected, the accuracy on BERTbase improves as gamma increases since the interpretations are directly optimized on it. Nonetheless, when γ is larger than 0.09, the performance of the interpretations for cross-model transfer decreases. As a consequence, we choose γ = 0.09 in this paper. ## 10 Conclusion In this paper, we present a novel view that interprets continuous prompts as a combination of discrete prompts. Contrary to the previous perspective which attempts to discover a one-to-one mapping between continuous prompts and discrete prompts, we demonstrate the continuous prompt to be an embedding lookup table with the one-hot restriction | SST-2 | IMDB | Amazon | AGNews | | | | | | | | | | |-------------------|--------|----------|----------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | P-tuning on Ma | 78.36 | 71.50 | 78.08 | 86.46 | | | | | | | | | | Transferred Model | Mb | Mc | Md | Mb | Mc | Md | Mb | Mc | Md | Mb | Mc | Md | | P-tuning | 74.52 | 87.70 | 73.09 | 77.02 | 80.70 | 88.84 | 85.25 | 92.21 | 84.07 | 86.66 | 85.63 | 82.82 | | Random | 50.58 | 50.01 | 50.03 | 61.67 | 73.37 | 76.54 | 61.20 | 79.88 | 77.16 | 42.95 | 61.64 | 53.96 | | Nearest | 50.19 | 50.52 | 49.97 | 57.02 | 59.29 | 62.38 | 51.41 | 87.03 | 75.80 | 56.37 | 51.38 | 44.34 | | Manually | 54.20 | 72.54 | 83.91 | 53.68 | 56.58 | 70.12 | 51.78 | 53.30 | 78.23 | 71.29 | 46.22 | 48.64 | | LM-BFF | 75.12 | 81.38 | 86.05 | 61.30 | 73.13 | 75.94 | 60.85 | 83.30 | 85.48 | 58.03 | 57.21 | 59.50 | | Ours | 69.58 | 69.74 | 74.63 | 72.52 | 75.69 | 80.18 | 75.02 | 82.30 | 90.33 | 76.04 | 69.91 | 68.75 | ![8_image_0.png](8_image_0.png) removed. Detailed experiments are conducted to verify that our interpretations faithfully reflect the reasoning of source prompts with both prompt fidelity and downstream fidelity. Furthermore, our interpretations exhibit promising readability and plausibility, which not only provides a tool for understanding model decisions but also offers a chance for discovering potential shortcuts contained in the prompts. Finally, with the bridge between continuous prompts and discrete prompts, we analyze the feasibility of cross-model transfer for continuous prompts with the proposed method. Results show that even trained on a small PLM (BERTbase) and 100-shot scenario, continuous prompts maintain good performance after transferring to various large PLMs. 
We hope that this work will bring a novel view for interpreting continuous prompts and encourage more research to explore the internal mechanisms of continuous prompts. ## Acknowledgements This work is partly supported by the Joint Funds of the National Natural Science Foundation of China under No. U21B2020 and the Shanghai Science and Technology Plan under No. 22511104400. ## Limitations Although the proposed method provides interpretations for continuous prompts with both faithfulness and plausibility, it can still only be used as an approximation to find the most likely combination, since the process of combining discrete prompts to continuous prompts is irreversible. Moreover, the output layer of PLMs tends to degenerate and occupy an anisotropic cone in the vector space (Wang et al., 2020; Li et al., 2020b), which significantly increases the difficulty of finding the correct interpretations. We encourage future research to take the magnitude of token vectors and the tokens in their neighborhoods into consideration for a more robust interpretation. Due to space and time constraints, we only perform detailed experiments on P-tuning and the bidirectional language models like BERT and RoBERTa, which ignored numerous SOTA works such as Prefix Tuning (Li and Liang, 2021), Prompt Tuning (Lester et al., 2021) for continuous prompts and GPT (Radford et al., 2019), T5 (Raffel et al., 2020) for PLMs. We encourage future research to conduct experiments on more prompt methods and PLMs to investigate the generalizability of our method. ## Ethical Statement We propose a novel view to interpret continuous prompts, which have been considered "black boxes", as combinations of human-understandable discrete tokens. Since the method itself is unbiased and faithful, and all experiments are conducted on publicly available datasets, we believe that our work does not create any potential ethical risk. Further, we discover shortcuts latent in continuous prompts, implying that systematic biases or discrimination may also exist in continuous prompts. These biases may originate from training datasets which are exploited by continuous prompts as a shortcut to the acquisition of true labels, or even originate from artificially implanted backdoors. We hope this work will provide the possibility to detect these potential biases in continuous prompts. Our created artifacts are intended to provide researchers or users with a tool for understanding decision-making and detecting possible unexpected shortcuts of continuous prompts, while at the same time offering the feasibility of cross-model transfer without extra training signals on target PLMs. They are compatible with the original access conditions. All use of existing artifacts is consistent with their intended use in this paper. ## References Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. 
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!\#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2126–2136. Association for Computational Linguistics. deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Haitao Zheng, and Maosong Sun. 2022. Openprompt: An open-source framework for promptlearning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 - System Demonstrations, Dublin, Ireland, May 22-27, 2022, pages 105–113. Association for Computational Linguistics. Charles Dugas, Yoshua Bengio, François Bélisle, Claude Nadeau, and René Garcia. 2000. Incorporating second-order functional knowledge for better option pricing. In Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NIPS) 2000, Denver, CO, USA, pages 472–478. MIT Press. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 3816–3830. Association for Computational Linguistics. Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. 2020. Shortcut learning in deep neural networks. Nat. Mach. Intell., 2(11):665–673. Adi Haviv, Jonathan Berant, and Amir Globerson. 2021. Bertese: Learning to speak to BERT. In *Proceedings* of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 3618–3623. Association for Computational Linguistics. John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2733–2743. Association for Computational Linguistics. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Comput.*, 9(8):1735– 1780. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In *Proceedings of* the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4198–4205. Association for Computational Linguistics. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know. *Trans. Assoc. Comput. Linguistics*, 8:423–438. Pride Kavumba, Ryo Takahashi, and Yusuke Oda. 2022. Are prompt-based models clueless? 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2333–2352. Association for Computational Linguistics. Daniel Khashabi, Xinxi Lyu, Sewon Min, Lianhui Qin, Kyle Richardson, Sean Welleck, Hannaneh Hajishirzi, Tushar Khot, Ashish Sabharwal, Sameer Singh, and Yejin Choi. 2022. Prompt waywardness: The curious case of discretized interpretation of continuous prompts. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 3631–3643. Association for Computational Linguistics. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 3045– 3059. Association for Computational Linguistics. Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020a. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9119– 9130. Association for Computational Linguistics. Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020b. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9119– 9130. Association for Computational Linguistics. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4582– 4597. Association for Computational Linguistics. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021a. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. CoRR, abs/2107.13586. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021b. GPT understands, too. *CoRR*, abs/2103.10385. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In *The 49th Annual Meeting of the Association for* Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 142–150. The Association for Computer Linguistics. Julian J. McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. 
In Seventh ACM Conference on Recommender Systems, RecSys '13, Hong Kong, China, October 12-16, 2013, pages 165–172. ACM. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing* Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024–8035. Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. In *Advances in Neural Information Processing Systems 34:* Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 11054–11070. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2463–2473. Association for Computational Linguistics. Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying lms with mixtures of soft prompts. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5203–5212. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Teven Le Scao and Alexander M. Rush. 2021. How many data points is a prompt worth? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2627–2636. Association for Computational Linguistics. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 4222–4235. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1631–1642. ACL. Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, and Jie Zhou. 2022. 
On transferability of prompt tuning for natural language processing. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 3949– 3969. Association for Computational Linguistics. Lingxiao Wang, Jing Huang, Kevin Huang, Ziniu Hu, Guangtao Wang, and Quanquan Gu. 2020. Improving neural language generation with spectrum control. In *8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April* 26-30, 2020. OpenReview.net. Albert Webson and Ellie Pavlick. 2022. Do promptbased models really understand the meaning of their prompts? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 2300–2344. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November 16-20, 2020, pages 38–45. Association for Computational Linguistics. Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, and Huajun Chen. 2022. Differentiable prompt makes pre-trained language models better few-shot learners. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649–657. Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [MASK]: learning vs. learning to recall. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5017–5033. Association for Computational Linguistics. ## A Dataset General descriptions and Statistics of the datasets we mentioned above are shown in Table 9 and 10. For few-shot scenarios, We randomly sample the same dataset for all tasks with the random seed set to 123. | Dataset | Language | Domain | |-----------|------------|----------------| | SST-2 | English | Moive Review | | IMDB | English | Moive Review | | Amazon | English | Product Review | | AGNews | English | News Report | Table 9: General descriptions of datasets. | Dataset | Train | Valid | Test | |-----------|---------|---------|--------| | SST-2 | 6920 | 872 | 1821 | | IMDB | 20000 | 5000 | 25000 | | Amazon | 2000000 | 1600000 | 400000 | | AGNews | 80000 | 40000 | 7600 | Table 10: Statistics of datasets. ## B Target Tokens Manual verbalizers are adopted in this paper. We rank the target tokens by their likelihoods and select the target token with the maximum likelihood as the classification output. 
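As a small illustration of this manual-verbalizer step (our own sketch, not the OpenPrompt implementation used in the paper): read off the masked-LM logits at the [MASK] position, keep only the target-token entries, and take the argmax.

```python
import torch

def verbalize(mask_logits, tokenizer, target_tokens):
    """mask_logits: logits over the vocabulary at the [MASK] position.
    target_tokens: e.g., ["terrible", "great"] for SST-2 (see Table 11).
    Returns the predicted class index."""
    ids = [tokenizer.convert_tokens_to_ids(t) for t in target_tokens]
    scores = mask_logits[ids]            # likelihoods of the target tokens only
    return int(torch.argmax(scores))     # class with the maximum likelihood
```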
The used target tokens for each task are shown in Table 11. ## C Usage Of Existing Packages The pre-processing steps and prompt-based methods are all implemented in OpenPrompt (Ding et al., 2022), an open-source framework for deploying prompt learning. Our interpretable method is implemented in PyTorch (Paszke et al., 2019), an open-source framework for deploying deep learning algorithms. For PLMs, we use "bertbase-cased" as the base model, "bert-large-cased", "roberta-base", "roberta-large" for cross-model transfer, and "T5-base" for generating templates in LM-BFF from Huggingface transformers (Wolf et al., 2020). All licenses of these packages allow us for normal research use. Identical hyperparameters are adopted regardless of the dataset. Detailed setups for P-tuning and our interpretable method are already shown in §4.1. | Dataset | Target Tokens | |-----------|----------------------------------------| | SST-2 | terrible, great | | IMDB | bad, good | | Amazon | bad, good | | AGNews | politics, sports, business, technology | Table 11: Target tokens of classification tasks. For the LM-BFF baseline, we fix the target tokens and only use T5base to search for the best discrete template with the training epochs of 10, learning rate of 0.00001, batch size of 2, and beam width of 100. ## D Experimental Details For all the experiments mentioned in this paper, we use 2 NVIDIA GeForce GTX 1080 Ti GPUs with 11G memory each. For training our interpretable framework, an additional linear layer with n × v parameters is introduced besides the source PLM, where n denotes the number of continuous prompts and v denotes the vocabulary size. In this paper, we set n = 3, v = 1500, which means only 4,500 extra parameters are introduced. Compared to large-scale PLMs such as BERT or RoBERTa, these parameters are almost negligible. ## E Performance Of Discrete Templates The performance of the manually designed templates (the first five rows of each table) and the templates generated by LM-BFF (the last row of each table) on each task and PLM is shown in Table 12-15. For manually designed templates, the bestperforming templates on BERTbase are selected as the baseline templates for cross-model transfer. | Templates | BERTbase | BERTlarge | RoBERTabase | RoBERTalarge | |-------------------------------------|------------|-------------|---------------|----------------| | {x}The sentiment :{y}. | 50.25 | 50.36 | 56.07 | 74.74 | | {x}Terrible or great :{y}. | 50.69 | 49.92 | 59.69 | 69.52 | | {x}Overall, it is a{y}film . | 50.14 | 58.05 | 73.15 | 71.94 | | {x}It feels{y}about the film . | 50.63 | 53.27 | 73.15 | 84.68 | | {x}The feeling of the review is{y}. | 50.80 | 54.20 | 72.54 | 83.91 | | {x}It's{y}. | 64.85 | 75.12 | 81.38 | 86.05 | Table 12: Performance of discrete templates on SST-2. | Templates | BERTbase | BERTlarge | RoBERTabase | RoBERTalarge | |-------------------------------------|------------|-------------|---------------|----------------| | {x}The sentiment:{y}. | 50.35 | 50.92 | 72.98 | 79.18 | | {x}Bad or good:{y}. | 59.01 | 53.68 | 56.58 | 70.12 | | {x}Overall, it is a{y}film. | 57.04 | 63.17 | 77.48 | 83.95 | | {x}It feels{y}about the film. | 50.54 | 51.75 | 72.22 | 72.46 | | {x}The feeling of the review is{y}. | 50.48 | 57.40 | 73.51 | 72.58 | | {x}Very{y}. | 57.91 | 61.30 | 73.13 | 75.94 | Table 13: Performance of discrete templates on IMDB. 
| Templates | BERTbase | BERTlarge | RoBERTabase | RoBERTalarge | |-------------------------------------|------------|-------------|---------------|----------------| | {x}The sentiment:{y}. | 50.25 | 50.73 | 53.29 | 85.27 | | {x}Bad or good:{y}. | 58.83 | 51.78 | 53.30 | 78.23 | | {x}Overall, it is a{y}product. | 50.17 | 57.60 | 77.67 | 78.35 | | {x}It feels{y}about the product. | 50.72 | 56.29 | 79.05 | 83.13 | | {x}The feeling of the review is{y}. | 50.09 | 53.99 | 73.09 | 66.54 | | {x}Very{y}. | 59.05 | 60.85 | 83.30 | 85.48 | Table 14: Performance of discrete templates on Amazon. | Templates | BERTbase | BERTlarge | RoBERTabase | RoBERTalarge | |--------------------------------|------------|-------------|---------------|----------------| | {x}The topic is about{y}. | 68.76 | 71.29 | 46.22 | 48.64 | | {x}The type of the news is{y}. | 41.57 | 51.66 | 50.70 | 59.96 | | {x}News category:{y}. | 51.95 | 75.29 | 80.78 | 79.64 | | {x}Overall, it is{y}news. | 45.75 | 46.14 | 52.96 | 37.72 | | {x}What type is the news?{y}. | 64.95 | 63.21 | 69.79 | 77.63 | | {x}in{y}. | 55.37 | 58.03 | 57.21 | 59.50 | Table 15: Performance of discrete templates on AGNews. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations. ✓ A2. Did you discuss any potential risks of your work? Section Ethical Considerations. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section Abstract and Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Appendix C. ✓ B1. Did you cite the creators of artifacts you used? Section References. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix C. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section Ethical Considerations. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? All the data we use is derived from widely used open-source datasets, which have undergone public scrutiny. Since our paper focuses only on the interpretability and transferability of continuous prompts, potentially privacy-invasive or offensive content contained in these datasets is not further discussed. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendix A. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A. 
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** Section 4.1, Section 5, Section 9 And Appendix D. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix D. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.1, Section 5 and Section 9. ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Since all experiments were run once with the same random seed 123, we did not report descriptive statistics such as error bars and summary statistics. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.1 and Appendix C. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
chrupala-2023-putting
Putting Natural in Natural Language Processing
https://aclanthology.org/2023.findings-acl.495
Human language is firstly spoken and only secondarily written. Text, however, is a very convenient and efficient representation of language, and modern civilization has made it ubiquitous. Thus the field of NLP has overwhelmingly focused on processing written rather than spoken language. Work on spoken language, on the other hand, has been siloed off within the largely separate speech processing community which has been inordinately preoccupied with transcribing speech into text. Recent advances in deep learning have led to a fortuitous convergence in methods between speech processing and mainstream NLP. Arguably, the time is ripe for a unification of these two fields, and for starting to take spoken language seriously as the primary mode of human communication. Truly natural language processing could lead to better integration with the rest of language science and could lead to systems which are more data-efficient and more human-like, and which can communicate beyond the textual modality.
# Putting Natural In Natural Language Processing Grzegorz Chrupała Department of Cognitive Science and Artificial Intelligence Tilburg University grzegorz@chrupala.me ## Abstract Human language is firstly spoken and only secondarily written. Text, however, is a very convenient and efficient representation of language, and modern civilization has made it ubiquitous. Thus the field of NLP has overwhelmingly focused on processing written rather than spoken language. Work on spoken language, on the other hand, has been siloed off within the largely separate speech processing community which has been inordinately preoccupied with transcribing speech into text. Recent advances in deep learning have led to a fortuitous convergence in methods between speech processing and mainstream NLP. Arguably, the time is ripe for a unification of these two fields, and for starting to take spoken language seriously as the primary mode of human communication. Truly natural language processing could lead to better integration with the rest of language science and could lead to systems which are more data-efficient and more human-like, and which can communicate beyond the textual modality. ## 1 Introduction The ACL 2023 theme track urges the community to check the reality of the progress in NLP. This position paper adopts an expansive interpretation of this question. It is definitely worth inquiring into the apparent advances of current NLP in their own terms. Here, however, I question these terms and argue that our field has focused on only a limited subset of human language which happens to be convenient to work with, and thus misses major aspects of human communication. ## 1.1 Human Language Is Primarily Spoken Humans are an exceptional species in many ways, and out of these, human language is one of the most salient. Unlike communication systems used by other organisms, human language is open-ended, capable of expressing abstract concepts, and of reference to events displaced in time and space. While the capacity to acquire language is universal and largely innate (Darwin, 1874; Pinker and Bloom, 1990) it also is culturally mediated and likely arose via gene-culture co-evolution (Deacon, 1998; Richerson and Boyd, 2010). One revolutionary technology which turbocharged human language was writing, which was invented a handful of times in the most recent few thousand years of the human story (Fischer, 2003). Writing, followed by the printing press, followed by the Internet, have made written text ubiquitous to the extent that it is easy to forget that the primary and universal modality for most human communication throughout history has been spoken.1 Even today many of the world's languages do not have a standardized written form. For those that do, the written modality originated as a compressed, symbolic representation of the spoken form. Children acquire a spoken language (and not infrequently two or more) within the first few years of their life with no or little explicit instruction, largely relying on weak, noisy supervision via social interaction and perceptual grounding. In contrast, they require hundreds of hours of explicit instruction and arduous conscious practice to learn to read and write, and most are only able to learn the written modality a couple of years at best after becoming fluent communicators in one or more spoken languages. ## 1.2 Reality Check Thus, arguably, the natural language for which we are biologically equipped is spoken. 
Written language is a secondary development, which happens to be very useful and widespread, but is nevertheless derivative of speech. This appears to be the 1I am using *spoken language* in the broad sense here, including both the oral and gestural (signed) modes of expression, and opposing these to the written modality. 7820 consensus view in linguistics going back at least a century (de Saussure, 1916; Bloomfield, 1933).2 Given these facts, is then the field of Natural Language Processing (NLP) a misnomer? Are we making less progress with getting machines to communicate via human language than current advances with processing written text would have us believe? ## 2 Nlp Is Written Language Processing To anyone with experience reading, reviewing and publishing papers in NLP conferences and journals (such the ACL conferences and TACL) it is evident that the field is very strongly focused on processing written language. While this is evident to practitioners, it is also largely tacit and implicit. ## 2.1 Unstated Assumptions The fact that a paper is concerned with written as opposed to spoken oral or sign language is almost invariably assumed to be the default and not explicitly stated. Furthermore, even if there is some interest in tackling a dataset of originally spoken language (for example in much work on dialog and child language acquisition), the usual approach is to use a written transcription of this data rather than the actual audio. This is partly a matter of convenience, but partly due to the assumption that the written form of language is the canonical one while the audio modality is just a weird, cumbersome encoding of it. To some extent such an implicit belief also lurks in much work within the speech community: the main thrust of speech research has always been on so called Automatic Speech Recognition (ASR), by which is meant automatically transcribing spoken language into a written form. Written text is treated as an interface and an abstraction barrier between the field of speech processing and NLP. In Sections 3 and 4 I address problems arising from the above assumptions, as well as the challenges and opportunities we have once we discard them. Firstly, however, it will be instructive to briefly quantify the assertion that NLP is Written Language Processing. by looking at historical publication patterns. ![1_image_0.png](1_image_0.png) ## 2.2 Publication Patterns Figure 1 shows the proportion of NLP papers explicitly mentioning speech-related terms in their title over the years covered by the ACL anthology (1950 through 2022), which is a comprehensive database of NLP papers from a wide variety of relevant conferences, workshops and journals.3 The fraction of speech-focused NLP papers varies quite a bit over the years, but mostly stays below 10%. There is a large peak going to 20% in 1989, followed by three years with around 10% of speech papers. A look at the underlying data reveals that the 1989 peak is associated with the inclusion in the anthology of the proceedings of the Speech and Natural Language Workshop (Hirshman, 1989) organized by the US Defense Advanced Research Projects Agency (DARPA), and featuring 79 papers. This workshop ran until 1992 and is thus largely responsible for the four-year run of sizable representation of spoken language research in the ACL anthology. The overview of the last edition of this event notes the then ongoing "paradigm shift in natural language processing towards empirical, corpus based methods" (Marcus, 1992). 
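As an aside, the counting behind Figure 1 can be approximated with a short script. The sketch below is our assumption about the procedure rather than the author's actual code; both the speech-related term list and the `anthology.csv` metadata export are hypothetical.

```python
import csv
import re
from collections import Counter

# Hypothetical input: a CSV export of ACL Anthology metadata with "year" and "title" columns.
SPEECH_TERMS = re.compile(r"\b(speech|spoken|speaker|phonem\w*|prosod\w*|acoustic)\b",
                          re.IGNORECASE)

totals, hits = Counter(), Counter()
with open("anthology.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        year = int(row["year"])
        totals[year] += 1
        if SPEECH_TERMS.search(row["title"]):
            hits[year] += 1

for year in sorted(totals):
    share = 100.0 * hits[year] / totals[year]
    print(f"{year}\t{share:.1f}% of {totals[year]} titles mention speech-related terms")
```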
It is likely that this shift in NLP methodology was at least partly driven by this workshop, the associated DARPA program, and the resulting increased interaction between researchers working on spoken and written language. In recent years (since 2010) the proportion of NLP papers explicitly mentioning spoken language has resolutely stayed below 6%. While the major ACL events typically include speech processing as a topic in their calls for papers, as well as a track including the term *speech* in its name, such as *Speech and Multimodality*, processing of spoken language it clearly a rather minor concern of these conferences. Instead, speech work is published in different venues organized by a separate speech processing community. ## 3 Spoken Language Is Richer While the primacy of the spoken modality as means of communication is the consensus view in linguistics, Section 2.1 identifies unstated assumptions among NLP practitioners which amount to the opposite view. Here I outline why these assumptions contradicting the scientific view are not only incorrect but also detrimental to progress on understanding and processing real human language. ## 3.1 Key Features Of Spoken Language Speech and writing are two different modalities with different affordances, and there is no straightforward mapping between them. Some writing systems such as those used for English, Arabic or Chinese do not even represent the phonology of the spoken language in a direct way. More crucially, writing only captures a small proportion of the information carried in the equivalent audio signal. Writing discards most of the information falling within the general category of paralinguistic phenomena, such as that related to speaker identity, speaker emotional state and attitude; likewise, information conveyed by speech tempo and amplitude, including most of suprasegmental phonology such as intonation and rhythm is typically not present in writing. In addition to the auditory signal, oral spoken language can also feature visual clues in the form of accompanying gestures, facial expressions and body posture. Sign languages rely on the visual channel exclusively, and in fact there are no widely used writing systems for any of them (Grushkin, 2017). Unlike most text, speech also typically contains a variable amount of channel noise (Shannon, 1948) such as environmental sounds. Natural spontaneous speech contains fillers, hesitations, false starts, repairs and other disfluencies (Dinkar et al., 2023) which are usually edited out in the written form of language. Even more critically, spontaneous speech typically takes the form of a dialog between two or more participants. Dialog is unlike common written genres: crucially it features turn-taking behavior which is governed by complex and incompletely understood rules (Skantze, 2021). These features of natural dialog also mean that the traditional cascaded approach of ASR followed by NLP faces serious limitations, not least due to low ASR performance in this regime (Szymanski et al. ´ , 2020), but also due to its inherently interactive nature. For all these reasons, spoken language is more informationally rich than written language;4the same factors also make it more variable, complex and noisy, and consequently more challenging for automated processing (Shriberg, 2005). Thus any understanding of language as a human faculty gained via the written modality does not necessarily generalize to the spoken modality. 
The same is also the case about language applications: for example the successes and shortcomings of state-of-the-art text chatbot systems (e.g. Stiennon et al., 2020) are likely to be substantially different from those of spoken dialog systems. ## 3.2 Challenges Of Speech As an illustrative example, let us consider the effectiveness of self-supervision: inducing representations of words and phrases from just listening to speech or reading text. For text, this general family of methods has been successful since around the time of Latent Semantic Analysis (Dumais, 2004), and currently large written language models exhibit a constantly expanding range of abilities (Wei et al.). In contrast, self-supervision with spoken language has met with a limited amount of success only in the last few years (e.g. Baevski et al., 2020; Hsu et al., 2021), and these models as of now are usually only fine-tuned on the task of ASR. One obvious difference is that items such as words and morphemes are either explicitly delimited or easily discovered in text, but finding them is an unsolved research problem in speech, due to the inherent variability of this modality. On the other hand, learning spoken language becomes much more tractable when self-supervision is augmented with grounding in perception. The cross-modal correlations, though unreliable and noisy, are often sufficient to substantially facilitate the discovery and representation of words (Peng and Harwath, 2022; Nikolaus et al., 2022) and syllables (Peng et al., 2023) in spoken language. For written language, grounding in the visual modality 4One exception to this general pattern is the presence of two spatial dimensions in written language, and the role of 2D layout in textual publications. has also been found to help in some cases (e.g. Tan and Bansal, 2020) but it does not appear crucial, as the dominance of text-only language models demonstrates. Since spoken language is richer in information content, it should in principle be possible to exploit this extra signal for improving performance. One obstacle to such developments is the increased variability and channel noise. Perhaps less obviously, a second obstacle is that widely used benchmarks are often designed in a way which obstructs obtaining such gains. For example the 2021 Zerospeech challenge (Dunbar et al., 2021) which aimed to benchmark spoken language modeling, evaluates systems according to the following criteria: phoneme discrimination, word recognition, syntactic acceptability and correlation to human judgments of word similarities. None of these metrics would benefit much from modeling speaker characteristics, speech tempo, pitch, loudness or even suprasegmental phonology. Except for the first one, these metrics would be very well suited for models trained exclusively on written language. The combined effect of these two obstacles was evident in the results of Zerospeech 2021 where written-language toplines, such as RoBERTa (Liu et al., 2019), outperformed spoken language models on the latter three metrics, often by large margins. ## 4 Unifying Speech Processing And Nlp As evident from the examples highlighted above, spoken language is in some ways quite different from written language and presents a distinct set of challenges and potentials. 
In order to understand how much progress the fields of speech and NLP are making in understanding and implementing human language, we need to take speech seriously qua language, not just a cumbersome modality, and measure our progress accordingly. ## 4.1 Converging Methodology The time is ripe for a closer integration of the speech and NLP communities and for a unified computational science of language. The set of methodologies used in speech and text processing used to be quite distinct in the past. Since the adoption of deep learning both fields have converged to a large extent: currently the state-of-the-art models for both spoken and written language rely on transformer architectures (Vaswani et al., 2017) self-trained on large amounts of minimally preprocessed data, with optional fine-tuning. The technical communication barriers across disciplinary boundaries are thus much lower. The recent emergence of the concept of *textless NLP* (Lakhotia et al., 2021) exemplifies the potential of unifying these two fields. ## 4.2 Opportunities The following paragraphs outline the most important benefits of making NLP more natural, ranging from basic science to practical applications. Modeling language acquisition. An increased attention to spoken language within NLP has the potential to lead to a more realistic understanding of how well our current methods can replicate key human language abilities. Acquiring language under constraints that human babies face is the big one. There is a large amount of work on modeling human language acquisition which uses exclusively written data (at best transcribed from the original audio). Hopefully by this point the reader will be convinced that the relevance of this work to the actual issue under consideration is highly questionable. We stand a much better chance of figuring out human language acquisition if we refocus attention on spoken language. Data efficiency. Linzen (2020) argues convincingly for language models which are human-like in their data-efficiency and generalization capabilities. It is, however, unclear whether these properties can even be properly evaluated via the medium of written language. Since the informational density and the signal-to-noise ratio in written vs spoken language are so very different, it makes little sense to compare human children with language models trained on text. Furthermore, the challenges of pure self-supervision may motivate us to take seriously the impact of grounding in perception and interaction, which humans use universally as a learning signal. Unwritten languages. Many modes of human communication lack standard written representation. These range from major languages spoken by millions of people such as Hokkien (Mair, 2003), to small or non-standard language varieties, to sign languages. Shifting the emphasis of NLP research from text to the primary, natural oral and gestural modalities will benefit the communities using these varieties. Spoken dialog systems. Dingemanse and Liesenfeld (2022) argue that language technology needs to transition from the text to talk, and provide a roadmap of how to harness conversational corpora in diverse languages to effect such a transition. Indeed, one of the most obvious benefits of spoken language NLP would be dialog systems that do not need to rely on ASR and are able to exploit the extra information lost when transcribing speech, enabling them to understand humans better and interact with them in a more natural way. Non-textual language data. 
Finally, there is a large and increasing stream of non-textual language data such as podcasts, audio chat channels and video clips. Processing such content could also benefit from an end-to-end holistic treatment without the need of going through the lossy conversion to text. ## 4.3 Recommendations If you are an NLP practitioner and view spoken language as outside the scope of your field, reconsider. Getting into speech processing does require understanding its specifics, but it is not as technically daunting as it used to. Conversely, if you are a speech researcher, consider that ASR and text-tospeech is not all there is: we can get from sound to meaning and back without going through the written word. Both fields would do well to consider the whole of human language as their purview. Increased collaboration would benefit both communities, and more importantly, would give us a chance of making real progress towards understanding and simulating natural language. ## 5 Limitations The main limitation of this paper is the one applying to any opinion piece: it is subjective and personal, as the views of the authors are inherently limited by their expertise and experience. More specifically, this paper argues for an increased interaction between the speech and NLP communities, but the author is more strongly embedded in the latter, and thus addresses this audience primarily. Additionally, the short paper format imposes significant constraints on the amount of nuance, detail and discussion of relevant literature, and thus readers may find some of the claims to be less strongly supported and less hedged than would be ideal, or proper in a longer treatment of this topic. ## Acknowledgements I would like to thank Hosein Mohebbi, Afra Alishahi, Mark Dingemanse, Tanvi Dinkar, Piotr Szymanski and three anonymous reviewers for their ´ valuable feedback on this paper. ## References P. G. Aaron and R. Malatesha Joshi. 2006. Written language is as natural as spoken language: A biolinguistic perspective. *Reading Psychology*, 27(4):263– 311. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. *Advances in Neural Information Processing* Systems, 33:12449–12460. Leonard Bloomfield. 1933. *Language*. Henry Holt, New York. Charles Darwin. 1874. *The descent of man, and selection in relation to sex*. D. Appleton and Company, New York. Ferdinand de Saussure. 1916. *Cours de linguistique* générale. Payot, Paris. Terrence W Deacon. 1998. *The Symbolic Species: The* Co-evolution of Language and the Brain. WW Norton & Company. Mark Dingemanse and Andreas Liesenfeld. 2022. From text to talk: Harnessing conversational corpora for humane and diversity-aware language technology. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 5614–5633, Dublin, Ireland. Association for Computational Linguistics. Tanvi Dinkar, Chloé Clavel, and Ioana Vasilescu. 2023. Fillers in spoken language understanding: Computational and psycholinguistic perspectives. Traitement Automatique des Langues, 63(3). Susan T Dumais. 2004. Latent semantic analysis. Annual Review of Information Science and Technology (ARIST), 38:189–230. Ewan Dunbar, Mathieu Bernard, Nicolas Hamilakis, Tu Anh Nguyen, Maureen de Seyssel, Patricia Rozé, Morgane Rivière, Eugene Kharitonov, and Emmanuel Dupoux. 2021. The Zero Resource Speech Challenge 2021: Spoken language modelling. 
Steven Roger Fischer. 2003. *History of writing*. Reaktion books. Donald A Grushkin. 2017. Writing signed languages: What for? What form? *American annals of the deaf*, 161(5):509–527. Lynette Hirshman. 1989. Overview of the DARPA speech and natural language workshop. In *Proceedings of the workshop on Speech and Natural Language*, pages 1–2. Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451–3460. Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. On generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9:1336–1354. Tal Linzen. 2020. How can we accelerate progress towards human-like linguistic generalization? In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5210– 5217, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. Victor H Mair. 2003. How to forget your mother tongue and remember your national language. Mitchell P Marcus. 1992. Overview of the fifth DARPA speech and natural language workshop. In Proceedings of the workshop on Speech and Natural Language, pages 3–4. Mitja Nikolaus, Afra Alishahi, and Grzegorz Chrupała. 2022. Learning English with Peppa Pig. *Transactions of the Association for Computational Linguistics*, 10:922–936. Puyuan Peng and David Harwath. 2022. Word Discovery in Visually Grounded, Self-Supervised Speech Models. In *Proc. Interspeech 2022*, pages 2823– 2827. Puyuan Peng, Shang-Wen Li, Okko Räsänen, Abdelrahman Mohamed, and David Harwath. 2023. Syllable discovery and cross-lingual generalization in a visually grounded, self-supervised speech mode. In Proc. Interspeech 2023. Steven Pinker and Paul Bloom. 1990. Natural language and natural selection. *Behavioral and Brain Sciences*, 13(4):707–727. Peter J Richerson and Robert Boyd. 2010. Why possibly language evolved. *Biolinguistics*, 4(2-3):289– 306. Claude Elwood Shannon. 1948. A mathematical theory of communication. *The Bell system technical* journal, 27(3):379–423. Elizabeth Shriberg. 2005. Spontaneous speech: how people really talk and why engineers should care. In Proc. Interspeech 2005, pages 1781–1784. Gabriel Skantze. 2021. Turn-taking in conversational systems and human-robot interaction: A review. Computer Speech & Language, 67:101178. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. In *Advances in Neural Information Processing Systems*, volume 33, pages 3008–3021. Curran Associates, Inc. Piotr Szymanski, Piotr ´ Zelasko, Mikolaj Morzy, ˙ Adrian Szymczak, Marzena Zyła-Hoppe, Joanna Ba- ˙ naszczak, Lukasz Augustyniak, Jan Mizgajski, and Yishay Carmiel. 2020. WER we are and WER we think we are. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3290–3295, Online. Association for Computational Linguistics. Hao Tan and Mohit Bansal. 2020. 
Vokenization: Improving language understanding with contextualized, visual-grounded supervision. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 2066–2080, Online. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. *Transactions on Machine Learning Research*. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 5 ✗ A2. Did you discuss any potential risks of your work? It's a position paper and does not propose or implement any particular method. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Not applicable. Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
altinisik-etal-2023-impact
Impact of Adversarial Training on Robustness and Generalizability of Language Models
https://aclanthology.org/2023.findings-acl.496
Adversarial training is widely acknowledged as the most effective defense against adversarial attacks. However, it is also well established that achieving both robustness and generalization in adversarially trained models involves a trade-off. The goal of this work is to provide an in-depth comparison of different approaches for adversarial training in language models. Specifically, we study the effect of pre-training data augmentation as well as training-time input perturbations vs. embedding space perturbations on the robustness and generalization of transformer-based language models. Our findings suggest that better robustness can be achieved by pre-training data augmentation or by training with input space perturbation. However, training with embedding space perturbation significantly improves generalization. A linguistic correlation analysis of neurons of the learned models reveals that the improved generalization is due to 'more specialized' neurons. To the best of our knowledge, this is the first work to carry out a deep qualitative analysis of different methods of generating adversarial examples in adversarial training of language models.
# Impact Of Adversarial Training On Robustness And Generalizability Of Language Models Enes Altinisik Hassan Sajjad♣ **Husrev Taha Sencar** Safa Messaoud Sanjay Chawla {ealtinisik,hsencar,smessaoud,schawla}@hbku.edu.qa Qatar Computing Research Institute, HBKU Research Complex, Doha, Qatar hsajjad@dal.ca ♣Faculty of Computer Science, Dalhousie University, Halifax, Canada ## Abstract Adversarial training is widely acknowledged as the most effective defense against adversarial attacks. However, it is also well established that achieving both robustness and generalization in adversarially trained models involves a trade-off. The goal of this work is to provide an in depth comparison of different approaches for adversarial training in language models. Specifically, we study the effect of pretraining data augmentation as well as training time input perturbations vs. embedding space perturbations on the robustness and generalization of transformer-based language models. Our findings suggest that better robustness can be achieved by pre-training data augmentation or by training with input space perturbation. However, training with embedding space perturbation significantly improves generalization. A linguistic correlation analysis of neurons of the learned models reveal that the improved generalization is due to 'more specialized' neurons. To the best of our knowledge, this is the first work to carry out a deep qualitative analysis of different methods of generating adversarial examples in adversarial training of language models. ## 1 Introduction Language Models (LMs) have emerged as the backbone of many tasks in AI and have extended their reach beyond NLP applications into vision and even reinforcement learning (Brown et al., 2020; Reed et al., 2022; Ramesh et al., 2022). Thus it is imperative that the generalizability and robustness of LMs be carefully assessed and evaluated. Generalizability is the ability of a model to perform well on unseen data. Transformer-based models that are pre-trained on large unlabeled text have shown remarkable generalization ability. However, when confronted with carefully designed adversarial samples, their robustness - the ability to gracefully deal with small perturbations, suffers significantly. For example, a recent study has shown that on a classification task on a YELP data set, accuracy dropped by almost 90%, when a standard test set was replaced by an adversarial counterpart (Jin et al., 2020; Yoo and Qi, 2021; Yuan et al., 2021). Adversarial training is a pragmatic approach to attain both generalizability and robustness. The idea is straightforward. For a given model M, generate adversarial samples that target M and then use the samples to incrementally re-train the model. This can be done either at the pre-training or the fine-tuning stage (Liu et al., 2020). Adversarial samples can be generated both in the input space and in the embedding space. The original work on the creation of adversarial samples for computer vision was in the input space. For example, the fast gradient sign method (FGSM) (Goodfellow et al., 2014) that perturbs a data point x along the direction of the sign gradient of the loss function with respect to the input is an example of a perturbation in the input space. In the context of natural language inputs, perturbing text is challenging due to its discrete nature. Unlike continuous data, there is no systematical way to guarantee an increase in the loss function when perturbing text. 
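For continuous inputs, the FGSM step just described fits in a few lines. The sketch below assumes a generic differentiable PyTorch classifier and is only meant to highlight the contrast with the discrete text case discussed next; the function and argument names are ours, not the authors'.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """One-step FGSM: move x along the sign of the input gradient of the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # The added perturbation epsilon * sign(grad) has L-infinity norm exactly epsilon.
    return (x + epsilon * x_adv.grad.sign()).detach()
```

No analogous one-liner exists for a sequence of discrete tokens, which is precisely the difficulty elaborated below.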
For instance, if we aim to make a small modification to the word "robust" we can choose to replace a single letter within the word or substitute it with a near synonym. However, both of these perturbations may seem ad-hoc and not sufficiently principled to intentionally *increase* the loss function. Therefore, in language settings, it is often more appropriate to perform perturbations in the embedding space, where continuous representations can be manipulated in a more structured manner. Furthermore, despite the widespread use of adversarial training to increase the robustness of models, it is not clear what their impact is on downstream tasks beyond the model's overall accuracy. For example, a deeper analysis of language models has shown that different parts of the network are responsible for different parts of speech (Belinkov et al., 2017; Conneau et al., 2018; Liu et al., 2019; Dalvi et al., 2022; Durrani et al., 2020). In this regard, the change in the network due to adversarial training has not yet been investigated. Overall our contributions in this paper are threefold. Firstly, we introduce two techniques in the context of adversarial training in the embedding space, representing the regularization- and gradient-based approaches commonly used by latent space techniques. We compare these techniques using a simple one-dimensional model and hypothesize their behavior in adversarial scenarios. Secondly, we evaluate the effectiveness of inputand embedding-space adversarial training methods in terms of their generalization ability and robustness against various types of adversarial attacks in sentiment analysis. Lastly, we conduct a thorough linguistic analysis of an adversarially trained model and demonstrate that incorporating robustness through adversarial training leads to more "focused" neurons that are associated with distinct Part of Speech (POS) tags. The rest of the paper is organized as follows. In Section 2, we discuss adversarial attacks and defenses, with a specific focus on the NLP domain. Section 3 provides a detailed explanation of embedding space adversarial techniques. In Section 4, we conduct experiments to analyze the trade-off between robustness and generalization achieved by data augmentation, input-space training, and embedding space training approaches, considering various well-known adversarial attacks. Additionally, we present our findings from linguistic correlation analysis of neurons in robust models within the same section. Finally, we finalized the paper in the concluding section. ## 2 Related Work Adversarial Attacks: The purpose of an adversarial attack is to cause a model to output conflicting decisions for an input and its 'imperceptibly' modified version. An adversarial sample is defined as: $$x^{\prime}=x+\delta;||\delta||\leq\epsilon\wedge f(x,\theta)\neq f(x^{\prime},\theta)$$ ′, θ) (1) where x′is the adversarial sample, δ is the perturbation added to the original data x, ||δ|| is a generic norm, ϵ is the limit of the maximum norm of the perturbation, and f(*x, θ*) is the output of the model parameterized by θ for input x. The quality of an adversarial sample is typically evaluated depending on how well δ is minimized, i.e., the minimum distortion that changes the prediction of the model on a sample. Obtaining an exact solution for the perturbation δ is a very challenging problem. Further, even when close approximations are considered, the solution gets computationally very expensive (Szegedy et al., 2013). 
To solve this problem more efficiently, gradient-based methods were introduced. Accordingly, the perturbation δ is computed by taking one (Goodfellow et al., 2014) or more steps iteratively (Madry et al., 2017; Dong et al., 2018) in the direction of the gradient to maximize the loss function. Then, this high loss point is projected back onto the input space to determine the norm-bounded perturbation. In practice, projected gradient descent (PGD) approaches that, take several small steps in the direction of the gradient, are used most frequently to create strong adversarial samples (Madry et al., 2017; Papernot et al., 2016). Other than gradient based approaches, *Jacobianbased Saliency Map Attack* (JSMA) (Papernot et al., 2016) uses the Jacobian matrix created from forward derivation of input to identify to importance of each input component to the target attack. *DeepFool* (Moosavi-Dezfooli et al., 2016), alternatively, iteratively linearizes the classifier to identify the minimum perturbation that causes a change in the classification label. Carlini & Wagner Attack (C&W) proposed defensive distillation strategy (Hinton et al., 2015) based approach. Adversarial Attacks in NLP: Running adversarial attacks against Natural language processing (NLP) models is more challenging than widely used vision models. The discrete nature of word representations, combined with the tokenization of words into word pieces, effectively invalidates any algorithm that applies differential changes on the model input when generating an adversarial sample. Moreover, quantification of the extent to which semantic similarity and contextual relations are preserved between a text input and its modified version is not trivial. To circumvent these limitations, many adversarial sample generation algorithms adopted the approach of substituting one or more words in the input until a misprediction occurs. The crux of this attack lies in identification of alternative words or phrases that retain the semantic intactness of the original input. For this, several methods based on word-embedding similarity (Jin et al., 2020), word synonymity (Ren et al., 2019; Zang et al., 2019), and masked language model predictions (Li et al., 2020) are proposed. However, finding appropriate word candidates may get computationally very intensive. For a sentence consisting of m words with n candidates to substitute each word, there are (n + 1)m possible combinations to test. To perform this search efficiently, greedy search (Ren et al., 2019), genetic algorithm (Alzantot et al., 2018), and particle swarm optimizationbased (PSO) (Zang et al., 2019) approaches are proposed and incorporated with word importance as determined by gradient measurements (Yoo and Qi, 2021) and word deletion (Ren et al., 2019). An alternative approach to above substitutionbased approach is applying perturbations in the embedding space directly to word embeddings. This approach avoids the expensive search step to identify the best word substitution configuration, but it requires devising a mapping from perturbed embeddings to the text domain in order to create an adversarial sample. To realize this, recent work (Yuan et al., 2021) adapted a gradient-based adversarial sample generation method to compute perturbations associated with each word embedding. 
Perturbed embeddings are then translated to input domain using a pre-trained masked-language modeling (MLM) head, as in (Li et al., 2020; Garg and Ramakrishnan, 2020), to create an adversarial sample that is semantically similar to the original input. Adversarial Defence in NLP: The most commonly deployed method for attaining robustness against an adversarial attack is through addition of adversarial samples into the training set (Szegedy et al., 2013). This approach is known to increase model robustness in both computer vision and NLP domains. Further, it is also reported that this defence approach decreases the generalization error of a model in the absence of any attack (Yuan et al., 2021), which contradicts the commonly held opinion that there is a trade-off between generalization and robustness E: (Tsipras et al., 2019). This finding can essentially be attributed to the use of a larger training set enhanced with adversarial samples. The second approach augments the training set with newly constructed, synthetic samples. While this may seem equivalent to adding adversarial samples to the training set, data augmentation methods do not need to have an adversarial nature. Common data augmentation methods include word replacement, i.e., substituting words with their synonyms or inserting random words, random word deletions, and swapping of words between sentences (Wei and Zou, 2019). Rather than using manually-designed heuristics, the power of existing NLP models can also be harnessed for data augmentation. Reverse translation, which involves re-translation of samples from a target language back to their source language constitutes one such method that ideally preserves the semantic similarity of original and augmented samples (Edunov et al., 2018; Xie et al., 2020). The use of MLM via masking words in a sentence and replacing them with model predictions (Ng et al., 2020) is another augmentation method. The third approach to adversarial training involves applying perturbations in the latent space (Zhu et al., 2019; Liu et al., 2020; Li and Qiu, 2021; Pan et al., 2022). This yields a simpler training procedure as it removes the need for generating adversarial samples in the input space. In (Zhu et al., 2019), a model is incrementally fine-tuned on sets of adversarially perturbed word embeddings computed after each fine-tuning step. Li et al. (2021) demonstrate that this method performs better when no constraint on the amount of perturbation is imposed. In Li and Qiu (2021), it is observed that rather than initializing the PGD step with random noise when computing perturbations for each token, using a token-dependent random noise that is fixed across all inputs is more effective. Recently, Pan et al. (2022) proposed the use of contrastive objective (Oord et al., 2018) for ensuring invariant representations by forcing the model to learn the differences between the normal input and its adversarial version. In addition to empirical methods, certified defense methods are proposed to identify and eliminate adversarial samples. These techniques minimize misclassification within an l∞ ball bound, particularly in the vision domain (Raghunathan et al., 2018; Wong and Kolter, 2018). In the NLP domain, two main categories of certified defense methods have emerged: Interval Bound Propagation (IBP) (Jia et al., 2019; Huang et al., 2019; Shi et al., 2020) and randomized smoothing (Ye et al., 2020; Zeng et al., 2021). 
IBP techniques estimate the output range by iteratively applying interval constraints from the input layer to subsequent layers. However, the requirement to modify the model structure poses challenges in incorporating these methods into pre-trained models. Randomized smoothing-based methods offer an alternative approach that is independent of the model structure. These methods utilize stochastic ensembles of input texts and leverage the statistical properties of these ensembles to offer provable robustness certification. A common approach to achieve this is by generating a few randomly modified versions of the original sample. This can be done through techniques such as random word substitutions using synonyms, as demonstrated in SAFER (Ye et al., 2020), or by employing a mask language model to substitute words, as shown in RanMASK (Zeng et al., 2021). The final prediction is then made based on the decisions made by these randomly generated samples. Throughout the rest of the paper, we do not delve into a detailed discussion of these techniques for several reasons. Firstly, the main focus of this paper is on empirical methods and evaluating their impact. Secondly, randomized smoothing methods can be integrated into various techniques, making them applicable in different contexts. Lastly, previous findings suggest that while randomized smoothing methods demonstrate strong defense performance, they tend to underperform compared to latent space adversarial training (Li et al., 2021). ## 3 At With Embedding Space Perturbations Among all adversarial defenses developed for language processing models, moving the adversarial training from the input space to the embedding space offers the most advantage. This essentially allows the adoption of gradient-based adversarial training approaches that are computationally less demanding than input space methods. Although a plethora of such adversarial training methods exists, they are all essentially guided by two main principles in their approach. The first one essentially sets the training objective to minimize the loss due to worst-case perturbation induced on the training samples, instead of the average loss computed from training samples by the standard training. This group of methods essentially differ in the way they approximate the worst-case perturbation (Madry et al., 2017; Miyato et al., 2018; Zhang et al., 2019) as well as the extent and nature of perturbation applied during generation of adversarial samples (Ding et al., 2018; Wang et al., 2019; Liu et al., 2020). The second approach primarily relies on the premise that smoothness is an important requirement of a robust model. To this objective, these methods focus on minimization of a regularized version of the loss instead of optimizing only the standard, training loss. The regularization term here ensures that there is a wide enough margin around each training data point with the decision boundary of the model through minimizing the difference between the predictions of natural and adversarial samples. Methods following this approach are distinguished based on their formulation of regularization (Szegedy et al., 2016; Zhang et al., 2019) and their coupling with the training loss described above (Gan et al., 2020; Pan et al., 2022). In our analysis, we consider two representative methods that most effectively exemplify each approach. In practice, due to its computational efficiency, the PGD attack is most frequently used for the creation of adversarial samples. 
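To make this concrete before introducing the two training objectives, the sketch below shows what a PGD-style perturbation of the word embeddings looks like for a Hugging Face-style sequence classifier that accepts `inputs_embeds`. The defaults mirror the settings used later in the experiments (3 steps, L2 bound of 0.003), but the function itself is a simplified illustration, not the exact procedure of any cited method.

```python
import torch

def pgd_embedding_perturbation(model, embeds, labels, attention_mask,
                               steps=3, step_size=1e-3, epsilon=3e-3):
    """Return a norm-bounded perturbation of the word embeddings that increases the loss."""
    delta = torch.zeros_like(embeds).normal_(0, 1e-5)        # small random initialization
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = model(inputs_embeds=embeds + delta,
                     attention_mask=attention_mask, labels=labels).loss
        grad, = torch.autograd.grad(loss, delta)
        # Ascend the loss, then project delta back onto the epsilon-ball (global L2 here).
        delta = delta.detach() + step_size * grad / (grad.norm() + 1e-12)
        delta = delta * torch.clamp(epsilon / (delta.norm() + 1e-12), max=1.0)
    return delta.detach()
```

PGD-AT then backpropagates the loss at `embeds + delta`, while LDS instead penalizes the discrepancy between the predictions at `embeds` and `embeds + delta` on top of the standard loss, as formalized in Algorithm 1.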
We will refer to this generic adversarial training approach as PGDAT. The latter approach is also best characterized by the use of PGD in ensuring local distribution smoothness around natural samples. This alternative method will be referred to as LDS. We must note that improved variants of the two base methods should be expected to perform better. In this regard, robustness-generalization performance of the PGD-AT and LDS can be interpreted as lowerbounds. The steps of both methods are presented in Algorithm 1 where the lines that differ between the two methods are highlighted as pink for PGD-AT and blue for LDS. Both methods start by randomly initializing δ with normal distribution with a mean of zero and standard deviation of σ. The loss is then calculated between the model's output of the perturbed input depending on the method, PGD-AT or LDS. The δ value is then updated by the gradient and clipped to within ±ϵ by the projection function Π. These steps are repeated for S times. The loss value is then updated by combining the standard loss with the loss associated with each method. Gradient update is then applied to model parameters. To better examine the behavior of the two methods, we analyze a simple one-dimensional linear Algorithm 1 PGD-AT and LDS based adversarial training Input: E: the number of epochs, D = {(x(i), y(i))} n i=1: the dataset, f(*x, θ*): the machine learning model parametrized by θ, δ: the perturbation initialized by σ and limited by ϵ, τ : the global learning rate, µ: the adversarial learning rate, S: the ![4_image_1.png](4_image_1.png) ![4_image_0.png](4_image_0.png) ![4_image_2.png](4_image_2.png) regression model: $$\epsilon\sim N(0,\sigma^{2})$$ $$y=\theta.x+\epsilon,$$ y = θ.x + ϵ, ϵ ∼ N(0, σ2) Assuming a fixed perturbation δ, we determine how the two loss functions, given in Algorithm 1, estimate the model parameter θ under noisy observations. Table 1 presents the loss functions corresponding to PGD-AT and LDS as well as the one corresponding to the standard ordinary least squares (OLS) estimation in the absence of δ. The estimates for the parameter θ for the three loss functions are also given in the table (third column). Comparing PGD-AT and LDS, it can be deduced that LDS will converge to OLS only as the noise ϵ gets severe, suppressing the effect of δ in the denominator. Whereas PGD-AT can be expected to follow OLS more closely at all noise levels as δ appears both at the numerator and the denominator, thereby absorbing its effect on the estimate. We also designed an experimental setup to test these hypotheses. A single neuron is trained based on randomly generated (*x, y*) pairs as defined above assuming θ = 1 2 and for two different noise distributions, (σ = 0.01 and σ = 0.1) for each loss function. The models are trained for 2K epochs at a learning rate of 0.005 starting with the OLS loss. For PGD-AT and LDS models, the OLS loss is substituted by their loss function after epoch 1750 and δ values are computed as defined in Algorithm 1. The distributions of the estimated scalar model parameter θ obtained after 25 runs is displayed in Fig. 1. Essentially, the spread of the distribution signifies the robustness of a model against adversarial samples and the distribution mean relates to the generalizability of the model. In this regard, PGD-AT is seen to perform better than LDS as it yields a tighter spread in both cases. However, at higher noise levels, it can be seen that LDS provides a more accurate estimate of θ. 
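The simulation is straightforward to reproduce. The sketch below follows the stated setup (θ = 0.5, 2,000 epochs, learning rate 0.005, adversarial loss switched in after epoch 1750, 25 runs per configuration), but the closed-form worst-case δ, the perturbation budget `eps`, and the exact one-dimensional forms of the PGD-AT and LDS losses are our simplification of Algorithm 1 and Table 1, so it should be read as an illustration rather than the authors' code.

```python
import torch

def run(loss_name, sigma, theta_true=0.5, epochs=2000, switch=1750,
        lr=0.005, eps=0.1, n=256, seed=0):
    torch.manual_seed(seed)
    x = torch.randn(n)
    y = theta_true * x + sigma * torch.randn(n)
    theta = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([theta], lr=lr)
    for epoch in range(epochs):
        opt.zero_grad()
        pred = theta * x
        if loss_name == "OLS" or epoch < switch:
            loss = ((y - pred) ** 2).mean()
        else:
            # Worst-case input perturbation within +/- eps for a 1-D linear model.
            delta = eps * torch.sign((pred - y).detach() * theta.detach())
            pred_adv = theta * (x + delta)
            if loss_name == "PGD-AT":           # loss evaluated at the perturbed input
                loss = ((y - pred_adv) ** 2).mean()
            else:                                # "LDS": clean loss + smoothness penalty
                loss = ((y - pred) ** 2).mean() + ((pred - pred_adv) ** 2).mean()
        loss.backward()
        opt.step()
    return theta.item()

for sigma in (0.01, 0.1):
    for name in ("OLS", "PGD-AT", "LDS"):
        t = torch.tensor([run(name, sigma, seed=s) for s in range(25)])
        print(f"sigma={sigma} {name}: mean={t.mean().item():.3f} std={t.std().item():.3f}")
```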
Overall, we can expect a model trained with PGD-AT to be more robust while yielding generalizability behavior close to that of LDS.

## 4 Experiments

We first compare the robustness, generalization, and run-time complexity of different AT strategies, following the pipeline in Fig. 2. Then, we perform a Linguistic Correlation Analysis (LCA, Dalvi et al., 2019) as implemented in the NeuroX toolkit (Dalvi et al., 2023) to gain better insights into the dynamics of the learned models, as illustrated in Fig. 3.

Figure 3: LCA pipeline of models learned using different adversarial training approaches.

Table 2: Attack Success Rate (lower is better) of the standard BERT model and its adversarially trained variants under four attacks. Columns are grouped by AT family: AT-IP (A2T, A2T_MLM, BERT_Attack), AT-DA (SSMBA, BackTranslation), and AT-EP (LDS, PGD-AT).

| Attack | Dataset | BERT | A2T | A2T_MLM | BERT_Attack | SSMBA | BackTranslation | LDS | PGD-AT |
|---|---|---|---|---|---|---|---|---|---|
| TextFooler | MR | 82.1 | 77.9 | 79.7 | 79.3 | 79.8 | 72.8 | 88.6 | 87.8 |
| | IMDB | 80.6 | 86.5 | 65.3 | 72.8 | 81.3 | 45.9 | 91.5 | 94.7 |
| A2T | MR | 33.8 | 27.6 | 30.8 | 26.5 | 34.2 | 30.4 | 22.3 | 20.9 |
| | IMDB | 59.5 | 51.1 | 43.7 | 49.4 | 59.2 | 36.4 | 56.9 | 43.0 |
| BAE | MR | 52.1 | 44.1 | 45.5 | 44.0 | 49.2 | 47.1 | 55.3 | 52.9 |
| | IMDB | 68.8 | 65.0 | 52.5 | 57.9 | 61.5 | 41.4 | 66.5 | 61.0 |
| PSO | MR | 79.8 | 75.0 | 72.7 | 74.7 | 78.1 | 75.6 | 79.8 | 80.7 |
| | IMDB | 46.4 | 35.3 | 35.4 | 30.2 | 41.8 | 42.8 | 70.8 | 66.2 |
| Average | MR | 62.0 | 56.1 | 57.2 | 56.1 | 60.3 | 56.5 | 61.5 | 60.5 |
| | IMDB | 63.8 | 59.5 | 49.2 | 52.6 | 61.0 | 41.6 | 71.4 | 66.2 |

**Baselines:** We compare standard BERT (Devlin et al., 2018) with seven versions of adversarially trained BERT models, using methods from three families of AT approaches: (1) AT with pre-training data augmentation (AT-DA), (2) AT with input-space perturbations (AT-IP), and (3) AT with embedding-space perturbations (AT-EP), on the task of sentiment classification. Specifically, for AT-DA we experiment with SSMBA (Ng et al., 2020) and BackTranslation (Xie et al., 2020). For AT-IP, we use A2T, A2T_MLM (Yoo and Qi, 2021), and BERT_Attack (Li et al., 2020). For AT-EP, we report results on LDS (Szegedy et al., 2016; Zhang et al., 2019) and PGD-AT (Gan et al., 2020; Pan et al., 2022).

**Datasets:** We fine-tune all models on the Internet Movie Database (IMDB, Maas et al., 2011) and Movie Reviews (MR, Pang and Lee, 2005) datasets and test on the corresponding test splits, as well as on the YELP dataset (Zhang et al., 2015) for out-of-distribution assessment of the models.

**Attack methods:** We assess the robustness of the models under four different attacks, which replace words in the input space using different strategies. (1) TextFooler (Jin et al., 2020) first searches for the word whose removal causes the largest change in the sentiment score, then replaces it with the nearest neighboring word in the embedding space. (2) BAE (Garg and Ramakrishnan, 2020) masks a portion of the text and uses a BERT masked language model to generate alternatives for the masked words. (3) A2T (Yoo and Qi, 2021) selects the word with the largest loss gradient w.r.t. its embedding and replaces it with a synonym generated from a counter-fitted word embedding (Mrkšić et al., 2016). (4) PSO (Zang et al., 2019) uses sememe-based word substitution and a particle swarm optimization-based search algorithm to find good adversarial examples for a given input text.
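For intuition, the following is a highly simplified sketch of the greedy word-substitution strategy shared by attacks such as TextFooler and A2T. It is not any of the cited implementations: the classifier interface (`predict_proba`) and the synonym source (`get_synonyms`) are placeholder callables we assume for illustration.

```python
from typing import Callable, List, Sequence

def greedy_substitution_attack(
    words: List[str],
    true_label: int,
    predict_proba: Callable[[Sequence[str]], Sequence[float]],  # tokens -> class probabilities
    get_synonyms: Callable[[str], List[str]],                    # word -> candidate substitutes
    max_changes: int = 5,
) -> List[str]:
    """Greedily replace the most influential words until the prediction flips."""
    # 1. Rank words by how much deleting them lowers the true-class probability.
    base = predict_proba(words)[true_label]
    importance = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        importance.append((base - predict_proba(reduced)[true_label], i))
    importance.sort(reverse=True)

    # 2. Replace important words with the substitute that hurts the true class most.
    adv = list(words)
    for _, i in importance[:max_changes]:
        best_word, best_score = adv[i], predict_proba(adv)[true_label]
        for cand in get_synonyms(adv[i]):
            trial = adv[:i] + [cand] + adv[i + 1:]
            score = predict_proba(trial)[true_label]
            if score < best_score:
                best_word, best_score = cand, score
        adv[i] = best_word
        probs = predict_proba(adv)
        if max(range(len(probs)), key=probs.__getitem__) != true_label:
            break  # attack succeeded: the predicted label changed
    return adv
```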
**Evaluation metrics:** We assess (1) generalization, by computing accuracy on in-distribution and out-of-distribution datasets; (2) robustness, using the Attack Success Rate (ASR), the ratio of the number of successful attacks to the number of attacked samples; and (3) time complexity, measured via the fine-tuning run time of the BERT model over 4 epochs.

**Implementation details:** For the AT-DA and AT-IP methods, we use the parameters proposed in the corresponding papers. For our PGD-AT and LDS approaches, we limit the number of PGD steps to 3 and the L2 norm of the perturbations to 0.003. All experiments are conducted on an Nvidia V100 Tensor Core GPU.

**Run-time results:** We report the time for fine-tuning the models over 4 epochs in Tab. 3. The AT-DA approaches result in the shortest fine-tuning time, as adversarial examples are generated once for every sample before training, unlike in AT-IP and AT-EP where adversarial examples are generated at every training iteration. AT-EP methods are around 1.5 times slower to fine-tune than the standard BERT model, as generating the adversarial examples requires an additional backward pass to compute the gradient of the loss at every training iteration. As expected, AT-IP methods are the most time-consuming, as they involve a combinatorial search over a large number of input-space configurations. For example, the fastest approach in this class, A2T, needs 6 seconds to generate a single adversarial example, which is around 10 times slower than the other approaches.

Table 3: Fine-tuning run time (in minutes) over 4 epochs.

| Category | Model | IMDB | MR |
|---|---|---|---|
| - | BERT | 79.0 | 38.2 |
| AT-DA | SSMBA | 112.8 | 46.4 |
| AT-DA | BackTranslation | 210.5 | 66.0 |
| AT-IP | A2T | 1600.5 | 448.5 |
| AT-IP | A2T_MLM | 1494.3 | 504.7 |
| AT-IP | BERT_Attack | 1495.2 | 461.5 |
| AT-EP | LDS | 163.4 | 64.2 |
| AT-EP | PGD-AT | 158.2 | 69.0 |

**Robustness results** are shown in Tab. 2. The lower the ASR, the better the model withstands the attack. As expected, the most effective methods against adversarial attacks are the AT-IP ones. This is because AT-IP is the only class of approaches where it is possible to match the attack and defense strategies, i.e., to train on perturbations generated by the attack strategies, since attacks on language models operate in the input space. Among the AT-DA methods, BackTranslation is the most robust on the IMDB dataset. We found that this is because IMDB has, on average, long sentences, which makes it easier to generate good and diverse adversarial examples to train on via back translation. Our results show that the AT-EP methods are the least robust. In particular, LDS struggles on the sentiment classification task due to noisy ground-truth labels, i.e., sentiments are mostly not binary but the ground-truth labels are.

**Generalization results** are reported in Tab. 4. AT-DA accuracy values are comparable to BERT. Hence, AT-DA generalization capabilities are not traded off for better robustness, as is the case for the AT-IP approaches. This is because adversarial examples from SSMBA (self-supervision-based) and BackTranslation (translation-based) are generated while taking the global context into account, so they are unlikely to change the semantics of the input text and hence the decision boundaries. These methods are, however, impractical to use inside the training loop.
More efficient techniques, e.g., based on local search in the embedding space, are instead used by AT-IP methods. However, this might not always preserve the semantics of the original input text, which also means that assigning the label of the ground-truth input to these adversarial examples might be inappropriate or noisy. Such hard examples are well known to encourage overfitting and hence reduce the generalization ability of the model. This explains the significant drop in both in- and out-of-distribution accuracy of the AT-IP approaches.

The best generalization results are obtained using AT-EP methods. We notice that PGD-AT consistently improves upon BERT. This phenomenon does not occur in vision, where generalization is well known to drop in adversarially trained models. To the best of our knowledge, we are the first to report this for language models trained with embedding-space perturbations. In order to gain a better understanding of the reasons behind this phenomenon, we investigate the learned dynamics of networks trained with AT-EP methods using Linguistic Correlation Analysis (next paragraph). Specifically, we want to validate that the achieved accuracy is due to better learning of the task at hand and not just due to memorizing the training data.

Table 4: Accuracy (%) of models fine-tuned on IMDB (left two columns) and MR (right two columns), evaluated in-distribution and on YELP for out-of-distribution assessment.

| Category | Model | IMDB | YELP | MR | YELP |
|---|---|---|---|---|---|
| - | BERT | 93.49 | 91.24 | 85.27 | 87.06 |
| AT-DA | SSMBA | 93.49 | 91.17 | 85.24 | 87.72 |
| AT-DA | BackTranslation | 93.44 | 91.50 | 84.96 | 87.77 |
| AT-IP | A2T | 92.59 | 89.97 | 83.58 | 83.62 |
| AT-IP | A2T_MLM | 92.70 | 89.15 | 83.90 | 81.79 |
| AT-IP | BERT_Attack | 92.63 | 90.04 | 84.61 | 80.41 |
| AT-EP | LDS | 93.24 | 92.09 | 86.49 | 81.80 |
| AT-EP | PGD-AT | 93.80 | 92.11 | 86.59 | 88.16 |

**Linguistic Correlation Analysis** (LCA, Dalvi et al., 2019) is used to identify the most salient neurons for a given linguistic property such as a Parts-of-Speech (POS) tag (Sajjad et al., 2022). To achieve this, we first match words to neurons, then assess whether the matched words have the linguistic property of interest. As the sentiment prediction task is not appropriate for word-level analysis, i.e., the same words can occur in different sentiment classes, we focus on the POS tagging task. We fine-tune BERT models using the AT-EP methods on the publicly available Penn Treebank dataset (Marcinkiewicz, 1994). We use LCA to generate a list of the top-5 firing neurons for every POS tag and leverage these lists to perform two types of analysis: (1) a neuron-POS tag association strength analysis and (2) a neural ablation analysis.

To assess the neuron-tag association strength, given the list of the top-firing neurons from LCA, we generate a list of the words in the test data with the highest activation values for these neurons. Then, we compute the intersection between the generated word list and the ground-truth one, i.e., the list of words whose label is the POS tag of interest in the test data.
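As a rough illustration of this association-strength computation, the sketch below uses hypothetical inputs (`top_words_per_neuron`, the highest-activation test words for each LCA-selected neuron, and `gold_words`, the test words carrying the POS tag of interest); it is our reading of the procedure, not the NeuroX implementation.

```python
from typing import Dict, Iterable, Set, Tuple

def association_strength(top_words_per_neuron: Dict[int, Iterable[str]],
                         gold_words: Set[str]) -> Tuple[int, int, float]:
    """Match/Total ratio between high-activation words and gold POS-tag words."""
    # Words in the test data that most strongly activate the LCA-selected neurons.
    generated: Set[str] = set()
    for words in top_words_per_neuron.values():
        generated.update(words)
    match = len(generated & gold_words)
    total = len(generated)
    percent = (100.0 * match / total) if total else 0.0
    return match, total, percent
```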
A large intersection set means that the neurons have learned to specialize in predicting specific POS tags, i.e., they have learned the linguistic nuances of the task and are unlikely to have simply memorized the training data. The results in Tab. 5 show that the AT-EP methods learn more "focused" neurons, as measured by the intersection ratio (match/total). In particular, PGD-AT significantly improves upon the standard BERT_B model.

Table 5: LCA results: association strength between POS tags and neurons. POS tags, in table order: JJ (adjective), JJR (adjective, comparative), MD (modal), VBD (verb, past tense), ":" (colon, semi-colon), VBZ (verb, 3rd person singular present), RB (adverb), VBG (verb, gerund or present participle).

| POS | BERT Match | BERT Total | BERT % | LDS Match | LDS Total | LDS % | PGD-AT Match | PGD-AT Total | PGD-AT % |
|---|---|---|---|---|---|---|---|---|---|
| JJ | 2 | 15 | 13.33 | 2 | 15 | 13.33 | 6 | 10 | **60.00** |
| JJR | 3 | 9 | 33.33 | 4 | 9 | 44.44 | 7 | 13 | **53.84** |
| MD | 0 | 5 | 0.00 | 3 | 5 | **60.00** | 2 | 5 | 40.00 |
| VBD | 5 | 5 | **100.00** | 0 | 5 | 0.00 | 0 | 5 | 0.00 |
| : | 0 | 5 | 0.00 | 1 | 5 | 20.00 | 3 | 5 | **60.00** |
| VBZ | 4 | 10 | 40.00 | 7 | 9 | 77.77 | 9 | 10 | **90.00** |
| RB | 9 | 10 | 90.00 | 9 | 10 | **90.00** | 6 | 10 | 60.00 |
| VBG | 10 | 12 | 83.33 | 14 | 18 | 77.77 | 15 | 15 | **100.00** |

Table 6 provides words corresponding to selected POS tags obtained from the models trained with the BERT_B, LDS, and PGD-AT methods.

Table 6: Examples of the most related words for different POS tags for models trained with the BERT_B, LDS, and PGD-AT methods.

| POS | BERT_B | LDS | PGD-AT |
|---|---|---|---|
| VBZ | indicates, teenage, And, begins, reflects, explains, evil, Previously, automatic, reckless | indicates, denies, erodes, explains, resembles, And, runs, adds, trains | indicates, accounts, refuses, agrees, is, has, believes, And, adds, begins |
| JJ | Rae, away, little, Springs, Nelson, live, equal, What, explain, Giants, Who, Aktiebolaget, skyrocketed, what, rung | Aktiebolaget, least, plummeted, Do, policies, little, told, What, equal, securities, Dallara, added, said, most, cardboard | bright, away, what, high, strong, cold, skyrocketed, green, What, same |
| JJR | newer, meaning, greater, punish, included, banking, close, smaller, her | included, newer, greater, smaller, indicated, shipbuilding, arranged, Higher, her | included, newer, stronger, meaning, smaller, greater, indicated, planning, higher, close, Higher, lower, least |
| MD | associated, bright, required, severe, denied | apart, shall, might, must, fallen | fallen, shall, expected, might, apart |
| VBD | restored, bothered, notched, mixed, began | expire, face, exist, become, buy | expire, face, become, exist, disagree |

For the second analysis, i.e., the neural ablation study, we create a linear regression model using only the activations of the top-10 ranked neurons. Results are shown in Tab. 7. PGD-AT and LDS achieve significantly higher performance than BERT, which further supports the observation that AT helped the models better learn the intricacies of the task and explains the improved generalization ability of the AT-EP approaches (e.g., in Tab. 4).

Table 7: LCA results: neural ablation study.

| BERT | LDS | PGD-AT |
|---|---|---|
| 34.2% | 38.6% | 35.3% |

## 5 Conclusions

In this paper we have carried out an extensive study of adversarial training methods (ATMs) to understand their impact on robustness and generalizability for transformer-based deep language models. We can draw the following conclusions from our study. First, non-adversarial data augmentation improves both generalization and robustness over the baseline BERT model. Adversarial training in the input space yields better robustness than both non-adversarial data augmentation and embedding-space adversarial training. In contrast, adversarial training in the embedding space exhibits the best generalization behavior. Between the PGD-AT and LDS methods, our results show that PGD-AT is consistently more robust and generalizable. Overall, our results show that, unlike in the computer vision domain where gradient-based adversarial training yields the best robustness-generalization trade-off, input-space training methods are indispensable for language processing models. For future work we will consider combining data augmentation, input-space training, and embedding-space training approaches. We would also like to extend our theoretical understanding of the trade-off between robustness and generalizability for language models. Relatedly, the impact of ATMs on other downstream applications needs to be studied.

## Limitations

All our experiments are performed using the BERT-small language model due to the computational requirements of generating and testing models across many configurations of adversarial training and attack methods. Although using larger language models might have provided different performance measurements, our findings comparing input- and embedding-space adversarial training methods are expected to remain unchanged. Another limitation of our work is that the semantic gap between attacks in the input and embedding spaces needs further research. Specifically, how do perturbations in the embedding space translate into the input space? Finally, other forms of robustness techniques, besides adversarial training, require examination in the context of large language models.

## Ethics Statement

The work studied the impact of several adversarial training methods on robustness and generalization. The work did not result in any new dataset or model and has no potential ethical issues. On the positive side, the work targets two important attributes of trustworthy AI, i.e., robustness and generalization. Our work provides an insightful comparison of input-space and embedding-space adversarial training approaches and will positively impact future research in this area.

## References

Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. *arXiv preprint arXiv:1804.07998*. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do Neural Machine Translation Models Learn about Morphology? In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)*, Vancouver. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901. Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)*. Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, D. Anthony Bau, and James Glass. 2019. What is one grain of sand in the desert? analyzing individual neurons in deep nlp models.
In *Proceedings* of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI, Oral presentation). Fahim Dalvi, Abdul Rafae Khan, Firoj Alam, Nadir Durrani, Jia Xu, and Hassan Sajjad. 2022. Discovering latent concepts learned in BERT. In *International* Conference on Learning Representations. Fahim Dalvi, Hassan Sajjad, and Nadir Durrani. 2023. Neurox library for neuron analysis of deep nlp models. In *Proceedings of the Association for Computational Linguistics (ACL)*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. 2018. Mma training: Direct input space margin maximization through adversarial training. *arXiv preprint arXiv:1812.02637*. Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. 2018. Boosting adversarial attacks with momentum. In *Proceedings* of the IEEE conference on computer vision and pattern recognition, pages 9185–9193. Nadir Durrani, Hassan Sajjad, Fahim Dalvi, and Yonatan Belinkov. 2020. Analyzing individual neurons in pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4865–4880, Online. Association for Computational Linguistics. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. *arXiv preprint arXiv:1808.09381*. Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. *Advances in Neural Information Processing Systems*, 33:6616–6628. Siddhant Garg and Goutham Ramakrishnan. 2020. Bae: Bert-based adversarial examples for text classification. *arXiv preprint arXiv:2004.01970*. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7). Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. *arXiv preprint* arXiv:1909.01492. Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. *arXiv preprint arXiv:1909.00986*. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pages 8018–8025. Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. Bert-attack: Adversarial attack against bert using bert. *arXiv preprint* arXiv:2004.09984. Linyang Li and Xipeng Qiu. 2021. Token-aware virtual adversarial training in natural language understanding. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 35, pages 8410–8418. Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, and Cho-Jui Hsieh. 2021. Searching for an effective defender: Benchmarking defense against adversarial word substitution. *arXiv preprint arXiv:2108.12777*. Nelson F. 
Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 1073–1094, Minneapolis, Minnesota. Association for Computational Linguistics. Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. 2020. Adversarial training for large neural language models. *arXiv preprint arXiv:2004.08994*. Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In *Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies*, pages 142–150. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*. Mary Ann Marcinkiewicz. 1994. Building a large annotated corpus of English: The Penn Treebank. *Using Large Corpora*, 273. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 41(8):1979–1993. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 2574–2582. Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. *arXiv preprint arXiv:1603.00892*. Nathan Ng, Kyunghyun Cho, and Marzyeh Ghassemi. 2020. SSMBA: Self-supervised manifold based data augmentation for improving out-of-domain robustness. *arXiv preprint arXiv:2009.10195*. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*. Lin Pan, Chung-Wei Hang, Avirup Sil, and Saloni Potdar. 2022. Improved text classification via contrastive adversarial training. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pages 11130–11138. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. *arXiv preprint cs/0506075*. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In *2016 IEEE European Symposium on Security and Privacy (EuroS&P)*, pages 372–387. IEEE. Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. 2018. Certified defenses against adversarial examples. In *International Conference on Learning Representations*. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with CLIP latents. *arXiv preprint arXiv:2204.06125*. Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. 2022. A generalist agent. *arXiv preprint arXiv:2205.06175*.
Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 1085–1097. Hassan Sajjad, Nadir Durrani, and Fahim Dalvi. 2022. Neuron-level interpretation of deep NLP models: A survey. *Transactions of the Association for Computational Linguistics*, 10:1285–1303. Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, and Cho-Jui Hsieh. 2020. Robustness verification for transformers. In *International Conference on Learning Representations*. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 2818–2826. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. 2019. Robustness may be at odds with accuracy. In *International Conference on Learning Representations*. Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. 2019. Improving adversarial robustness requires revisiting misclassified examples. In *International Conference on Learning Representations*. Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. *arXiv preprint arXiv:1901.11196*. Eric Wong and Zico Kolter. 2018. Provable defenses against adversarial examples via the convex outer adversarial polytope. In *International Conference on Machine Learning*, pages 5286–5295. PMLR. Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. *Advances in Neural Information Processing Systems*, 33:6256–6268. Mao Ye, Chengyue Gong, and Qiang Liu. 2020. SAFER: A structure-free approach for certified robustness to adversarial word substitutions. *arXiv preprint arXiv:2005.14424*. Jin Yong Yoo and Yanjun Qi. 2021. Towards improving adversarial training of NLP models. *arXiv preprint arXiv:2109.00544*. Lifan Yuan, Yichi Zhang, Yangyi Chen, and Wei Wei. 2021. Bridge the gap between CV and NLP! A gradient-based textual adversarial attack framework. *arXiv preprint arXiv:2110.15317*. Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2019. Word-level textual adversarial attacking as combinatorial optimization. *arXiv preprint arXiv:1910.12196*. Jiehang Zeng, Xiaoqing Zheng, Jianhan Xu, Linyang Li, Liping Yuan, and Xuanjing Huang. 2021. Certified robustness to text adversarial attacks by randomized [mask]. *arXiv preprint arXiv:2105.03743*. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. 2019. Theoretically principled trade-off between robustness and accuracy. In *International Conference on Machine Learning*, pages 7472–7482. PMLR. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. *Advances in Neural Information Processing Systems*, 28. Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2019. FreeLB: Enhanced adversarial training for language understanding.

## ACL 2023 Responsible NLP Checklist
(The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.)

## A. For Every Submission

- ✓ A1. Did you describe the limitations of your work? Under the Limitations section.
- ✓ A2. Did you discuss any potential risks of your work? Under the Ethics Statement.
- ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1.
- ✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B. Did You Use or Create Scientific Artifacts? ✓ (Section 4)

- ✓ B1. Did you cite the creators of artifacts you used? Section 4.
- B2. Did you discuss the license or terms for use and/or distribution of any artifacts? Not applicable. Left blank.
- ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4.
- ✗ B4. Did you discuss the steps taken to check whether the data that was collected/used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect/anonymize it? All datasets are publicly available.
- ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4.
- B6. Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created? Even for commonly-used benchmark datasets, include the number of examples in train/validation/test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. All datasets are publicly available.

## C. Did You Run Computational Experiments? ✓ (Section 4)

- ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.
- ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.
- ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.
- ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.

## D. Did You Use Human Annotators (e.g., Crowdworkers) or Research with Human Participants? ✗ Left blank.

- D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response.
- D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response.
- D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response.
- D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response.
- D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
wang-etal-2023-benchmarking
Benchmarking Diverse-Modal Entity Linking with Generative Models
https://aclanthology.org/2023.findings-acl.497
Entities can be expressed in diverse formats, such as texts, images, or column names and cell values in tables. While existing entity linking (EL) models work well on per modality configuration, such as text-only EL, visual grounding or schema linking, it is more challenging to design a unified model for diverse modality configurations. To bring various modality configurations together, we constructed a benchmark for diverse-modal EL (DMEL) from existing EL datasets, covering all three modalities including text, image and table. To approach the DMEL task, we proposed a generative diverse-modal model (GDMM) following a multimodal-encoder-decoder paradigm. Pre-training GDMM with rich corpora builds a solid foundation for DMEL without storing the entire KB for inference. Fine-tuning GDMM builds a stronger DMEL baseline, outperforming state-of-the-art task-specific EL models by 8.51 F1 score on average. Additionally, extensive error analyses are conducted to highlight the challenge of DMEL, facilitating future researches on this task.
# Benchmarking Diverse-Modal Entity Linking With Generative Models Sijia Wang1∗, Alexander Hanbo Li2†, Henry Zhu2, Sheng Zhang2, Chung-Wei Hang2, Pramuditha Perera2, Jie Ma2, William Wang2, Zhiguo Wang2, Vittorio Castelli2, Bing Xiang2, Patrick Ng2 1 Virginia Tech 2 AWS AI Labs sijiawang@vt.edu {hanboli,henghui,zshe,cwhang,pramudi,jieman,wyw, zhiguow,vittorca,bxiang,patricng}@amazon.com

## Abstract

Entities can be expressed in diverse formats, such as texts, images, or column names and cell values in tables. While existing entity linking (EL) models work well on per modality configuration, such as text-only EL, visual grounding, or schema linking, it is more challenging to design a unified model for diverse modality configurations. To bring various modality configurations together, we constructed a benchmark for diverse-modal EL (**DMEL**) from existing EL datasets, covering all three modalities including text, image, and table. To approach the DMEL task, we proposed a generative diverse-modal model (**GDMM**) following a multimodal-encoder-decoder paradigm. Pre-training GDMM with rich corpora builds a solid foundation for DMEL without storing the entire KB for inference. Fine-tuning GDMM builds a stronger DMEL baseline, outperforming state-of-the-art task-specific EL models by 8.51 F1 score on average. Additionally, extensive error analyses are conducted to highlight the challenges of DMEL, facilitating future research on this task.

## 1 Introduction

Linking ambiguous mentions to unambiguous referents in a knowledge base (KB) such as Wikipedia, known as **Entity Linking** (EL) (Shen et al., 2015), is an essential component of applications like question answering (Ferrucci, 2012; Chen et al., 2017; Lewis et al., 2020) and recommendation systems (Yang et al., 2018). **Diverse-Modal Entity Linking** (DMEL) extends the scope of interest from textual entity linking to heterogeneous input formats, such as linking visual and textual expressions to a KB (Adjali et al., 2020b,a; Moon et al., 2018; Gan et al., 2021a; Zheng et al., 2022; Wang et al., 2022d; Cui et al., 2021) and linking mentions in natural language to tables or database (DB) schemas (Liu et al., 2021; Katsakioris et al., 2022; Shi et al., 2020; Lei et al., 2020; Chen et al., 2020; Wang et al., 2022a). Figure 1 demonstrates three examples of DMEL, including (a) classical textual entity linking, (b) textual-visual entity linking, in which the question or mentions are paired with image(s), and (c) tabular schema linking, in which the mentions are linked to column names or cell values.

Retrieval-based contrastive learning or ranking mechanisms have been the mainstream for early visual entity linking, leveraging a matching score between the mention and the KB entities (Cui et al., 2021; Wang et al., 2022d; Zheng et al., 2022). However, these methods require storing dense representations of all KB entities, and as the number of entities increases (e.g., Wikipedia has 6M articles), this raises concerns about space complexity and inference-time latency. Meanwhile, linking mentions to tables or DB schemas, known as schema linking, remains an important but under-explored task. For example, in text-to-SQL generation, incorrect schema linking usually accounts for a large portion of the errors (Zhong et al., 2017; Yu et al., 2018; Shi et al., 2020; Lei et al., 2020; Taniguchi et al., 2021).
Previous string-matching heuristics (Chen et al., 2020) or embedding-matching methods (Chen et al., 2020; Wang et al., 2022a; Guo et al., 2019; Wang et al., 2020) lack semantic and schema understanding and can hardly generalize to new domains. Last but not least, previous endeavors in entity linking are limited to individual tasks, including textual EL, textual-visual EL, or schema linking, and lack a general view of the DMEL problem. To this end, we propose a unified DMEL task that includes existing EL datasets on all three modalities: text, image, and table. The unified DMEL task is challenging because the model needs to handle a wide spectrum of modality configurations together. On the modeling side, because storing all entity information (e.g., all the images in the entire KB) is expensive at inference time, we propose to use a unified generative model that can take diverse-modal input and generate entity names in an autoregressive fashion. Additionally, the mention diversity and ambiguity issues in schema linking can be addressed by pre-training the generative model.

In this work, we build a generic diverse-modal architecture for end-to-end DMEL. The DMEL dataset is constructed from five existing datasets, including the GERBIL benchmark, WikiDiverse, MELBench-Wikipedia, Squall, and SLSQL, covering diverse EL tasks. The proposed generative diverse-modal model (**GDMM**) is first pre-trained on the large-scale text corpus BLINK and an image corpus from the Wikipedia KB, offering rich prior knowledge. Extensive experiments are then conducted on the DMEL benchmark to compare our proposed generative model to previous state-of-the-art methods. Experimental results show that GDMM achieves strong performance on the DMEL dataset and outperforms state-of-the-art task-specific EL models. Our contributions include:

- We define a novel diverse-modal entity linking task, which links an entity mention within heterogeneous information sources to a knowledge base. A unified dataset is constructed for rigorous DMEL examination.
- A generative diverse-modal model, GDMM, is proposed following a multimodal-encoder-decoder structure. The multimodal encoder allows collective representation across modalities. The autoregressive structure enables us to directly predict the entity name without storing the entire KB. The pre-training experimental results confirm that a candidate trie created from entity names is sufficient for inference.
- The experimental results show that the proposed model obtains state-of-the-art performance on (almost) each individual EL task.

## 2 Problem Formulation

We assume to have a KB (e.g., Wikipedia or a DB schema) where each entity is a unique entry in the KB. We formulate the following DMEL problem: given a multimodal input $\{x_i, v_i, u_i\}$ of textual (L), visual (V), and tabular (U) modality respectively, an entity mention $m_i$ within the input, and a candidate set $C_i = \{c_i^1, \cdots, c_i^K\}$, the task is to link the mention $m_i$ to one entity in $C_i$. We assume the entity span is given. Sometimes the candidate set can be the entire entity collection $E$.
Particular instances of the DMEL problem include but are not limited to: *Textual Entity Disambiguation*, where a given mention $m_i$ in $x_i$ is linked to one entity in $C_i$; *Textual-Visual Entity Disambiguation*, where $m_i$ and a given image $v_i$ are linked to one entity in $C_i$; and *Schema Linking*, where an $m_i$ in a SQL query is linked to the table schema, i.e., a column name within the given tables $u_i$. If the mention is not a valid entity or not in $C_i$, the target label is "nil".

## 3 DMEL Benchmark

We build the DMEL benchmark from five existing datasets: the GERBIL benchmark (Verborgh et al., 2018), WikiDiverse (Wang et al., 2022d), MELBench-Wikidata (MELBench in the rest of the paper) (Gan et al., 2021a), Squall (Shi et al., 2020), and SLSQL (Lei et al., 2020). We evaluate textual-visual entity disambiguation capability on WikiDiverse and MELBench, and evaluate tabular schema linking on Squall and SLSQL. All datasets are in English, and we summarize the data statistics in Table 1. We will release this benchmark.

Table 1: Statistics of the DMEL benchmark. #L, #V, and #U denote the sizes of the textual, visual, and tabular components, respectively.

| Dataset | Modality | #L | #V | #U |
|---|---|---|---|---|
| GERBIL | L → L | 42,854 | 0 | 0 |
| WikiDiverse | LV → L | 7,823 | 6,924 | 0 |
| MELBench | LV → L | 18,880 | 18,880 | 0 |
| Squall | LU → L | 11,274 | 0 | 2,108 |
| SLSQL | LU → L | 8,034 | 0 | 166 |
| DMEL | LVU → L | 88,865 | 25,804 | 2,274 |

We assume that the mention span is given across all datasets. Thus, (1) on the GERBIL benchmark, we investigate textual entity disambiguation using the same candidate sets as in Le and Titov (2018) and De Cao et al. (2021). (2) On WikiDiverse, we investigate entity disambiguation performance with the top-10 candidates retrieved by Wang et al. (2022d). (3) On MELBench-Wikidata, we follow the original setting, in which the given entity mention is linked to its referent in the knowledge base. (4) On Squall, the given entity mention in the natural question is linked to column names in the target table. (5) On SLSQL, the entity mention in the natural question is linked to a column name within multiple tables. Statistics for each individual EL task are shown in Table 2. Note that for the GERBIL benchmark, the training split refers to AIDA-train, the validation split refers to AIDA-dev, and the test split includes all test splits as in De Cao et al. (2021).

Table 2: Number of sentences/tables in the train, validation, and test splits for each EL task.

| Dataset | Modality | Train | Valid | Test |
|---|---|---|---|---|
| GERBIL | L → L | 18,448 | 4,791 | 19,614 |
| WikiDiverse | VL → L | 6,312 | 755 | 757 |
| MELBench | VL → L | 13,216 | 1,888 | 3,776 |
| Squall | LU → L | 9,028 | 2,246 | - |
| SLSQL | LU → L | 7,000 | 1,034 | - |

## 4 GDMM Model

We build a generative entity linking model that enables diverse-modal vision, language, and table understanding and inference. We show how this can be achieved with a generative encoder-decoder structure.

## 4.1 Input Processor

As shown in Figure 2, our model can process inputs of three modalities: texts, images, and tables. Formally, given a multimodal input $\{x_i, v_i, u_i\}$, the input processor encodes the data and groups the modalities as follows. (1) **Text**: Given an input text $x_i$, we first tokenize and embed it into a list of word vectors following Devlin et al. (2019). (2) **Image**: Given an input image $v_i$, we first resize it to a fixed size and split it into patches, following Kim et al. (2021). (3) **Table**: We follow the table representation proposed in TAPEX (Liu et al., 2022). The table is flattened and represented as $u_i^* = [\mathrm{HEAD}], col_1, \cdots, col_M, [\mathrm{ROW}], 1, cell_{11}, \cdots, cell_{1M}, [\mathrm{ROW}], 2, \cdots$, where [HEAD] and [ROW] are special tokens denoting the beginning of the table header and of a row, the number after [ROW] denotes the row index, and $cell_{ij}$ denotes the cell in the $i$-th row and $j$-th column.
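As a concrete illustration of this flattening, here is a small sketch; the exact token strings, separators, and casing are our simplification for readability and are not necessarily identical to the TAPEX or GDMM preprocessing.

```python
from typing import List

def flatten_table(headers: List[str], rows: List[List[str]]) -> str:
    """Linearize a table into the [HEAD]/[ROW] sequence described above."""
    parts = ["[HEAD]", ", ".join(headers)]
    for idx, row in enumerate(rows, start=1):
        parts.append(f"[ROW] {idx}")   # row marker followed by its index
        parts.append(", ".join(row))   # the cells of that row
    return " ".join(parts)

# Toy example (the club names come from Figure 2's candidate list):
print(flatten_table(
    ["club", "city"],
    [["Manchester United F.C.", "Manchester"],
     ["Manchester City F.C.", "Manchester"]],
))
# -> [HEAD] club, city [ROW] 1 Manchester United F.C., Manchester [ROW] 2 ...
```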
The flattened representation $u_i^*$ is then tokenized and embedded into $u_i$. Finally, given the multimodal input $\{x_i, v_i, u_i\}$, our input processor outputs $\{x_i \oplus u_i, v_i\}$, where $\oplus$ denotes concatenation.

## 4.2 GDMM Model Architecture

**Multimodal Encoder**: The multimodal encoder consists of an image encoder, a text encoder, and a fusion encoder, following previous work (Singh et al., 2022; Yang et al., 2022). The text encoder and image encoder use the same ViT architecture (Dosovitskiy et al., 2021) with different parameters. The text input and visual input $\{x_i \oplus u_i, v_i\}$ are passed into the text encoder and vision encoder individually. The text hidden-state vectors $\{h_i^{LU}\}$ and the image embeddings $\{h_i^{V}\}$ are then projected and concatenated into a single list. A fusion encoder is applied to the concatenated list, which allows cross-attention between the projected unimodal representations and fuses the two. The output is a list of hidden states $\{h_i^{M}\}$. The multimodal encoder parameters are initialized with pre-trained FLAVA (Singh et al., 2022) parameters.

**Decoder**: We exploit the transformer architecture (Vaswani et al., 2017) for the decoder. A previous study (Rothe et al., 2020) points out that combining models with the same vocabulary yields stronger overall performance; thus we initialize the decoder with pre-trained BERT (Devlin et al., 2019) parameters.

**Training and Inference**: GDMM is trained with the standard autoregressive objective, i.e., maximizing the output sequence likelihood $p_\theta(y_i \mid x_i, v_i, u_i)$ with respect to the model parameters $\theta$. We rank each candidate $c_i^k \in C_i$ by computing a score with an autoregressive formulation:

$$\mathrm{score}(c_i^k \mid x_i, v_i, u_i) = p_\theta(y_i^k \mid x_i, v_i, u_i) = \prod_{j=1}^{N} p_\theta(y_j^k \mid y_{<j}^k, x_i, v_i, u_i),$$

where $N$ is the number of tokens of $c_i^k$. If the score is lower than a threshold, the prediction becomes "nil". The threshold is decided on the development set.

**Constrained decoding**: When the candidate set $C_i$ is very large (e.g., the entire entity space $E$), it is intractable to compute a score for every element. Thus we exploit Constrained Beam Search (Sutskever et al., 2014; De Cao et al., 2021), a tractable decoding strategy to efficiently search the valid entity space. It is tractable because the average time cost depends on the beam size and the average length of entity representations (e.g., 6 BPE tokens on average for entities in the Wikipedia KB), instead of the size of $C_i$. An entity trie $T_i$ for $C_i$ is created so that the output is limited to the target space. The constraint is defined as follows: for each node $t \in T_i$, its children indicate all allowed continuations of the prefix obtained by traversing from the root to $t$. For example, as shown in Figure 2, given the four candidates Manchester United F.C., Manchester City F.C., Manchester City W.F.C, and City College Manchester, a candidate trie is created as shown in the figure. Decoding strictly follows the top-down order of the trie with a given beam size.
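The prefix-trie constraint can be implemented in a few lines. Below is a minimal token-level sketch (our own illustration, not the authors' implementation): candidates are stored as token sequences in a trie, and at each decoding step the generator may only choose among the children of the node reached by the prefix generated so far. For simplicity the example treats whitespace-separated words as tokens; a real system would use BPE tokens.

```python
from typing import Dict, List, Sequence

class CandidateTrie:
    """Prefix trie over tokenized entity names for constrained decoding."""

    def __init__(self, candidates: Sequence[Sequence[str]]):
        self.root: Dict[str, dict] = {}
        for tokens in candidates:
            node = self.root
            for tok in tokens:
                node = node.setdefault(tok, {})

    def allowed_next(self, prefix: Sequence[str]) -> List[str]:
        """Tokens that may follow the generated prefix (empty list = dead end)."""
        node = self.root
        for tok in prefix:
            if tok not in node:
                return []
            node = node[tok]
        return list(node.keys())

# The four candidates from Figure 2, split on whitespace as toy tokens.
trie = CandidateTrie([c.split() for c in [
    "Manchester United F.C.",
    "Manchester City F.C.",
    "Manchester City W.F.C",
    "City College Manchester",
]])
print(trie.allowed_next([]))                      # ['Manchester', 'City']
print(trie.allowed_next(["Manchester", "City"]))  # ['F.C.', 'W.F.C']
```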
## 4.3 Pre-Training GDMM

Pre-training is critical to our architecture even though the encoder and decoder are initialized with pre-trained weights, because the mapping between the encoder and the decoder is randomly initialized and has not been pre-trained together with them.

**Pre-training data**: A pre-training corpus is constructed from BLINK (Wu et al., 2020) and images in the Wikipedia KB. BLINK is a commonly used corpus for textual entity linking pre-training, containing 9M unique annotations of document-mention-entity triples from Wikipedia. Meanwhile, the images in the Wikipedia KB are naturally linked to their respective entity names. The two together are well suited for pre-training DMEL models. Aside from **text-only** BLINK, we construct **LV-paired** pre-training data by linking BLINK and Wiki-images. An image pool (Wiki-images) is collected from the Wikipedia KB for entities that can be linked to mentions in BLINK. The image pool contains 797,436 downloaded images of 495,149 entities in the Wikipedia KB. We then randomly attach an image of the target entity, if one exists, to each mention in BLINK. In total, the LV-paired pre-training data includes 5,445,264 mentions and 678,385 distinct images in the training set, and 5,816 mentions and 5,414 images in the development set.

**Pre-training details**: We pre-train GDMM on text-only BLINK and the LV-paired pre-training data in two stages. Note that not all BLINK entities appear in Wiki-images; there are over 2.5M BLINK mentions not covered by the LV-paired pre-training data. To fully leverage the BLINK annotations, we first pre-train on text-only BLINK and then pre-train on the LV-paired data. With text-only BLINK, we freeze the parameters in the image encoder and fusion layers and only update the parameters in the text encoder and decoder. In the second stage, all parameters are updated.

## 4.4 Unified Learning

Upon the pre-trained model, one straightforward strategy for downstream tasks is single-task fine-tuning. We take one step further and investigate unified learning. Specifically, we investigate (1) single-task fine-tuning (ST-F), which refers to fine-tuning on individual tasks; (2) multi-task fine-tuning (MT-F), which combines the mixed training data of all datasets (Raffel et al., 2022); and (3) multi-task fine-tuning with prefixes (MT-FP), where we prepend task-specific prefixes like "entity linking" and "schema linking" to the input context.

## 5 Experiments

**Model Variants**: We primarily report results on two model variants: **GDMM-base**, where the decoder is initialized with BERT-base parameters, and **GDMM-large**, where the decoder is initialized with BERT-large parameters. To investigate which modality provides the dominant information for visual-text entity linking, three configurations are explored: L+V, where both visual and textual information are given; L, where only textual input is given; and V, where only the image is given. We report experimental results with a **single generic** model for the three modality settings, achieved by randomly masking out one modality during training. Implementation details are in Appendix B.

## 5.1 Results

**Pre-training**: The pre-training of GDMM consists of two stages, text-only pre-training and LV-paired pre-training. The pre-training performance is investigated in two settings: zero-shot and fine-tuned on WikiDiverse. Zero-shot refers to directly evaluating on WikiDiverse without training, while fine-tuned refers to further fine-tuning on WikiDiverse. The first-stage pre-trained checkpoint is directly evaluated on WikiDiverse with only text information (note that WikiDiverse is a dataset with both image and text input), achieving a 75.43 zero-shot F1 score. It greatly outperforms the baseline model of Wang et al. (2022d) by 4.36 F1, even though only text information is leveraged.
This evaluation demonstrates that pre-training on BLINK builds a strong foundation for the proposed model. We then investigate the effect of the paired pre-training data size in the second stage and visualize it in Figure 3. It shows that the pre-training data size has a positive effect on inference with both image and text modalities (L+V) and with only the image modality (V). Text-only (L) performance is not affected even though the visual modality is introduced in the second pre-training stage.

## Experimental Results on the DMEL Benchmark

The experimental results of the proposed GDMM on DMEL are shown in Table 3. We compare visual-language entity disambiguation results on WikiDiverse with LXMERT (Wang et al., 2022d), and visual-language entity linking performance on MELBench with Gan et al. (2021a). GDMM achieves better performance on both datasets, especially on MELBench, where it strikingly improves the F1 score by over 31%, demonstrating the effectiveness of the proposed architecture. Schema linking performance is evaluated on Squall and SLSQL. For schema linking, we compare our model with the baseline model GENRE (De Cao et al., 2021), as it has competitive performance in entity disambiguation. For a fair comparison, we fine-tune GENRE from their checkpoint pre-trained on BLINK and investigate two options, one without a flattened table (GENRE) and another with the flattened table (GENRE+), where the table content is leveraged identically to GDMM. Only the results for GENRE+ are reported in Table 3, since it performs better than GENRE; the fact that the flattened table yields better performance demonstrates the effectiveness of the table representation. Detailed results can be found in Appendix C.

Table 3: F1 on the DMEL benchmark (ED: entity disambiguation, VED: visual-language entity disambiguation, SL: schema linking).

| Data | Task | Modality | Previous SOTA | GDMM-base | GDMM-large |
|---|---|---|---|---|---|
| GERBIL | ED | L → L | 88.8 (De Cao et al., 2021) | 86.11±0.24 | 82.57±0.22 |
| WikiDiverse | VED | LV → L | 71.07 (Wang et al., 2022d) | 79.10±0.35 | 78.69±0.33 |
| MELBench | VED | LV → L | 40.5 (Gan et al., 2021a) | 68.01±0.75 | 72.41±0.65 |
| Squall | SL | LU → L | 82.10±2.41 (GENRE+) | 89.69±0.77 | 89.12±1.03 |
| SLSQL | SL | LU → L | 82.80 (GENRE+) | 81.48±1.06 | 84.43±0.92 |
| Avg. | | | 72.93 | 80.88 | 81.44 |

**Unified learning**: As mentioned in Section 4.4, we report unified learning results for ST-F, MT-F, and MT-FP in Table 4. To confirm whether the pre-trained checkpoints build a competitive foundation for visual-language entity linking, zero-shot (ZS) performance is also reported in the same table. The pre-trained checkpoint is competitive because the zero-shot performance (the ZS column) outperforms the previous state-of-the-art fine-tuned results for WikiDiverse in Table 8 and MELBench in Table 9. On average, ST-F and MT-FP achieve the best and second-best performance, with a small gap between the two.

Table 4: Unified learning results (F1) for GDMM-base: zero-shot (ZS), single-task fine-tuning (ST-F), multi-task fine-tuning (MT-F), and multi-task fine-tuning with prefixes (MT-FP).

| Dataset | ZS | ST-F | MT-F | MT-FP |
|---|---|---|---|---|
| GERBIL | 84.00 | 93.75±0.26 | 93.63±0.14 | 93.56±0.52 |
| WikiDiverse | 76.92 | 80.97±0.39 | 80.02±0.29 | 80.65±0.35 |
| MELBench | 54.76 | 67.41±0.97 | 63.64±2.04 | 65.64±1.44 |
| SQUALL | 47.52 | 89.69±0.77 | 88.00±1.31 | 88.37±0.99 |
| SLSQL | 30.92 | 81.48±1.06 | 83.59±1.90 | 83.60±0.85 |
| Avg. | 58.82 | 82.66 | 81.78 | 82.36 |
It is expected that ST-F achieves the best performance, as each fine-tuned model is able to fit the target dataset distribution. Considering that ST-F trains five models while MT-FP trains a single model, the competitive MT-FP performance suggests that model efficiency can be achieved at the cost of a minor performance drop, that is, a 0.30 average F1 drop for GDMM-base. Additionally, the fact that MT-FP consistently outperforms MT-F aligns with previous findings that task-specific prefixes are effective in informing the model of the target task (Dong et al., 2019; Raffel et al., 2022).

## 5.2 Error Analysis

Figure 4 shows the error breakdown on WikiDiverse. The errors are divided into four categories: retrieval error, where the target entity is not among the candidates; misidentification, where the prediction does not match the ground-truth entity; under-prediction, where the model predicts "nil" but the ground-truth entity is not "nil"; and over-prediction, where the ground-truth entity is "nil". Representative error examples are presented in Table 5 for (a) retrieval error, (b) misidentification, (c) over-prediction, and (d) under-prediction. Error type (a) contributes over half of the errors, emphasizing the need for a good retriever; it cannot be addressed by our model because the ground-truth entity is not in the candidate set. Example (b) is due to candidate confusion, as Cape Canaveral Air Force Station is a previously used name for Cape Canaveral Space Force Station (from 1974 to 1994 and from 2000 to 2020). Such errors indicate the necessity of a coreference system at inference time. The over-prediction example shown in (c) calls for a better discrimination strategy among plausible candidates. Example (d) is a challenging example, which asks future models to possess more profound prior knowledge.

Table 6 further shows four types of errors for schema linking: name ambiguity, inference difficulty, prime key confusion, and unknown strings. (a) Name ambiguity is a common challenge for schema linking, especially when column names have overlapping tokens. (b) Sometimes the model fails to make inferences on subtle entity expressions. (c) Another common error type for schema linking is confusion among prime keys, as the prime key "pet id" is shared by multiple tables in the example. (d) Another challenge is unknown strings or composite tokens, since it is usually intractable to recover the original expression from those mentions.
Three evaluations are conducted under settings: (1) L+V where both text and image are leveraged; (2) L where only texts are used for prediction; (3) V where only images are used for prediction. The experimental results are reported in Table 8. It shows each model achieves the best performance with both text and image modalities (L+V). While textual content provides the most inference clues, visual content provides complementary information. Additionally, the proposed model GDMM outperforms LXMERT with a single generic model trained for various configurations, while three modality-specific models for each configuration (L+V, L, V) are trained in LXMERT. This observation demonstrates the effectiveness of the proposed model for various modality configurations. Table 7 shows several misinformation examples where the model fails without visual information. In (a), the image of a soccer stadium provides extra semantics when the model misses the semantic indicator from "the 2006 World Cup." The image in example (b) is in Wiki-KB and has been used for pre-training. Its visibility during the pretraining makes the image a strong indicator of the target entity. The optical character "NEW SCOTLAND YARD" in example (c) is indispensable for the mention to be correctly identified. Without the | Method | F1 | | | | |------------------|------------|------------|------------|-------| | L+V | L | V | | | | LXMERT (Wang et | al., | 71.07 | 63.65 | 40.16 | | 2022d) GDMM-base | 79.10±0.35 | 76.79±0.32 | 40.59±2.32 | | | GDMM-large | 78.69±0.33 | 77.05±0.29 | 37.65±0.25 | | optical features, inferring the mention of "headquarters" is challenging given the ambiguous text. Examples (d) and (e) emphasize the dependence on facial features for entity disambiguation. In the absence of the image in (d), it is impossible to disambiguate between "Cristiano Ronaldo" and "Ronaldo ( Brazilian Footballer )" as both players served Real Madrid. ## 6 Related Works Textual Entity Linking Early entity linking researches (Hoffart et al., 2011; Daiber et al., 2013) reply on probabilistic approaches, based on textual similarity and corpus occurrence. A more recent line of research is neural networks based retrievalreranking approaches (El Vaigh et al., 2020; Zhang et al., 2022; Mrini et al., 2022), which first retrieve top candidates given the input text, and then score each candidate with semantic similarity or correlation. End-to-end entity linking models (Broscheit, 2019; Martins et al., 2019; El Vaigh et al., 2020) approach this problem by directly detecting the entity mentions and linking them to their corresponding entities in the KB. For example, autoregressive entity linking models (De Cao et al., 2021; De Cao et al., 2021; Petroni et al., 2021; Mrini et al., 2022) formulate entity linking as a language generation problem using an encoder-decoder model. Textual-Visual Entity Linking The growing trend towards multimodality significantly advanced research in multimodal entity linking. Due to the difficulty in collecting and cleaning multimodal entity linking data, previous researchers limit their attention to a specific domain such as social media data (Adjali et al., 2020b,a; Moon et al., 2018; Gan et al., 2021a) and news domain(Zheng et al., 2022; Wang et al., 2022d), or a limited scope like person and organization recognition (Gan et al., 2021a; Cui et al., 2021). Previous work (Wang et al., 2022d) represents each entity with one image, which limits the visual expression of entities. 
We overcome this limitation by pre-training GDMM with multiple images per entity to obtain diverse visual representations. Tabular Schema Linking Schema linking (Guo et al., 2019; Wang et al., 2020) is an instance of entity linking in the context of linking to the relational database schema. Previous research shows that good schema linking (Liu et al., 2021; Katsakioris et al., 2022; Shi et al., 2020; Lei et al., 2020; Chen et al., 2020) can substantially improve downstream tasks such as Text-to-SQL parsing. However, entity mentions in existing benchmarks such as Spider (Yu et al., 2018) can almost exactly match the corresponding schema entities (Chen et al., 2020). Therefore, current Text-to-SQL semantic parsers normally address this problem with string-matching heuristics (Chen et al., 2020) or embedding matching modules (Chen et al., 2020; Wang et al., 2022a; Guo et al., 2019; Wang et al., 2020). However, due to the diversity and ambiguity in natural language mentions, such heuristics are hard to generalize to new domains (Chen et al., 2020; Wang et al., 2022a). Multimodal Models Multimodal models have attracted increasing attention in computer vision and natural language processing communities. Recent transformer-based approaches (Kim et al., 2021; Radford et al., 2021; Singh et al., 2022) that leverage the attention between the visual and textual embeddings manifest the effectiveness of the attention mechanism. However, the proposed learning objectives are usually limited to predefined scopes, such as text-image matching or alignment (Xu et al., 2021; Radford et al., 2021; Biten et al., 2022; Ho et al., 2022; Li et al., 2022; Yang et al., 2022; Huang et al., 2023), semantic segmentation, object detection, classification (Xu et al., 2022; Guo et al., 2022; Assran et al., 2022), and masked language modeling (Li et al., 2020; Ni et al., 2022; Tong et al., 2022; Appalaraju et al., 2021). Instead, we proposed a generic generative model that is open to diverse downstream tasks. Additionally, GDMM differs from previous generative multimodal models(Li et al., 2023; Wang et al., 2021) in that GDMM can process and comprehend information from heterogeneous instead of a single source; GDMM differs from VL-T5 (Cho et al., 2021) and others (Wang et al., 2022b; Bao et al., 2022; Wang et al., 2022c) in that GDMM enables thorough encoding for each modality, instead of ## 7 Conclusion In this paper, a novel DMEL problem is formulated, which links the entity mention within heterogeneous information to a defined KB. A generic DMEL dataset is built covering diverse EL tasks. We propose a unified generative model for DMEL, GDMM. Comprehensive experiments are conducted over the DMEL dataset. Experimental results show that the proposed GDMM outperforms state-of-the-art models on almost each individual EL task. Broader Impact In contrast to previous work (Pan et al., 2022; OpenAI, 2022) that only leverage textual content, the proposed model has the potential to deal with misinformation (text only as a data source might be prone to misinformation or fake context/information). This research will lead to a clearer understanding of misinformation issues and encourage better leverage of multimodal information. ## Limitations GDMM establishes a compelling starting point for DMEL research. In spite of this, the proposed approach has several shortcomings. First, GDMM currently generates entity name within the entity candidate set, however, we saw how retrieval errors limit entity linking performance. 
Thus, how to work collectively with the retrieval system to diminish errors takes appropriate action. Second, how to handle large tables still remains under-explored. It is infeasible to represent a huge database with the table flattening technique. In practice, it is possible to filter out less likely candidates to compress the search space, but a more promising approach is to represent the table more efficiently. GDMM also enables studies on more diversemodal tasks. New tasks can be easily framed based on the proposed architecture, such as visual question answering, grounded generation, and diversemodal commonsense reasoning. We believe that with more follow-up work on diverse tasks, this approach will turn out to be a more comprehensive generative diverse-modal framework. ## References Omar Adjali, Romaric Besançon, Olivier Ferret, Hervé Le Borgne, and Brigitte Grau. 2020a. Multimodal entity linking for tweets. In *Advances in Information Retrieval: 42nd European Conference on IR* Research, ECIR 2020, Lisbon, Portugal, April 14–17, 2020, Proceedings, Part I, pages 463–478, Berlin, Heidelberg. Springer-Verlag. Omar Adjali, Romaric Besançon, Olivier Ferret, Hervé Le Borgne, and Brigitte Grau. 2020b. Building a multimodal entity linking dataset from tweets. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4285–4292, Marseille, France. European Language Resources Association. Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R. Manmatha. 2021. Docformer: End-to-end transformer for document understanding. In *2021 IEEE/CVF International Conference on Computer Vision (ICCV)*, pages 973–983. Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, and Nicolas Ballas. 2022. Masked siamese networks for label-efficient learning. Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. 2022. BEit: BERT pre-training of image transformers. In *International Conference on Learning Representations*. Ali Furkan Biten, Ron Litman, Yusheng Xie, Srikar Appalaraju, and R. Manmatha. 2022. Latr: Layoutaware transformer for scene-text vqa. *2022* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16527–16537. Samuel Broscheit. 2019. Investigating entity knowledge in BERT with simple neural end-to-end entity linking. In *Proceedings of the 23rd Conference on* Computational Natural Language Learning (CoNLL), pages 677–685, Hong Kong, China. Association for Computational Linguistics. Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In *Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 423–433, Sofia, Bulgaria. Association for Computational Linguistics. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In *Proceedings of the 55th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics. Sanxing Chen, Aidan San, Xiaodong Liu, and Yangfeng Ji. 2020. A tale of two linkings: Dynamically gating between schema linking and structural linking for text-to-SQL parsing. In *Proceedings of the 28th* International Conference on Computational Linguistics, pages 2900–2912, Barcelona, Spain (Online). International Committee on Computational Linguistics. 
Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and-language tasks via text generation. In *ICML*. Claire Yuqing Cui, Apoorv Khandelwal, Yoav Artzi, Noah Snavely, and Hadar Averbuch-Elor. 2021. Who's waldo? linking people across text and images. In *Proceedings of the IEEE/CVF International* Conference on Computer Vision (ICCV), pages 1374– 1384. Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N. Mendes. 2013. Improving efficiency and accuracy in multilingual entity extraction. In *Proceedings of the 9th International Conference on Semantic Systems*, I-SEMANTICS '13, pages 121–124, New York, NY, USA. Association for Computing Machinery. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Highly parallel autoregressive entity linking with discriminative correction. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 7662–7669, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May* 3-7, 2021. OpenReview.net. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In *33rd Conference on Neural Information Processing Systems (NeurIPS 2019)*. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations. Cheikh Brahim El Vaigh, François Torregrossa, Robin Allesiardo, Guillaume Gravier, and Pascale Sébillot. 2020. A correlation-based entity embedding approach for robust entity linking. In 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI), pages 949–954. D. A. Ferrucci. 2012. Introduction to "this is watson". *IBM Journal of Research and Development*, 56(3.4):1:1–1:15. Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving textto-sql evaluation methodology. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 351–360. Association for Computational Linguistics. Jingru Gan, Jinchang Luo, Haiwei Wang, Shuhui Wang, Wei He, and Qingming Huang. 2021a. Multimodal entity linking: A new dataset and a baseline. In *Proceedings of the 29th ACM International Conference* on Multimedia, MM '21, pages 993–1001, New York, NY, USA. Association for Computing Machinery. Yujian Gan, Xinyun Chen, Qiuping Huang, Matthew Purver, John R. Woodward, Jinxia Xie, and Pengsheng Huang. 2021b. Towards robustness of textto-SQL models against synonym substitution. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2505– 2515, Online. Association for Computational Linguistics. Yujian Gan, Xinyun Chen, and Matthew Purver. 2021c. Exploring underexplored limitations of cross-domain text-to-SQL generalization. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 8926–8931, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, JianGuang Lou, Ting Liu, and D. Zhang. 2019. Towards complex text-to-sql in cross-domain database with intermediate representation. In ACL. Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, MingMing Cheng, and Shi-Min Hu. 2022. Visual attention network. *arXiv preprint arXiv:2202.09741*. Chih-Hui Ho, Srikar Appalaraju, Bhavan Jasani, R Manmatha, and Nuno Vasconcelos. 2022. Yorolightweight end to end visual grounding. In ECCV 2022 Workshop on International Challenge on Compositional and Multimodal Perception. Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In *Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing*, pages 782–792, Edinburgh, Scotland, UK. Association for Computational Linguistics. Runhui Huang, Yanxin Long, Jianhua Han, Hang Xu, Xiwen Liang, Chunjing Xu, and Xiaodan Liang. 2023. Nlip: Noise-robust language-image pretraining. In *AAAI 2023*. Miltiadis Marios Katsakioris, Yiwei Zhou, and Daniele Masato. 2022. Entity linking in tabular data needs the right attention. Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In *ICML*, pages 5583– 5594. Phong Le and Ivan Titov. 2018. Improving entity linking by modeling latent relations between mentions. In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1595–1604, Melbourne, Australia. Association for Computational Linguistics. Wenqiang Lei, Weixin Wang, Zhixin Ma, Tian Gan, Wei Lu, Min-Yen Kan, and Tat-Seng Chua. 2020. Re-examining the role of schema linking in text-toSQL. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6943–6954, Online. Association for Computational Linguistics. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledgeintensive nlp tasks. In *Advances in Neural Information Processing Systems*, volume 33, pages 9459– 9474. Curran Associates, Inc. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In *ICML*. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2020. What does BERT with vision look at? In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 5265–5275, Online. Association for Computational Linguistics. Minghao Li, Tengchao Lv, Jingye Chen, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, and Furu Wei. 2023. 
Trocr: Transformer-based optical character recognition with pre-trained models. In AAAI 2023. Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. 2022. TAPEX: Table pre-training via learning a neural SQL executor. In International Conference on Learning Representations. Qian Liu, Dejian Yang, Jiahui Zhang, Jiaqi Guo, Bin Zhou, and Jian-Guang Lou. 2021. Awakening latent grounding from pretrained language models for semantic parsing. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 1174–1189, Online. Association for Computational Linguistics. Pedro Henrique Martins, Zita Marinho, and André F. T. Martins. 2019. Joint learning of named entity recognition and entity linking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 190– 196, Florence, Italy. Association for Computational Linguistics. Seungwhan Moon, Leonardo Neves, and Vitor Carvalho. 2018. Multimodal named entity disambiguation for noisy social media posts. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2000– 2008, Melbourne, Australia. Association for Computational Linguistics. Khalil Mrini, Shaoliang Nie, Jiatao Gu, Sinong Wang, Maziar Sanjabi, and Hamed Firooz. 2022. Detection, disambiguation, re-ranking: Autoregressive entity linking as a multi-task problem. In *Findings of* the Association for Computational Linguistics: ACL 2022, pages 1972–1983, Dublin, Ireland. Association for Computational Linguistics. Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, and Haibin Ling. 2022. Expanding language-image pretrained models for general video recognition. In Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IV, pages 1–18, Berlin, Heidelberg. Springer-Verlag. OpenAI. 2022. ChatGPT: Optimizing language models for dialogue. https://openai.com/blog/ chatgpt/. Liangming Pan, Wenhu Chen, Min-Yen Kan, and William Yang Wang. 2022. ContraQA: Question answering under contradicting contexts. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470– 1480, Beijing, China. Association for Computational Linguistics. Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544, Online. Association for Computational Linguistics. P. J. Price. 1990. Evaluation of spoken language systems: the ATIS domain. In *Speech and Natural Language: Proceedings of a Workshop Held at Hidden* Valley, Pennsylvania, June 24-27,1990. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. 
In International Conference on Machine Learning. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2022. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(1). Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. *Transactions of the Association for Computational Linguistics*, 8:264–280. Wei Shen, Jianyong Wang, and Jiawei Han. 2015. Entity linking with a knowledge base: Issues, techniques, and solutions. IEEE Transactions on Knowledge and Data Engineering, 27(2):443–460. Tianze Shi, Chen Zhao, Jordan Boyd-Graber, Hal Daumé III, and Lillian Lee. 2020. On the potential of lexico-logical alignments for semantic parsing to SQL queries. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 1849–1864, Online. Association for Computational Linguistics. Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 2022. Flava: A foundational language and vision alignment model. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15617–15629. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pages 3104–3112, Cambridge, MA, USA. MIT Press. Yasufumi Taniguchi, Hiroki Nakayama, Takahiro Kubo, and Jun Suzuki. 2021. An investigation between schema linking and text-to-sql performance. *ArXiv*, abs/2102.01847. Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. 2022. VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pretraining. In *Advances in Neural Information Processing Systems*. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Proceedings of the 31st International* Conference on Neural Information Processing Systems, NIPS'17, pages 6000–6010, Red Hook, NY, USA. Curran Associates Inc. Ruben Verborgh, Michael Röder, Ricardo Usbeck, and Axel-Cyrille Ngonga Ngomo. 2018. Gerbil - benchmarking named entity recognition and linking consistently. *Semant. Web*, 9(5):605–625. Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for textto-SQL parsers. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 7567–7578, Online. Association for Computational Linguistics. Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, and Alexis Conneau. 2021. Largescale self- and semi-supervised learning for speech translation. Lihan Wang, Bowen Qin, Binyuan Hui, Bowen Li, Min Yang, Bailin Wang, Binhua Li, Jian Sun, Fei Huang, Luo Si, and Yongbin Li. 2022a. Proton: Probing schema linking information from pre-trained language models for text-to-sql parsing. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '22, pages 1889–1898, New York, NY, USA. Association for Computing Machinery. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022b. Unifying architectures, tasks, and modalities through a simple sequence-tosequence learning framework. 
In *International Conference on Machine Learning*. Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, and Furu Wei. 2022c. Image as a foreign language: Beit pretraining for all vision and visionlanguage tasks. Xuwu Wang, Junfeng Tian, Min Gui, Zhixu Li, Rui Wang, Ming Yan, Lihan Chen, and Yanghua Xiao. 2022d. WikiDiverse: A multimodal entity linking dataset with diversified contextual topics and entity types. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4785–4797, Dublin, Ireland. Association for Computational Linguistics. Ledell Yu Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zero-shot entity linking with dense entity retrieval. In *EMNLP*. Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, and Xiaolong Wang. 2022. Groupvit: Semantic segmentation emerges from text supervision. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. 2021. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2579–2591, Online. Association for Computational Linguistics. Jinyu Yang, Jiali Duan, Son Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda Zeng, Trishul Chilimbi, and Junzhou Huang. 2022. Vision-language pretraining with triple contrastive learning. In *CVPR* 2022. Yi Yang, Ozan Irsoy, and Kazi Shefaet Rahman. 2018. Collective entity disambiguation with structured gradient tree boosting. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pages 777–786, New Orleans, Louisiana. Association for Computational Linguistics. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics. Wenzheng Zhang, Wenyue Hua, and Karl Stratos. 2022. EntQA: Entity linking as question answering. In *International Conference on Learning Representations*. Qiushuo Zheng, Hao Wen, Meng Wang, and Guilin Qi. 2022. Visual Entity Linking via Multi-modal Learning. *Data Intelligence*, 4(1):1–19. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103. ## A Dmel Selection The DMEL benchmark's datasets were carefully chosen after conducting extensive research on publicly available datasets. The DMEL benchmark includes five datasets: GERBIL, WikiDiverse, MELBench, Squall, and SLSQL. Each of these datasets is necessary and represents the best option for a comprehensive evaluation for Diverse-Modal Entity Linking. The selected datasets best align with the DMEL problem, among more than 20 datasets we looked into. 
The reasons why other datasets are excluded in the benchmark are: i) most dataset collected from social media cannot be reproduced because some of the data are no longer accessible. This category includes Twitter-MEL (Adjali et al., 2020b), SnapCaptionsKB(Moon et al., 2018); ii) Dataset is not publicly available (Zheng et al., 2022); iii) Annotated dataset for schema linking is limited. Existing work that investigates entities in tables are Text-to-SQL datasets. However, annotations for schema linking are not available, such as Spider (Yu et al., 2018), Spider-Syn (Gan et al., 2021b), Spider-DK (Gan et al., 2021c), WikiSQL(Zhong et al., 2017), ATIS (Price, 1990), Freebase917 (Cai and Yates, 2013), and WikiTableQuestions (Pasupat and Liang, 2015). Furthremore, there is no overlap between the datasets. For tabular datasets, Squall is built upon WikiTableQuestions, while SLSQL is based on the Spider text-to-SQL dataset. Lastly, GENRE is widely recognized as the standard dataset for the task of textual entity linking. ## B Implementation Details For every dataset in DMEL dataset, the fine-tuning procedure runs for 5 epochs with a batch size of 16. For both pre-training and fine-tuning, the learning rate is 3 × 10−5, with a linear scheduler with 0.1 warmup ratio. Fine-tuning takes 5 hours for WikiDiverse, 2 for MELBench-Wiki, 1 for Squall, and 2 for SLSQL. We run each setting 5 times and report the mean and variance unless stated otherwise (except the zero-shot setting when evaluated with the pretrained checkpoint, since there is no randomness with the pre-trained checkpoint ). One MELBench, since the dataset split is not given along with the released MELBench-Wikidata data, we randomly split the dataset according to their split statistics and repeat the experiment 5 times to get average evaluation metrics. On Squall, we reported a 5-fold cross-validation result following the released split. For Squall and SLSQL, all the hyperparameters are tuned on the training set since there is no test set. Additionally, Wikipedia images are collected through hyperlinks shared by Wang et al. (2022d) at https://github.com/wangxw5/wikidiverse. Annotations on SLSQL are adapted from Spider, excluding train_others.json that are from Restaurants, GeoQuery, Scholar, Academic, IMDB, and Yelp prepared by (Finegan-Dollak et al., 2018). For GERBIL benchmark results, we report average F1 scores on six test sets, including Aidatest, MSNBC-test AQUAINT-test, ACE2004-test, WNED-CWEB-test, and WNED-WIKI-test following De Cao et al. (2021). ## C Detailed Experimental Results Experimental results for each individual dataset are shown in this section. Specifically, Table 9 shows results for MELBench, Table 10 shows the experimental result for Squall, and Table 11 shows the experimental result for SLSQL. ## D Domain Adaption Results We investigate zero-shot performance in unseen domains on the WikiDiverse dataset. Specifically, we choose the five domains as seen domains, including politic, crime, sports, entertainment, and technology, and the rest five domains as the unseen domains, including disaster, health, economy, weather, and education. Training data includes instances from the seen domain, and the instances from the unseen domains are randomly split into validation and test set. Note that the data used for this experiment is from the WikiDiverse training and validation set, the data in the test set are excluded. The domain adaption result on WikiDiverse is shown in Table 12. 
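The seen/unseen split described above can be reproduced in a few lines. The sketch below assumes each WikiDiverse instance carries a domain label; the field and function names are ours.

```python
import random

SEEN = {"politic", "crime", "sports", "entertainment", "technology"}
UNSEEN = {"disaster", "health", "economy", "weather", "education"}

def domain_adaptation_split(instances, seed=0):
    """Train on seen domains; split unseen-domain instances into validation/test."""
    train = [x for x in instances if x["domain"] in SEEN]
    unseen = [x for x in instances if x["domain"] in UNSEEN]
    random.Random(seed).shuffle(unseen)
    half = len(unseen) // 2
    return train, unseen[:half], unseen[half:]  # train, validation, test
```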
These experiments facilitate studying knowledge transfer between seen and unseen domains. The experiment results show that (a) pre-training is indispensable for new domains as it provides profound prior knowledge for the MEL task in general; (b) knowledge learned from seen domains can indeed transfer to the unseen domain as the average F1 score improves by 3.61 percentage points. | Method | F1 | | |------------------------------|------------|------------| | Top-1 | Top-10 | | | MELBench (Gan et al., 2021a) | 40.5 | 69.6 | | GDMM-base | 68.01±0.75 | 73.31±0.77 | | GDMM-large | 72.41±0.65 | 76.34±0.78 | Table 9: Results of entity linking on MELBench | Method | F1 | | |------------|------------|------------| | GENRE | 75.92±4.29 | | | GENRE+ | 82.10±2.41 | | | GDMM-base | Zero-shot | 47.52±1.06 | | Finetuned | 89.69±0.77 | | | GDMM-large | Zero-shot | 49.14±1.26 | | Finetuned | 89.12±1.03 | | Table 10: Results of schema linking on Squall. GENRE+ denotes augment table representations to the input text as described in Section 4.1 | Method | F1 | | |------------|------------|-------| | GENRE | 70.41 | | | GENRE+ | 82.80 | | | GDMM-base | Zero-shot | 30.92 | | Finetuned | 81.48±1.06 | | | GDMM-large | Zero-shot | 28.44 | | Finetuned | 84.43±0.92 | | Table 11: Results of schema linking on SLSQL | Domain | F1 | | | |-----------|-------|------------|-------| | FT w/o PT | PT | FT with PT | | | Health | 40.37 | 76.19 | 82.48 | | Weather | 39.80 | 79.79 | 78.84 | | Economy | 46.67 | 78.95 | 84.17 | | Disaster | 34.67 | 78.81 | 81.04 | | Education | 40.23 | 73.49 | 81.48 | | Overall | 39.17 | 78.05 | 81.66 | Table 12: Evaluation on unseen domains in WikiDiverse with GDMM-base. FT and PT stand for fine-tuning on seen domains and pre-training. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 ✓ A2. Did you discuss any potential risks of your work? Section 5 and Section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 3, 4, 5 ✓ B1. Did you cite the creators of artifacts you used? Sections 3, 4, 5 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix B ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix B ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3 and Appendix B ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. 
For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 and Appendix B ## C ✓ **Did You Run Computational Experiments?** Section 5 And Appendix B ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 5 and Appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 and Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5 and Appendix B D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
cai-etal-2023-improving
Improving Empathetic Dialogue Generation by Dynamically Infusing Commonsense Knowledge
https://aclanthology.org/2023.findings-acl.498
In empathetic conversations, individuals express their empathy towards others. Previous work has mainly focused on generating empathetic responses by utilizing the speaker's emotion. Besides, external commonsense knowledge has been applied to enhance the system's understanding of the speaker's situation. However, given an event, a commonsense knowledge base contains various relations, potentially leading to confusion for the dialogue system. Consequently, inconsistencies arise among the emotion, generated response and speaker's contextual information. To this end, we propose a novel approach for empathetic response generation, which incorporates an adaptive module for commonsense knowledge selection to ensure consistency between the generated empathetic responses and the speaker's situation. This selected knowledge is used to refine the commonsense cognition and empathy expression for generated responses. Experimental results show that our approach significantly outperforms baseline models in both automatic and human evaluations, exhibiting the generation of more coherent and empathetic responses. Moreover, case studies highlight the interpretability of knowledge selection in the responses and the effectiveness of the adaptive module in our model. Code: \url{https://github.com/Hanscal/DCKS}.
# Improving Empathetic Dialogue Generation By Dynamically Infusing Commonsense Knowledge Hua Cai∗† Xuli Shen∗ **Qing Xu Weilin Shen Xiaomei Wang** Weifeng Ge Xiaoqing Zheng Xiangyang Xue UniDT Technology, Shanghai, China School of Computer Science, Fudan University, Shanghai, China ## Abstract In empathetic conversations, individuals express their empathy towards others. Previous work has mainly focused on generating empathetic responses by utilizing the speaker's emotion. Besides, external commonsense knowledge has been applied to enhance the system's understandings of the speaker's situation. However, given an event, commonsense knowledge base contains various relations, potentially leading to confusion for the dialogue system. Consequently, inconsistencies arise among the emotion, generated response and speaker's contextual information. To this end, we propose a novel approach for empathetic response generation, which incorporates an adaptive module for commonsense knowledge selection to ensure consistency between the generated empathetic responses and the speaker's situation. This selected knowledge is used to refine the commonsense cognition and empathy expression for generated responses. Experimental results show that our approach significantly outperforms baseline models in both automatic and human evaluations, exhibiting the generation of more coherent and empathetic responses. Moreover, case studies highlight the interpretability of knowledge selection in the responses and the effectiveness of adaptive module in our model. Code: https://github.com/Hanscal/DCKS. ## 1 Introduction Empathy is a desirable human ability in our daily conversations. It is known as a complex multidimensional construct encompassing social, cognitive, and emotional processes, which enables us to experience the emotion of others through various emotional stimuli and to understand the implicit mental states of others (Davis, 1983; Zheng et al., 2021). Previous research (Rashkin et al., 2019; Lin et al., 2019; Majumder et al., 2020; Li et al., 2021b) has been conducted on dialogue systems to enhance ∗These authors contributed equally to this work. †Corresponding author: Hua Cai (hua.cai@unidt.com) ![0_image_0.png](0_image_0.png) its empathy ability in open-domain. In order to generate empathetic responses, one line of growing interests is incorporating commonsense knowledge into conversation modeling (Ghosal et al., 2020; Zhou et al., 2021; Sabour et al., 2021). Yet, understanding speaker's emotion and showing the contextually appropriate comprehension of her/his situation are still challenges in empathetic conversations. When interacting with a dialogue system, the speakers are not expected to explicitly share all the information about their situation and how they may feel. As humans, we use our commonsense knowledge to make connections between what is explicitly mentioned and what is implied. Hence, to address above issues, some prior works (Zhou et al., 2018b; Wu et al., 2020) implement external knowledge to identify the speaker's situation, to acknowledge the speaker's status and to bring diversity for generated response. However, straightforward knowledge merging method confuses the system and the response consistency would be deteriorated. This is demonstrated in Figure 1, where the irrelevant knowledge (*Need*) may potentially form empathetic responses, which conflicts with the information about speaker's emotion (*content*). 
Accordingly, the speaker displays the satisfaction of her/his experience, which provides potential informative cognitions based on one unified commonsense. We can assume that if the most appropriate commonsense cognition (*Intent*) is selected with respect to emotion status, the generated response shows better consistency and empathy. Therefore, we believe dialogue systems with rectified knowledge, which aims at unifying the contextual emotion, lead to more consistent and empathetic responses. In this paper, we address the task of empathetic dialogue generation by dynamically infusing commonsense knowledge. Such additional commonsense knowledge is used to improve the cognitive understanding about the speaker's situation and feelings, thus enhance the empathy expression in the generated responses. Meanwhile, the dynamical selection stage avoids the confusion of knowledge in dialogue system and enhance the response consistency with context history. In general, our main contributions are summarized as follows: - We introduce a novel approach that incorporates the inferred commonsense knowledge to enhance empathetic response generation. - We propose an effective knowledge selecting paradigm that could dynamically select the commonsense knowledge, which is most relevant to speaker's cognitive empathy. To the best of our knowledge, it is the first work to study commonsense knowledge dynamical selection for empathetic dialogue generation. - Experiments show that with incorporating the selected commonsense, our model is able to generate more empathetic and interpretable responses compared with the previous methods. ## 2 Related Works 2.1 Empathetic Dialogue Generation In recent years, research on implementing empathy in open domain dialogue systems and generating empathetic responses has gained considerable attention. Rashkin et al. (2019) consider a richer and evenly distributed set of emotions and release a dataset EmpatheticDialogues, where a listener responds to a speaker who is under an emotional situation in an empathetic way. Ghosal et al. (2020) demonstrate that detecting the speaker's emotion is an essential part of generating empathetic responses. Prior studies on emotion-related conversational systems mainly focused on rule-based systems, which heavily rely on hand-craft features (Zhou and Wang, 2018; Zhou et al., 2018a). Recently, many neural emotional dialogue generation approaches have been explored to control the emotional expression in the target response (Lin et al., 2019; Majumder et al., 2020). However, Li et al. (2021a) reveal that conventional empathetic conversation systems face an emotional inconsistency problem as they strive to produce emotionally rich responses based on predefined user-input emotions. ## 2.2 Connecting Knowledge And Dialogue Leveraging knowledge from commonsense knowledge base has been demonstrated for gaining a better understanding of the implied emotions within the context (Tu et al., 2022; Lee et al., 2022). ConceptNet (Speer et al., 2017) and ATOMIC (Sap et al., 2019) are commonsense knowledge bases. ConceptNet consists of 36 relations focusing mostly on taxonomic, lexical and physical commonsense knowledge. Distinguished from ConceptNet, ATOMIC consists 9 relations that cover social commonsense knowledge including event-centered causes and effects as well as personrelated mental states. Both Zhou et al. (2018b) and Zhang et al. (2019) introduce knowledge triplets from ConceptNet into open-domain response generation. Recently, Li et al. 
(2022) and Zhong et al. (2021) exploit ConceptNet to enhance emotion reasoning for response generation. Ghosal et al. (2020) utilizes ATOMIC in emotional dialogue modeling for emotion identification. Sabour et al. (2021) leverages commonsense from ATOMIC to improve the understanding of speaker's situations and feelings. Therefore, enabling dialogue systems to leverage commonsense and driving implications from the speaker's explicit statements are highly beneficial for more empathetic responses. In this work, we focus on the task of empathetic dialogue generation on EmpatheticDialogues dataset, and pay attention to addressing social related commonsense knowledge from ATOMIC. For each event, we use the social relations in ATOMIC to infer the commonsense knowledge about the person involved in the event. We adopt COMET (Bosselut et al., 2019) to generate commonsense sentences for the given events. This model is pre-trained on triplets from ATOMIC and then fine tuned on ATOMIC20 20 (Hwang et al., 2021), so that is more suitable for inferring knowledge regarding unseen events in the original ATOMIC daily basis dataset. ![2_image_0.png](2_image_0.png) ## 3 Methodology Our proposed model is built upon the Transformerbased pre-trained language model to generate listener's utterance. Each conversation process of the model is mainly divided into three stages: contextual probing, contextual unification workspace and knowledge-aware decoder. The overview of our model is illustrated in Figure 2. ## 3.1 Task Formulation The task requires a dialogue model to play the role of the listener and generate empathetic responses. Formally, let U = [u1, u2*, ..., u*n−1] denote a dialogue history of n−1 utterances, where ui = [w i1 , wi2 , ..., wiMi ] is the i-th utterance that consists of Mi words. Let K = {ki} denote the commonsense knowledge generated from COMET, where kiis the empathetic commonsense inference knowledge. Our goal is to generate a response Y using historical utterance U and commonsense knowledge K as input. A dialogue history encoder to encode U, a knowledge encoder to encoder K, and a decoder to incorporate dialog history, dynamically select knowledge and generate response. ## 3.2 Contextual Probing To obtain semantic representations of the dialog history and the knowledge from ATOMIC, we divide the context probing part into context encoding ## And Knowledge Acquisition. 3.2.1 Context Encoding We concatenate the utterances in the dialogue history and prepend a special token [CLS] to obtain the dialogue historical context input U = [CLS] ⊕ u1 ⊕ u2 ⊕ ... ⊕ un−1, where ⊕ is the concatenation operation. Then, we use the final hidden representation of [CLS] as the representation of the whole sequence. We use BART encoder part to acquire the contextual representation. The sequence U is fed into the encoder, and the hidden state of the encoder token: zctx = Encctx(U), (1) where zctx ∈ R L×d, L is the length of the sequence, and d is the hidden size of the context encoder. ## 3.2.2 Knowledge Acquisition In ATOMIC, six relations could be inferred for the person X involved in the event: the effect of the event on X (*xEffect*), X's reaction to the event (*xReact*), X's intent before the event (*xIntent*), what X need in order for the event to happen (*xNeed*), what X would want after the event(*xW ant*), and an inferred attribute of X's characteristics (*xAttr*). 
Since predicting a person's attributes involves judging the other person, which is not included in the empathetic process, we ignore *xAttr* in our approach and use the remaining five relations. For input sequence U, we respectively append five special relation tokens ([xReact], [xWant], [xNeed], [xIntent], [xEffect]) to the last utterance in the dialogue history and then use COMET to generate k commonsense inferences S r = [csr1 , csr2 , ... , csrk ] per relation r, where r ∈ {*xReact, xW ant, xNeed, xIntent, xEffect*}. For each relation, we concatenate the generated commonsense inferences to obtain its commonsense sequence CSr = csr1 ⊕ csr2 ⊕ ... ⊕ csrk , which demonstrates the knowledge regarding the speaker's dialogue state (i.e. emotion and situation). Accordingly, similar to the previous section, we prepend [CLS] to the sequences denoted as ECSr , which then are fed to five separate commonsense knowledge encoders, as shown in the contextual probing part of Figure 2: $$\mathbf{Z}_{r}=\mathbf{Enc}_{K n o}(\mathbf{E}_{C S_{r}}),$$ Zr = EncKno(ECSr), (2) where Zr ∈ R lr×d, lr is the lengths of the commonsense inference sequences. Then, we utilize the hidden vector of [CLS] as the representation for each relation, and through average operation we obtain the fused representation zr = *Average*(Zr[0]) ∈ R dfor all relations. ## 3.3 Contextual Unification Workspace To better leverage the hidden representation from knowledge acquisition and context encoding, we apply the workspace module for unifying contextual information according to emotion label. The workspace consists of two parts: emotion classification for identifying speaker's status, and adaptive knowledge selection for excluding irrelevant knowledge representation. ## 3.3.1 Emotion Classification In contrast to concatenating the representations at a sequence level, we use point-wise addition to fuse the additional knowledge in the sequence, i.e., the fusing of knowledge and the context representation: ## Zf = Zr + Zctx. (3) In order to acquire a more accurate prediction of the speaker's emotion, given that we are provided with an emotion label e for each conversation, we use the infused representation of knowledge and context representation to perform emotion classification. We also pass zf through a linear layer gθ, followed by a softmax operation to produce the emotion category distribution Pemo ∈ R q, where q is the number of available emotion categories: ## Pemo = Softmax(Gθ(Zf )), (4) where θ ∈ R d×qis the weight vector for the linear layer. During training, we optimize these weights by minimizing the Cross-Entropy (CE) loss between the emotion category distribution Pemo and the ground truth label e: $${\mathcal{L}}_{\mathrm{emo}}=-\log(P_{\mathrm{emo}}(e)).$$ Lemo = − log(Pemo(e)). (5) ## 3.3.2 Adaptive Knowledge Selection We present a knowledge selection method that the decoder can adaptively choose the commonsense representations based on the emotion classification results. Given the set of knowledge representation Z = {Zr[0]}, the goal is to choose the most appropriate knowledge relations that satisfy the consistency with the context representation vector zctx. By this selection paradigm, the irrelevant relations, which would potentially confused the generated response, will be eliminated, so as to boost the performance of dialogue system. Inspired by Global Workspace Theory in cognitive science (Blum and Blum, 2022; Baars, 1993) , the process of contextual coordination is realized by eliminating irrelevant cognition. 
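Because the selection mechanism below reuses the classification loss L_emo, it helps to make Equations (3)–(5) concrete. The following is a minimal PyTorch sketch; the module and variable names are ours, not the authors'.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionClassifier(nn.Module):
    """Predict the emotion from the fused knowledge + context representation."""
    def __init__(self, hidden_size, num_emotions):
        super().__init__()
        self.linear = nn.Linear(hidden_size, num_emotions)  # g_theta

    def forward(self, z_r, z_ctx, emotion_label=None):
        z_f = z_r + z_ctx                    # Eq. (3): point-wise fusion
        logits = self.linear(z_f)            # g_theta(z_f)
        p_emo = F.softmax(logits, dim=-1)    # Eq. (4): emotion distribution
        loss = None
        if emotion_label is not None:
            # Eq. (5): -log P_emo(e), i.e. cross-entropy against the gold label.
            loss = F.cross_entropy(logits, emotion_label)
        return p_emo, loss
```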
We therefore implement the label of emotion as the coordination of context and the Lemo(gθ(z), gθ(zctx)) from the supervised evaluation to eliminate irrelevant cognition. The knowledge selection mechanism is divided into two stages, *competition* and *broadcasting*: - During the *competition* stage, we recursively exclude the irrelevant information of knowledge representation based on the emotion status. Specifically, at iteration m, we choose the maxz∈Z{Lemo(gθ(z), gθ(zctx))} as the most irrelevant knowledge representation. In order to model the influence of knowledge exclusion, we leverage nonlinear regression method (Xu and Xuan, 2019; Shen et al., 2022) to calculate the dynamics G = ∇θf ∈ R d×q of the aforementioned max loss. Please refer to the Appendix for the technical details. After the last iteration, the remaining knowledge representation, as the winner of competition, is applied for acknowledging the unified speaker's emotion status. - In the *broadcasting* stage, the winner of competition stage will be applied for unifying the combined representation in decoder. Specifically, we realize this stage by adding the dynamics of the selecting process to rectify the knowledge representation. Thus, the generated response will less affected by the unrelated information from knowledge encoder in contextual probing module. We provide Algorithm 1 in Appendix to show the exclusion method. Figure 3 displays how the workspace process refine the knowledge representation. ## 3.4 Knowledge-Aware Decoder Generally, not all knowledge contributes to the generation of the response, so the model should have the ability to select knowledge. Instead of performing knowledge selection in the encoding phase, we leave it to the decoding phase. As shown in the right part of Figure 2, a knowledge-aware ![4_image_0.png](4_image_0.png) cross attention block is introduced to select knowledge dynamically. Feed the selected knowledge to the context-knowledge refiner, which assists in response generation. The fused knowledge is taken as the input of this block, and then the output of this block is refined to exploit the knowledge contributions. ## 3.4.1 Knowledge Refiner In order to refine the context and knowledge contributions in each layer, we replace the residual addition to a refine gate after the knowledge-aware attention block. Denote hk as output of knowledgeaware attention block and hc as the residual from the previous block, the output of refiner can be expressed by: $$R_{f}(\widetilde{\mathbf{h}}_{k},\mathbf{h}_{c})=\alpha\cdot\mathbf{LN}(\widetilde{\mathbf{h}}_{k})+(1-\alpha)\cdot\mathbf{h}_{c}\tag{6}$$ $$\widetilde{\mathbf{h}}_{k}=\mathbf{h}_{k}+\boldsymbol{\delta}_{m}\tag{7}$$ $$\alpha=\sigma(\mathbf{w}\cdot[\widetilde{\mathbf{h}}_{k};\mathbf{h}_{c}])\tag{8}$$ $\widetilde{\mathbf{i}}=\widetilde{\mathbf{i}}$ Where LN is a linear layer, hek is the rectified knowledge representation, w ∈ R 2dis a learnable parameter and σ denotes sigmoid function. ## 3.4.2 Response Generation Lastly, the target response Y = [y1, y2*, ..., y*T ] with length T, which is generated by the decoder token by token by using the embeddings of the tokens that have been generated and the commonsense-refined contextual representation Rf (hek, hc), which has fused the information from both the context and the commonsense inferences. 
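The commonsense-refined representation R_f mentioned above is produced by the refine gate of Equations (6)–(8), which admits a compact implementation. Below is a minimal PyTorch sketch; the module name is ours, and the rectification term δ_m from the workspace is assumed to be passed in as a precomputed tensor.

```python
import torch
import torch.nn as nn

class RefineGate(nn.Module):
    """Gated fusion of the knowledge-attention output and the residual context."""
    def __init__(self, hidden_size):
        super().__init__()
        self.ln = nn.Linear(hidden_size, hidden_size)   # LN in Eq. (6), a linear layer
        self.gate = nn.Linear(2 * hidden_size, 1)       # w in Eq. (8)

    def forward(self, h_k, h_c, delta_m):
        h_k_tilde = h_k + delta_m                       # Eq. (7): rectified knowledge
        alpha = torch.sigmoid(                          # Eq. (8): gate value
            self.gate(torch.cat([h_k_tilde, h_c], dim=-1)))
        return alpha * self.ln(h_k_tilde) + (1 - alpha) * h_c   # Eq. (6)
```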
We adopt the standard negative log-likelihood (NLL) loss on the target response Y : $${\mathcal{L}}_{\mathrm{nll}}=-\sum_{t=1}^{T}\log(y|(\mathbf{U},\mathbf{K}),y_{<t}).\qquad(9)$$ ## 3.5 Training Objectives All the parameters for our proposed model are trained and optimized based on the weighted sum of the two mentioned losses: $\mathcal{L}=\mathcal{L}_{\rm null}+\gamma\mathcal{L}_{\rm emo}$, (100) where γ is hyper-parameter that we use to control the influence of the these losses. In our experiments, we set γ = 1. ## 4 Experiments 4.1 Datasets We conduct our experiments on the EmpatheticDialogues, a large-scale multi-turn dataset containing 25k empathetic conversations between crowd sourcing workers. The dataset also provides an emotion label for each conversation from the total 32 available emotions. ## 4.2 Baselines We select the following baseline models for comparison on EmpatheticDialogues: (1) **Transformer** (Vaswani et al., 2017): An original Transformer, which is trained to optimize the NLL loss. (2) Multi-TRS (Rashkin et al., 2019): A variation of the Transformer for multitask that trained to jointly optimize an additional cross-entropy loss for emotion classification with the NLL loss. (3) MoEL (Lin et al., 2019): A Transformer-based model that uses 32 emotion-specific decoders to generate a response. Therefore, each decoder is optimized to respond appropriately for each emotion. (4) **MIME** (Majumder et al., 2020): Another Transformer-based model that mimics the context emotion to a varying degree considering its negative and positive emotions, and then generates empathetic response based on the blend of these two emotions. (5) **EmpDG** (Li et al., 2021a): A multi-resolution adversarial framework which applies an empathetic generator to produce empathetic responses and an interactive discriminator to ensure that the generated responses are consistent with the context and are also empathetic. (6) CEM Models PPL B-1 B-2 B-3 B-4 R-1 R-2 Dist-1 Dist-2 Acc Transformer 37.62 18.07 8.34 4.57 2.86 17.22 4.21 0.36 1.35 – Multi-TRS 37.50 18.78 8.55 4.70 2.95 16.85 4.21 0.35 1.27 33.95 MoEL 36.60 18.07 8.30 4.37 2.65 18.24 4.81 0.59 2.64 31.74 MIME 37.24 18.60 8.39 4.54 2.81 17.08 4.05 0.47 1.66 30.96 EmpDG 37.43 19.96 9.11 4.74 2.80 18.02 4.43 0.46 1.99 31.65 CEM 36.33 16.12 7.29 4.06 2.03 15.77 4.50 0.62 2.39 36.84 Ours 16.08 **21.73 10.62 6.24 4.09 19.77 5.65 2.19 9.61 49.16** w/o A∗ 15.41 19.50 9.54 5.52 3.62 19.35 5.57 2.16 8.87 46.47 w/o Knowledge **15.24** 20.11 9.86 5.72 3.73 19.72 5.82 2.08 8.59 44.87 w/o Context 15.62 20.45 9.98 5.78 3.74 19.88 5.78 1.82 7.41 46.34 (Sabour et al., 2021): An empathetic generation approach which leverages commonsense to draw more information about the speaker's situation and uses this additional information to further enhance the empathy expression in generated responses. ## 4.3 Implementation Details We implement all the models using PyTorch and use the encoder and decoder from base version of BART in our work. We use Adam optimizer with initial learning rate 0.00005 in 5 epochs. The batch size is 16. The max sequence length in source and target is 256 and 64 respectively. We use the same 8:1:1 train/valid/test split as provided by Rashkin et al. (2019). In each experiment, we apply an early stop mechanism to prevent the model from over fitting, and then report the test results of the optimal model on the test set. All our training and test results were performed on 32GB Tesla V100 GPU. 
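The implementation details above translate into a short training configuration. The sketch below assumes a standard HuggingFace BART-base setup; the exact argument and variable names are ours and may differ from the released code.

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

MAX_SRC_LEN, MAX_TGT_LEN = 256, 64   # max source / target sequence lengths
BATCH_SIZE, EPOCHS, LR = 16, 5, 5e-5

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.Adam(model.parameters(), lr=LR)

def encode_batch(contexts, responses):
    """Tokenize dialogue histories and gold responses with the paper's length limits."""
    src = tokenizer(contexts, max_length=MAX_SRC_LEN, truncation=True,
                    padding=True, return_tensors="pt")
    tgt = tokenizer(responses, max_length=MAX_TGT_LEN, truncation=True,
                    padding=True, return_tensors="pt")
    return src, tgt
```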
## 4.4 Evaluation Metrics 4.4.1 Automatic Evaluation We employ Perplexity (PPL), corpus-level BLEU (B-n), sentence-level ROUGE (R-n) and Distinct-n (Dist-n) as our main automatic metrics. Perplexity represents the model's confidence in its set of candidate responses, with higher confidence resulting in a lower PPL. This can be used to evaluate the general quality of the generated responses. Response with higher BLEU and ROUGE is closer to the ground-truth. Distinct-n measures the proportion of unique n-grams in the generated responses and is commonly used to evaluate generation diversity. In addition, since our proposed model and most baseline models perform emotion classification as part of their training process, we also report the prediction accuracy (Acc). 4.4.2 Human Evaluation Following the methods in CEM, we conduct an aspect-based pairwise preference test. That is, for a given context, we pair our model's response with a response from the baselines and ask annotators to give each response a rating score from four aspects: 1) Coherence (**Coh.**): which response is more coherent in content and relevant to the context; 2) Empathy (**Emp.**): which response shows more understanding of the speaker's situation and presents a more appropriate emotion; 3) Informativeness (**Inf.**): which response conveys more information about the context. 4) Continuity (**Con.**): which response ignites the speaker's more desire to continue the conversation. Then, we randomly sample 100 response pairs and totally shuffle the response order in each sample. We assign crowd sourcing workers to annotate each pair on a scale of 1 to 5. ## 4.5 Evaluation Results 4.5.1 Automatic Evaluation Results Table 1 reports the evaluation results on automatic metrics. Ours model achieves the lowest perplexity, which suggests the overall quality of our generated responses is higher than the baselines, approximately 56% lower than CEM. In addition, our model also considerably outperforms the baselines in terms of Dist-n, BLEU-n and ROUGE-n, which highlights the diversity of the responses and the relevance between generated response and speaker's situation. In terms of emotion classification, our model had a much higher accuracy compared to the baselines, nearly 34% higher than CEM, which suggests the adaptive selection of commonsense knowledge is pivotal for detecting the speaker's emotion. Table 2 reports the evaluation results on lowresource training set, and we have the following observations: (1) In the full-data scenario, our model achieves start-of-the-art performance by infusing commonsense knowledge, which means that the importance of knowledge in dialogue generation. Besides, reducing the number of training samples has effect on model performance, but not that much, for that even the model using 1/4 data still has the approximate values in PPL, BLEU-n, ROUGE-n and Dist-n compared with the model using full data. (2) In the 1/8 training data scenario, our model achieves the comparable performance with baselines even though them leveraged all training data. (3) Responses generated by our model have higher Dist-n in low-resources scenarios, which means that our model can better obtain information from multiple knowledge and generate more diverse texts. ## 4.5.2 Ablation Studies We conduct ablation studies to verify the effectiveness of each of the components in emotion classification and the generation performance. Specifically, we design three variants: workspace, *knowledge* and *context*. 
It is worth noting that since workspace depends on *knowledge* and *context*, when knowledge or *context* module is removed, workspace is removed by default: 1. w/o Adapter: the mechanism in workspace that used for adaptive commonsense knowledge selection is removed, and the emotion classification is based on none selected commonsense representation; 2. w/o Knowledge: the commonsense knowledge representation used for emotion classification is removed (Equation 6), and the hidden representation of the [CLS] token from the encoded context is used for emotion classification; 3. w/o Context: the context representation used for emotion classification is neglected (Equation 6), but keep the affective and cognitive commonsense knowledge representations; The obtained results are shown in Table 1. We observe that reducing the workspace module results in lower classification accuracy as the same as BLEU-n and ROUGE-n. And removing the commonsense knowledge information also impacts the emotion classification accuracy. The above phenomena suggest that information about both the speaker's emotion and their situation are necessary for correctly identifying their feelings, and ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) dynamical knowledge selection is leveraging the knowledge contribution to the cognition response. Removing those components leads to lower Dist-n scores but higher perplexity, which indicates the effectiveness of those components in generating more diverse responses. ## 4.5.3 Human Evaluation Results Table 4 reports the evaluation results on human ratings. We observe that responses from our model are more contextually coherent than those from baselines. Besides, with the enhancement of commonsense knowledge, the response from our model are able to convey more specific and informative content. It is worth to note that, for the aspect of continuity, our model significantly outperforms all the baselines, which suggests that the generated responses may increase speaker's engagement, thus a more intimate emotional expression. | Context | I ended up staying at home for the holidays. | | |--------------------|----------------------------------------------------------------------------------------|-----------------------------------------| | MoEL | Where did you go? | | | MIME | I am sorry to hear that. What happened? | | | EmpDG | Oh no! What happened? | | | CEM | Did you enjoy it? | | | Selection Process | xEffect → xReact → xWant → xNeed → xIntend | | | Selected Knowledge | xIntent: To be alone; To be home; To relax; To rest | | | Emotion | Lonely | | | Ours | Oh, I am sorry to hear that. | What did you do while staying at home ? | | Gold | Why is that? Comfort? | | | Context | My friend borrowed quite a lot of money from me. I really do believe he will repay me. | | | MoEL | That is so nice of him. Did you find a good friend? | | | MIME | That is a good thing to do. | | | EmpDG | That is a good friend. | | | CEM | That is nice of him. | | | Selection Process | xReact → xIntend → xEffect → xWant → xNeed | | | Selected Knowledge | xNeed: To ask for a loan; To get a loan; To ask him to repay; To ask for money | | | Emotion | Trusting | | | Ours | I am sure he will repay you . | | | Gold | You do? That's good, friends can be terrible people to lend too. | | Table 3: We report the case study of generated responses from EmpatheticDiaglogues. The responses with yellow background color demonstrate the awareness to the emotion and the selected knowledge. Models Coh. Emp. Inf. Cont. 
MoEL 3.57 3.26 3.11 3.09 MIME 3.61 3.30 3.09 3.13 EmpDG 3.42 3.10 2.94 2.89 CEM 3.90 3.49 3.08 3.19 Ours **4.39 4.13 4.18 4.24** ## 4.6 Qualitative Studies Case Study Table 3 shows the cases from EmpatheticDialogues, from which we can see that the response of our method outperforms the baselines. We analyze these cases with respect to the four factors evaluated by human. In aspect of *Coherence* and *Informativeness*, our response is more coherent in content and consistent to the context information. For instance, in case one, by the awareness of selected knowledge 'To be home', our method mentions this phrase in response so that the response better acknowledges speaker's intention. However, other methods fail to generate consistent response. It can be observed that MoEL and CEM dismiss the implication that the speaker is alone at home. The workspace module improves *Empathy* and *Continuity* by selecting the most influential commonsense with respect to the context. In both cases, the selected knowledge corresponds to the speaker's situation, which produces a more meaningful response by showing careness for speakers. Efficacy of Knowledge Selection Selection process illustrates that the most irrelevant knowledge is selected and eliminated at each iteration. By combining dynamics from the selection process in refiner, the generated sentence gradually focuses on speaker's emotion status, so that our method provides more interpretable knowledge selection process for the dialogue system. Figure 4 provides characteristic of knowledge selection process. It indicates that workspace module tends to select inferred knowledge from the relation xReact. Since xReact reflects speaker's reaction to context, our adaptive selection method potentially provides the consistency between context and knowledge. ## 5 Conclusions In this paper, we improve empathetic dialogue generation by infusing dynamical commonsense knowledge to promote the understanding of the speaker's situation and feelings, which leads to more consistent and empathetic responses. The automatic and human evaluation demonstrate that the effectiveness of our approach in high-quality empathetic response generation. ## Limitations One limitation in this work is the metrics employed in the automatic evaluation. The metrics mainly focus on the quality of generated response and the accuracy of emotion recognition, while automatic evaluation lacks a comprehensive method to evaluate empathy. Another limitation comes from the utilization of the dataset designed for open-domain dialogue system, so that the generated response from the proposed framework is not task-oriented. In the future, we will build empathetic dialogue generation datasets with diverse and task-oriented response, and develop metrics to evaluate the understanding of the speaker's situation. ## Ethics Statement The human evaluation is conducted by the employed workers, who does not involve privacy issues. We use public datasets to conduct our experiments. Existing packages involved in this work are displayed in the appendix. ## Acknowledgment This work was supported by UniDT's Cognitive Computing and Few Shot Learning Project. ## References Bernard J Baars. 1993. *A cognitive theory of consciousness*. Cambridge University Press. Steven Bird, Ewan Klein, and Edward Loper. 2009. *Natural language processing with Python: analyzing text* with the natural language toolkit. " O'Reilly Media, Inc.". Lenore Blum and Manuel Blum. 2022. 
A theory of consciousness from a theoretical computer science perspective: Insights from the conscious turing machine. Proceedings of the National Academy of Sciences, 119(21):e2115934119. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In *Proceedings* of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy. Association for Computational Linguistics. M. H. Davis. 1983. Measuring individual differences in empathy: Evidence for a multidimensional approach. *Journal of Personality and Social Psychology*, 44(1):113–126. Deepanway Ghosal, Navonil Majumder, Alexander Gelbukh, Rada Mihalcea, and Soujanya Poria. 2020. Cosmic: Commonsense knowledge for emotion identification in conversations. *arXiv preprint* arXiv:2010.02795. Jena D Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. (comet-) atomic 2020: on symbolic and neural commonsense knowledge graphs. In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 35, pages 6384–6392. Jing Yang Lee, Kong Aik Lee, and Woon Seng Gan. 2022. Improving contextual coherence in variational personalized and empathetic dialogue agents. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7052–7056. IEEE. Qintong Li, Hongshen Chen, Zhaochun Ren1, Pengjie Ren, Zhaopeng, and Zhumin Chen. 2021a. Empdg: Multi-resolution interactive empathetic dialogue generation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4454–4466. Qintong Li, Piji Li, Zhumin Chen, and Zhaochun Ren. 2022. Towards empathetic dialogue generation over multi-type knowledge. In *Proceedings of the AAAI* Conference on Artificial Intelligence. Qintong Li, Piji Li, Zhaochun Ren, Pengjie Ren, and Zhumin Chen. 2021b. Knowledge Bridging for Empathetic Dialogue Generation. In *Proceedings of the* AAAI Conference on Artificial Intelligence. Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, and Pascale Fung. 2019. MoEL: Mixture of Empathetic Listeners. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language* Processing (EMNLP). Navonil Majumder, Deepanway Ghosal, Devamanyu Hazarika, Alexander Gelbukh, Rada Mihalcea, and Soujanya Poria. 2021. Exemplars-guided Empathetic Response Generation Controlled by the Elements of Human Communication. In *CIKM*. Navonil Majumder, Pengfei Hong, Shanshan Peng, Jiankun Lu, Deepanway Ghosal, Alexander Gelbukh, Rada Mihalcea, and Soujanya Poria. 2020. MIME: MIMicking Emotions for Empathetic Response Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards Empathetic Opendomain Conversation Models: a New Benchmark and Dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Sahand Sabour, Chujie Zheng, and Minlie Huang. 2021. CEM: Commonsense-aware Empathetic Response Generation. In Proceedings of the AAAI Conference on Artificial Intelligence. Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for ifthen reasoning. 
In *Proceedings of the AAAI Conference on Artificial intelligence*, volume 33, pages 3027–3035. Xuli Shen, Xiaomei Wang, Qing Xu, Weifeng Ge, and Xiangyang Xue. 2022. Towards scalable and fast distributionally robust optimization for data-driven deep learning. In 2022 IEEE International Conference on Data Mining (ICDM), pages 448–457. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In *Thirty-first AAAI Conference on* Artificial Intelligence. Quan Tu, Yanran Li, Jianwei Cui, Bin Wang, Ji-Rong Wen, and Rui Yan. 2022. MISC: A MIxed StrategyAware Model Integrating COMET for Emotional Support Conversation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics,. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Sixing Wu, Ying Li, Dawei Zhang, Yang Zhou, and Zhonghai Wu. 2020. Diverse and informative dialogue generation with context-specific commonsense knowledge awareness. In Proceedings of the 58th annual meeting of the association for computational linguistics, pages 5811–5820. Qing Xu and Xiaohua Xuan. 2019. Nonlinear regression without i.i.d. assumption. *Probability, Uncertainty* and Quantitative Risk. Houyu Zhang, Zhenghao Liu, Chenyan Xiong, and Zhiyuan Liu. 2019. Grounded conversation generation as guided traverses in commonsense knowledge graphs. In *arXiv:1911.02707*. Chujie Zheng, Yong Liu, Wei Chen, Yongcai Leng, and Minlie Huang. 2021. CoMAE: A Multi-factor Hierarchical Framework for Empathetic Response Generation. In *Findings of the Asso- ciation for Computational Linguistics: ACL-IJCNLP*. Peixiang Zhong, Di Wang, Pengfei Li, Chen Zhang, Hao Wang, and Chunyan Miao. 2021. Care: Commonsense-aware emotional response generation with latent concepts. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 14577–14585. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018a. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018b. Commonsense knowledge aware conversation generation with graph attention. In *IJCAI*, pages 4623–4629. Pei Zhou, Pegah Jandaghi, Hyundong Cho, Bill Yuchen, Lin Jay Pujara, and Xiang Ren. 2021. Probing Commonsense Explanation in Dialogue Response Generation. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP). Xianda Zhou and William Yang Wang. 2018. Mojitalk: Generating emotional responses at scale. In *Proceedings of the 56th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers), pages 1128–1137. ## A The Details Of Cognition Dynamics Our goal is to calculate the effect of knowledge representation on the predictions of the linear transformation function gθ in the *workspace* module. The influence of excluding irrelevant knowledge representation can be interpreted as the change of θ with respect to Lemo, which is ∇θLemo(gθ(z), gθ(zctx)), z ∈ Z. Here, Z = {Zr[0]}, Zr[0] ∈ R d. In order to eliminate the most irrelevant knowledge representation, we take the max(·) on loss function with respect to z ∈ Z. 
However, it is challenging to calculate the gradient when we implement max(·) on the groups of loss functions, because the above function is non-differentiable. Thus, we first bring differentiability for maxz∈Z{Lemo(gθ(z), gθ(zctx))}. To simplify notation, objective function is set as Φ(θ) = max1≤j≤J fj (θ), fj (θ) = Lemo(gθ(zj ), gθ(zctx)). Here, fj denotes the loss function with respect to knowledge representation zj and each fj is differentiable. gθ is the parametric linear layer. Then, calculating the gradient of θ turns into the following discrete mini-max problem: $$\operatorname*{min}_{\theta\in\mathbb{R}^{d}}\operatorname*{max}_{1\leq j\leq J}f_{j}(\theta).$$ fj (θ). (11) In order to smooth objective function Φ during the iteration m, we linearize fj at θm and obtain the convex approximation of Φ as $$\hat{\Phi}(\mathbf{\theta})=\max_{1\leq j\leq J}\{f_{j}(\mathbf{\theta}_{m})+\langle\nabla f_{j}(\mathbf{\theta}_{m}),\mathbf{\theta}-\mathbf{\theta}_{m}\rangle\}\,.\tag{12}$$ The linearization term smooths max(·) function. Next step is to find descent direction, which minimizes Φˆ. However, Φˆ is not strictly convex with respect to θ, the algorithm may not reach global minimum. So a regularization term ∥θ − θm∥2 is added for finding stable descent direction. Denote the descent direction δ = θ − θm, the discrete mini-max problem now is equivalent to $$\min_{\delta,\nu}\quad\|\delta\|_{2}+\nu$$ (13a) s.t. $$f_{j}(\mathbf{\theta}_{m})+\langle\nabla f_{j}(\mathbf{\theta}_{m}),\delta\rangle\leq\nu,\,\forall1\leq j\leq J.$$ (13b) Problem (13) is a semi-definite quadratic programming (QP) problem since we choose ℓ2 norm as the regularization term. When the number of datapoints in subgroup is large, widely-used QP algorithms, such as active-set method, are timeconsuming. Thus we turn to the dual problem. Consider the Lagrange multiplier for problem (13), $$L(\mathbf{\delta},\nu;\mathbf{\lambda})=\frac{1}{2}\|\mathbf{\delta}\|^{2}+\nu$$ $$+\sum_{j=1}^{J}\lambda_{j}(f_{j}(\mathbf{\theta}_{m})+\langle\nabla f_{j}(\mathbf{\theta}_{m}),\mathbf{\delta}\rangle-\nu).$$ By strong duality theorem, the minimum of original problem is equal to the maximum of dual problem under specific constrains: $$\min_{\delta,\nu}\max_{\lambda\geq0}L(\delta,\nu;\lambda)=\max_{\lambda\geq0}\min_{\delta,\nu}L(\delta,\nu;\lambda)\tag{15}$$ Let f = (f1, · · ·, fJ ) Tand G = ∇θf ∈ R d×q. By setting e = 1, the above problem is equivalent to $$\max_{\lambda\geq0}\min_{\delta,\nu}\left(\frac{1}{2}\|\delta\|^{2}+\nu+\lambda^{T}(\mathbf{f}+\mathbf{G}\delta-\nu\mathbf{e})\right).\tag{1}$$ ).</p> <p>$$\left(16\right)$$</p> Note that $$\frac{1}{2}\|\delta\|^{2}+\nu+\lambda^{T}(\mathbf{f}+\mathbf{G}\delta-\nu\mathbf{e})$$ $$=\frac{1}{2}\|\delta\|^{2}+\lambda^{T}(\mathbf{f}+\mathbf{G}\delta)+\nu(1-\lambda^{T}\mathbf{e}).\tag{17}$$ $$T\,\mathbf{e}).$$ $$(11)$$ If 1 − λ T e ̸= 0, the objective function will be −∞. Thus, we must have 1 − λ T e = 0 when the maximum is attained. The problem is converted to $$\max_{\lambda_{i}\geq0,\sum_{i=1}^{J}\lambda_{i}=1}\min_{\delta}\frac{1}{2}\|\delta\|^{2}+\lambda^{T}\mathbf{G}\delta+\lambda^{T}\mathbf{f}.\tag{1}$$ $$\mathbf{f}.$$ Let the gradient of the inner minimization term to be zero, we have solution δ = −GTλ. 
By changing the sign of (18), the maximization term is reduced to $$\begin{array}{l l l}{{\operatorname*{min}}}&{{(\frac{1}{2}\lambda^{T}{\bf G G}^{T}\lambda-\lambda^{T}{\bf f})}}&{{}}&{{\mathrm{(19a)}}}\\ {{}}&{{}}&{{}}\\ {{\mathrm{s.t.}}}&{{\sum_{i=1}^{J}\lambda_{i}=1,\lambda_{i}\geq0.}}&{{}}&{{\mathrm{(19b)}}}\end{array}$$ Suppose λ is the solution of the QP problem (13), then δ = −GTλ is the solution of problem above. Thus, we have the δ as the change of eliminating irrelevant knowledge representation z. By adding δ to the refiner in decoder module, the final generated response would be less affected by the irrelevant knowledge. The effect of δ is demonstrated by the generated responses in Table 5, and we also display how the elimination of irrelevant knowledge boost the performance. Algorithm 1 Adaptive Knowledge Selection Method. Input: The set of knowledge representation Z = {Zr[0]}, Zr[0] ∈ R d, linear layer gθ, θ ∈ R d×q, the context representation vector zctx ∈ R dfrom dialogue history encoder, the objective function of emotion classification Lemo. - *Competition Stage*: while len(Z) > 1 do m = 1 I = maxz∈Z{Lemo(gθ(z), gθ(zctx))} f = {Lemo(gθ(z), gθ(zctx)), z ∈ Z} Gm = ∇θf ∈ R d×q Solve Lagrange multiplier λ: min λ ( 1 2 λ T GmGTmλ − f Tλ) | m = 1 I = maxz∈Z{Lemo(gθ(z), gθ(zctx))} f = {Lemo(gθ(z), gθ(zctx)), z ∈ Z} Gm = ∇θf ∈ R d×q Solve Lagrange multiplier λ: 1 T GmGT mλ − f Tλ) min ( λ 2 λ | Speaker: My friend borrowed quite a | | |------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------| | Context | lot of money from me. I really do believe he'll repay me. xIntent: To be helpful xWant: To repay the money xNeed: To ask for a loan xEffect: Gets a receipt xReact: Happy; Relieved | | | Emotion | Trusting | | | Selection Process | xReact → xEffect → xIntent → xNeed → xWant | | | Selected Knowledge | xWant: To repay the money | | | Knowledge | | | | s.t. | PJ i=1 λi = 1, λi ≥ 0. | | | if m = 1 then δm = −GT mλ else δm = δm−1 − GT mλ end if Z = Z−I m = m + 1 | | | | end while - Broadcasting Stage: he k = hk + δm | Ours (w/o A*) | That is very nice of him. What did he do? | | Ours (w/ A*) | I am sure he will repay you. Speaker: | | | Context | I had a nice meal and my favorite beverage after work. xIntent: To relax after work xWant: To go to bed xNeed: To go to the restaurant xEffect: Has a full belly xReact: Satisfied; Happy | | | Content | Trusting | | | Selection Process | xNeed → xWant → xReact → xEffect → xIntent | | | Selected Knowledge | xIntent: To relax after work | | | Ours (w/o A*) | What did you eat | | | Ours (w/ A*) | Sounds like a relaxing day. What did you drink? 
## B Involved Existing Packages

Existing packages involved in this work include: 1) the open-source code, model weights, and generated outputs of Transformer (Vaswani et al., 2017), Multi-TRS (Rashkin et al., 2019), MoEL (Lin et al., 2019), MIME (Majumder et al., 2020), EmpDG (Li et al., 2021a), and CEM (Sabour et al., 2021), and 2) the evaluation metrics from the Natural Language Toolkit (Bird et al., 2009).

## C Additional Case Study

We provide qualitative studies in Section 4.6. These include 1) an ablation study of our cognition dynamics (Table 5); 2) an additional case study of generated responses from EmpatheticDialogues (Table 6); 3) a stackplot of the knowledge selection process for all the emotions in EmpatheticDialogues (Figure 5).

Table 5: Ablation study of our cognition dynamics. A∗ represents the adaptive knowledge selection method in the workspace module. Selection Process illustrates that at each iteration, the most irrelevant knowledge is selected and eliminated.
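For completeness, the competition-stage update derived in Appendix A (the dual QP of Eq. (19) followed by δ = −Gᵀλ) can be sketched as below; the losses f and the flattened gradients G are assumed to come from autograd, and the SLSQP solver, like all names here, is our choice rather than the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def selection_dynamics(G, f):
    """G: (J, P) gradients of the J per-knowledge losses w.r.t. the flattened
    parameters; f: (J,) loss values. Returns the descent direction delta (P,)."""
    J = len(f)
    Q = G @ G.T                                     # (J, J) Gram matrix of gradients
    res = minimize(
        lambda lam: 0.5 * lam @ Q @ lam - f @ lam,  # objective of Eq. (19a)
        x0=np.full(J, 1.0 / J),
        jac=lambda lam: Q @ lam - f,
        method="SLSQP",
        bounds=[(0.0, None)] * J,                   # lambda_i >= 0
        constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],  # sum = 1
    )
    return -G.T @ res.x                             # delta = -G^T lambda
```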
| | | CEM | Oh no! I am sorry to hear that. | | | Selection Process | xReact → xEffect → xIntent → xWant → xNeed | | | Selected Knowledge | xNeed: To get rid of it; To get rid of something; To buy it; To have it | | | Emotion | Annoyed | | | Ours | I am sorry to hear that. | Maybe you can talk to them about it. | | Sorry to hear that. I found that it helps to have multiple savings accounts, | | | | Gold | so whenever you get a certain amount of money ( like 5 , 10 ) save it and be strict not to spend it. Speaker: I went to visit a friend one day and her boyfriend made a pass at me. He actually suddenly kissed me, as I was waiting for my friend to return. I could not believe his nerve! Listener: Oh no, what did you do? Speaker: I quietly pushed him away, as I did not want my friend to hear. I was so shocked and upset, since he knew I wouldn't want to make a scene and took advantage. | | | MoEL | That is so nice of her. | | | MIME | That is nice of you. | | | EmpDG | That is a lot of people are not. I am sure you will be able to get another one. | | | CEM | That is terrible. Did you find out who did it? | | | Selection Process | xIntent → xEffect → xNeed → xWant → xReact | | | Selected Knowledge | xReact: Angry; Upset; Sad; Annoyed; Scared | | | Emotion | Angry | | | Ours | I would have done the same thing. | | | Gold | Did you ever tell your friend? | | | Context | Speaker: My mother recently went to the bank for me. I love her so much. Listener: That is really nice of her. It is nice to have someone | | | Context | to help you do things sometimes. Speaker: Nice people are the best. Empathy is something we all need more of! | | | MoEL | That is very nice of you. It is always nice to have family. | | | MIME | That is a good thing to do. | | | EmpDG | That is a very nice thing to have a good mom. | | | CEM | That is nice of you. | | | Selection Process | xNeed → xWant → xIntent → xReact → xEffect | | | Selected Knowledge | xEffect: Is loved; Is respected; Is liked | | | Emotion | Caring | | | Ours | I agree with you there. It is nice to have someone to lean on. | | | Gold | Yes. Too many people these days are mean to one another! | | | Table 6: Additional case study of generated responses from EmpatheticDiaglogues. 7870 | | | ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In section Limitations. A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? appendix ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 4 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 4 D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
ceron-etal-2023-additive
Additive manifesto decomposition: A policy domain aware method for understanding party positioning
https://aclanthology.org/2023.findings-acl.499
Automatic extraction of party (dis)similarities from texts such as party election manifestos or parliamentary speeches plays an increasing role in computational political science. However, existing approaches are fundamentally limited to targeting only global party (dis)-similarity: they condense the relationship between a pair of parties into a single figure, their similarity. In aggregating over all policy domains (e.g., health or foreign policy), they do not provide any qualitative insights into which domains parties agree or disagree on. This paper proposes a workflow for estimating policy domain aware party similarity that overcomes this limitation. The workflow covers (a) definition of suitable policy domains; (b) automatic labeling of domains, if no manual labels are available; (c) computation of domain-level similarities and aggregation at a global level; (d) extraction of interpretable party positions on major policy axes via multidimensional scaling. We evaluate our workflow on manifestos from the German federal elections. We find that our method (a) yields high correlation when predicting party similarity at a global level and (b) provides accurate party-specific positions, even with automatically labelled policy domains.
# Additive Manifesto Decomposition: A Policy Domain Aware Method For Understanding Party Positioning Tanise Ceron Dmitry Nikolaev Sebastian Padó Institute for Natural Language Processing, University of Stuttgart, Germany {tanise.ceron,dmitry.nikolaev,sebastian.pado}@ims.uni-stuttgart.de ## Abstract Automatic extraction of party (dis)similarities from texts such as party election manifestos or parliamentary speeches plays an increasing role in computational political science. However, existing approaches are fundamentally limited to targeting only *global* party (dis)similarity: they condense the relationship between a pair of parties into a single figure, their similarity. In aggregating over all *policy domains* (e.g., health or foreign policy), they do not provide any qualitative insights into which domains parties agree or disagree on. This paper proposes a workflow for estimating policy domain aware party similarity that overcomes this limitation. The workflow covers (a) definition of suitable policy domains; (b) automatic labeling of domains, if no manual labels are available; (c) computation of domainlevel similarities and aggregation at a global level; (d) extraction of interpretable party positions on major policy axes via multidimensional scaling. We evaluate our workflow on manifestos from the German federal elections. We find that our method (a) yields high correlation when predicting party similarity at a global level and (b) provides accurate party-specific positions, even with automatically labelled policy domains. ## 1 Introduction Party competition is a fundamental process in democracies. It provides space for different political stances to emerge, allowing people to choose which of them they most identify with. Investigating this process is relevant for understanding the reasons behind the choice of voters in elections as well as the behavior of parties in policy decisionmaking once in power (Benoit and Laver, 2006). Within political science, the positioning of parties is investigated under the umbrella term of "party competition". Some studies look at specific policies such as "welcoming refugees", others, at broader domains such as "economy". Traditionally, the positioning of parties within these policies or domains is scaled down to a reduced number of political dimensions such as the well-established left-right or the libertarian-authoritarian axes in order to facilitate the comparison among parties and their ideologies (Heywood, 2021). Analyses are usually carried out by experts, who gather policy and ideological stances of members of the political parties in several countries in Europe and beyond (Jolly et al., 2022). Alternatively, electoral programs are manually annotated following a specific codebook that takes into account the position of the parties on policies so that the salience of the labels can be scrutinised (Burst et al., 2021). Recently, computational approaches have been developed to automate and scale up party position analysis to larger amounts of text (Slapin and Proksch, 2008; Däubler and Benoit, 2021; Ceron et al., 2022). This development has the potential of alleviating the burden of annotation, but has so far been realised only at an *aggregated* level: party positions are projected on the left-right scale or on a distance-based approach between party pairs according to several policies, not providing insights at the level of policy domains. 
This requires political scientists either to manually check for sections of the text of their interest in case the objective is to understand the positioning of parties on a more fine-grained level or to make assumptions about a policy considering the entire document. In this paper, we extend the previous studies to provide a computational model for party positions and party similarity *at the level of policy domains*. To do so, we semi-automatically decompose the texts into interpretable thematic blocks based on an updated inventory of annotated labels from the Comparative Manifesto Project (CMP). Sentence embeddings leverage well the grouping of finergrained categories into these blocks, which we call policy domain from now on. Then, they are used to 7874 | Party | Text | Category | |----------------------------------------------------|---------------------------------------------------------------------------------------------------|----------------------| | AfD | The principles of equality before the law. | Equality: Positive | | CDU | We are explicitly committed to NATO's 2% target. | Military: Positive | | FDP | And with a state that is strong because it acts lean and modern | Government and | | instead of complacent, old-fashioned and sluggish. | Admin. Efficiency | | | SPD | There need to be alternatives to the big platforms - with real opportunities for local suppliers. | Market Regulation | | Grüne | We will ensure that storage and shipments are strictly monitored. | Law and Order: Posi. | | DieLinke | Blocking periods and sanctions are abolished without exception. | Labour groups: Posi. | Table 1: Translated examples of sentences from German federal election manifestos (2021) with their categories as annotated by the Comparative Manifesto Project. compute pairwise policy differences between parties. The results show that this re-grouping of categories into higher policy domains performs well not only at an aggregate level in comparison with the ground truths, but that they also match the positioning of parties within the political dimensions at the individual level of policy domains. Besides shedding light on the positioning of parties regarding where they most (dis)agree, we also avoid relying on the *salience* (i.e., frequency) of the categories. This assumption is implicit in many existing party positioning models including our own prior work (Ceron et al., 2022) and is motivated on the grounds that major domains, such as economic and social policy, should play a more prominent role. At the same time, there is strong evidence that voters re-weigh domains by their priorities (Iversen, 1994). We take this as evidence that models would benefit from focusing on modeling *within*-domain similarities and differences between parties. We evaluate the extent to which annotations can be forgone by evaluating several classifiers to automatically predict the policy domains of the 2021 German federal elections based on annotated manifestos from previous elections. Comparing the party positioning given by the manually annotated and the predicted labels, we find that the classifier can substitute annotations at an aggregate level and also in most policy domains, allowing new, unannotated documents to be analysed automatically. We make our code freely available.1 ## 2 Related Work The Comparative Manifesto Project. 
Party manifestos, also known as electoral programs or party platforms, condense parties' ideologies and 1https://github.com/tceron/additive_manifesto_ decomposition stances towards various policies (Budge, 2003). The Comparative Manifesto Project2annotates manifestos from multiple countries around the world following a codebook that takes into account the positioning of parties according to the left-right political dimension (Budge et al., 2001). The codebook contains 143 fine-grained categories. Table 1 shows some examples. The categories are labelled according to policies and may or may not contain the stance towards the policy as well. For example, there are two labels for Military: *Military: Positive* and *Military: Negative*, but there is only one category for *Peace* because no party is against it. In most cases, the annotations are assigned to every sentence of the manifesto, however, sentences are split into smaller parts whenever there is more than one self-contained category. Computational models of party positioning. Party manifestos, which provide a particularly rich source of information on parties' positions, have been extensively used in computational political science. In the pre-neural era, they mainly focused on word/token distributions to position parties along a scale; thus, the Wordscore approach used the distributions extracted from reference texts to determine party positioning of new texts (Laver et al., 2003). Slapin and Proksch (2008) focus on overcoming the disadvantageous dependence on reference texts which assumes that political discourse does not change significantly over time and that the reference corpus always contains good representations of extreme policy positions. Arguably, the adoption of (static) word embeddings such as Word2Vec (Mikolov et al., 2013) instead of word distributions constituted a step forward for computational models of party positioning. For example, Glavaš et al. (2017) take advantage of 2https://manifesto-project.wzb.eu/ the possibility to align word embeddings across languages to present a multilingual model for extracting party positions from speeches of the European parliament. Rheault and Cochrane (2020) exploit another property of embedding spaces, namely the information on graded word similarity implicit in them. They build combined representations from word embeddings and political metadata and then estimate the positions of different parties through dimensionality reduction. The embeddings are reduced to two dimensions and their projection in the space shows the alignment of parties from Britain, Canada, and the US on a left-right axis. The recent shift from static word embeddings to contextualized embeddings was a second important step. Contextualized embedding models, like BERT (Devlin et al., 2019), are not only able to pick up on corpus-specific usage of words, but can also be fine-tuned for specific tasks, which greatly improves the quality of the representations. In previous work (Ceron et al., 2022), we predicted global party similarity using Sentence-BERT (SBERT, Reimers and Gurevych 2019), a model for the task of sentence-similarity prediction. It uses a Siamese network with a triplet loss function that aims at placing mutually similar sentences close to one another in embedding space and pushing dissimilar ones apart. We found that SBERT representations can profit substantially from tuning by party, forcing the model to place sentences from the same party closely together in the semantic space. 
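As an illustration of this kind of party-aware tuning, the following is a hedged sketch using the sentence-transformers triplet objective; the model name, toy triplets, and training settings are assumptions about the general recipe, not the exact configuration of that work.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

# Each triplet: anchor and positive from the same party, negative from another.
triplets = [
    ("We will lower income taxes.",        # anchor (party A)
     "Our plan cuts the tax burden.",      # positive (party A)
     "We will expand public childcare."),  # negative (party B)
]
train_examples = [InputExample(texts=list(t)) for t in triplets]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
model.fit(train_objectives=[(loader, losses.TripletLoss(model))], epochs=1)
```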
Architectures similar to SBERT with modifications in the loss function have followed such as different types of contrastive and non-contrastive self-supervised learning (Gao et al., 2021) and normalization techniques in the distribution through an unsupervised objective during training (Li et al., 2020). The original SBERT architecture, though, remains the most widely used and numerous pretrained models, including multilingual ones, have been made publicly available (Ceron et al., 2022). Despite these successes, the computational studies mentioned above have not proposed a general way of capturing the positioning of parties within specific policy domains, opting for narrowly applicable ad-hoc modifications of existing algorithms. For example, Laver et al. (2003) adapt their reference values (related to the word distribution) to few chosen domains, and Slapin and Proksch (2008) manually identify sections of the manifestos that discuss economic issues. ![2_image_0.png](2_image_0.png) ## 3 Methodology 3.1 Workflow The goal of the additive manifesto decomposition method we propose is to computationally analyse the positioning of parties both at the level of policy domains and at an aggregated level of information. Figure 1 illustrates the four steps in which we decompose this analysis: (1), we define policy domains (visualized as colors). This is discussed in Section 3.2. (2), we label manifestos with the policy domains. Unless manual annotation is available, this involves training a policy domain labeller. This is discussed in Section 3.3. (3), we represent parties' positions on policy domains by vectors and compute the similarities between these vectors, which can later be aggregated to obtain global similarities. This is discussed in Section 3.4. Finally, (4), we apply a dimensionality reduction technique to the parties' policy domain distance matrix to be able to inspect their positions. We apply the methods that we propose to corpora from the Comparative Manifesto Project (CMP, cf. Section 2) and use examples from the CMP below for illustration. However, we believe that the CMP is fairly typical regarding size and annotation granularity for resources in computational political science. We are confident that our methods generalize to other corpora. ## 3.2 Policy Domain Grouping Given that the objective is to understand where parties (dis)agree the most according to the way they expose their stances and ideologies in the manifestos rather than on the salience of mentions of a policy, we first have to decompose the manifestos into interpretable thematic blocks, which we identify as policy domains. Policy domains are in principle freely definable in an inductive fashion (Waldherr et al., 2019) but must fulfil three requirements to be useful: (1) Domains must be coherent and interpretable in the context of policies to support the goal of understanding in which domains parties are most similar and dissimilar (2) Domains must be neutral with regard to stance. In other words, the categories with opposite stances (positive and negative) vis-a-vis a certain problem (e.g., immigration) should belong to the same policy domain. (3) Domains must be located at the right level of granularity: they must be detailed enough to be informative (cf. (1)), but not so detailed that accurate classification becomes impossible in practice. For example, the original CMP categories are arguably too fine-grained (such as the examples in Table 1). 
We propose that a reasonable granularity for party positioning can typically be achieved by *clustering* fine-grained category annotations from sources such as the CMP codebook. To do so, we represent the texts through sentence embeddings as state-of-the-art representations (cf. Section 2). This already enables us to compute cosine distances between all pairs of sentences belonging to two categories and use their average as a distance measure of topical coherence between two given categories. Formally, given a set of sentences {s1, s2*, . . . , s*n} and a disjoint collection of categories {C1, C2*, . . . , C*k}, for each category pair (Cp, Cq), we compute $$\operatorname{dist}(C_{p},C_{q})={\frac{1}{N}}\sum_{i\in C_{p},j\in C_{q}}1-\operatorname{cosine}(s_{i},s_{j})$$ where N is the number of sentence pairs. The resulting distance matrix between low-level CMP categories can then serve as input for an average-linkage hierarchical-clustering algorithm, which produces a tree of categories, from which a suitable level of abstraction can be selected that meets the requirements laid out above. Inspection of candidate policy domains is also adopted as a sanity check for the sentence embedding model. ## 3.3 Policy Domain Prediction For texts without policy domain annotation, we predict policy domains for all sentences using existing annotated corpora as training data. Technically, this is a labeling task where each token is a sentence (or segment thereof) which can be solved by any state-of-the-art classifier architecture. It has two main challenges. The first one is the high contextual dependence on political discourse. As a result, the classification of individual sentences is often challenging. For example, a vague formulation, such as *There is still a lot to do*, must take into account based on the category of the previous sentence, a possibility explicitly acknowledged by the CMP codebook. This clearly indicates that it is sensible to approach domain prediction as a *sequence* labeling task. The second challenge is that training and test data are always bound to be "out of domain", since they will differ in either country or time: we either need to project from past elections to new ones, or across countries, and thus political cultures. Since both of these settings can introduce strong concept drift, this makes the task an example of out-ofdomain prediction. The end result of policy domain prediction is then a decomposition of a party manifesto p into a disjoint collection of k policy domains {D p 1 , Dp 2 , . . . , Dp k}. Note that the set of sentences associated with any domain may be empty. ## 3.4 Computing Party (Dis)Similarities After decomposing the sentences of manifestos into policy domains, we compute the similarity between parties by domain. We re-use the simple coherence measure from the policy domain grouping (cf. Section 3.2). Again, this involves choosing a sentence embedding model, a parameter of our method. Given two parties' manifestos p and q, we interpret *dist*(D p i , Dq i ), the average pairwise distance among sentences for policy i as the distance between parties p and q for this domain. To obtain an aggregated party distance, we simply *average* the distances of all policy domains. As argued in Section 1, this removes the effect of domain salience from the model and arguably obtains the clearest party positioning as perceived by a "neutral" voter (Iversen, 1994). 
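A minimal sketch of this step, assuming each party's sentences have already been embedded, post-processed (e.g., whitened as in Section 4.2), L2-normalised, and grouped by policy domain; function and variable names are ours, not the released code.

```python
import numpy as np

def mean_pairwise_cosine_distance(A, B):
    """A: (n, d), B: (m, d) L2-normalised sentence embeddings of one domain."""
    return float(np.mean(1.0 - A @ B.T))        # average of 1 - cos over all pairs

def party_distance(domains_p, domains_q):
    """domains_p/q: dicts mapping each policy domain to a party's embeddings."""
    per_domain = {
        dom: mean_pairwise_cosine_distance(domains_p[dom], domains_q[dom])
        for dom in domains_p
        if len(domains_p[dom]) and len(domains_q[dom])   # skip empty domains
    }
    # Unweighted average over domains removes the effect of domain salience.
    return per_domain, float(np.mean(list(per_domain.values())))
```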
## 3.5 Multidimensional Scaling The results of the previous step can be represented as a square matrix of the distances between every party pair. In order to enable a more qualitative analysis of the results by policy domain, we apply a multi-dimensional scaling (MDS) technique which maps a distance matrix onto a one-dimensional scale while respecting the distances as well as possible. MDS models are well established for visualization in political science (Rheault and Cochrane, 2020; Heywood, 2021). We utilize Principal Component Analysis is chosen because the first component explains well the variability in the data. ## 4 Experimental Setup 4.1 Data We analyze the positions of the six German parties which obtained parliamentary seats in 2021 based on their 2021 federal election manifestos. These are Die Linke, Bündnis 90/Die Grünen, Christian Democratic Union (CDU), Free Democratic Party (FDP), Social Democratic Party for Germany (SPD), and Alternative for Germany (AfD). We train a policy domain labelled for these manifestos based on the annotated data provided by the CMP. We experiment with two training sets: DEtrain contains only manifestos from Germany dating from 2002 to 2017. The second instead, DACHtrain consists of manifestos from the majority German-speaking countries (Germany, Austria, and Switzerland) for all elections from 2002 to 2019. This allows us to understand whether the classifier benefits more from focused data of a single country (the country of interest for the analysis) or if the raw amount of data is more relevant. Appendix A provides details on data statistics. ## 4.2 Policy Domain Grouping To define our policy domains, we concatenate the manifestos of six German major political parties from the 2021 elections, together with their CMP annotations, into a single corpus. It contains a total of 69 annotated categories, however, only the ones with 10 occurrences or more are included in the grouping - a total of 61. We employ multilingual-mpnet-base-v2, the vanilla SBERT model to compute similarities3in order to make the clustering more general. It is a vanilla multi-lingual model with the base-size version of XLM-RoBERTa (Conneau et al., 2020) as the encoder trained on more than 50 languages.4 Representations from the multilingual SBERT model are post-processed with whitening transformation (Su et al., 2021), as suggested by experiments finding that more isotropic embeddings capture political text similarity substantially better (Ceron et al., 2022). Hierarchical agglomerative clustering led to a clustering that consistently grouped thematically close categories with opposite valences into single domains, as shown in Fig. 3 in Appendix B. In the inspection of the clustering tree, we verify that all 10 categories that contained positive and negative labels fall in the same cluster in order to satisfy requirement 2. We then selected the tightest possible clusters of categories that together formed coherent policy domains (fulfilling requirements 1 and 3). The remaining 8 categories (that were not included in the clustering) are added to the formed clusters manually. We consulted with political scientists and related work (Benoit and Laver, 2006; Jolly et al., 2022) to verify the result. The full list of CMP categories falling into each of our issues is presented in Appendix B. ## 4.3 Policy Domain Labelling As stated above in Section 3.3, domain labels in manifestos are context-dependent. 
Therefore, we give up the assumption of previous analyses of manifestos (Däubler and Benoit, 2021) that annotated sentences are independent units of information. Instead, we treat policy domain labelling as a sequence labelling task. Our preliminary experiments showed that incorporating sequence information is indeed beneficial for prediction quality, and we chose a simple "bigram"-based model: pairs of subsequent sentences from manifestos were concatenated, and the model was tasked with predicting the label of the second one.5 We use averaged token embeddings from xlm-roberta-large and pooled representations from the multilingual version of mpnet-base-v2 fine-tuned on paraphrase detection as sentence-pair embeddings6as encoded representations and use a two-layer MLP with tanh activation as the classification head. The system is then trained end-to-end for two epochs. As a first baseline, we choose the majority baseline between the 14 categories (13 policy domains in addition to the category "Other" which does not belong to any domain). The second baseline instead follows the same bi-gram idea in terms of input and is logistic regression fed with the representation taken from frozen SBERT mpnet-base-v2. ## 4.4 Party (Dis)Similarity - Sentence Encoders We experiment with four different sentence encoding models when computing party similarities (as explained in Section 3.4). Our baseline is FastText for German based on character n-gram embeddings (Bojanowski et al., 2017).7 The second model is a base-sized cased version of BERT trained on German data, a monolingual Transformer-based model. The representation of a given sentence from these models is an average of its token embeddings. Then, as end-to-end sentence encoders we use two versions of SBERT. The first is the vanilla SBERT pre-trained model multilingual-mpnet-base-v2. The second is SBERTdomain, a pre-trained model from our prior work (Ceron et al., 2022), which we fine-tuned on German CMP data from before 2019 to distinguish between 6 higher-level domains from the CMP codebook. Our preliminary experiments showed that applying post-processing with whitening improves all models. Therefore, all sentence representations in this step are whitened as in Section 4.2. ## 4.5 Evaluation 4.5.1 Ground Truth We evaluate our additive manifesto decomposition method against two sources of ground truth. RILE index. The RILE index is a widely used way of computing the positioning of parties on certain policy domains or in aggregate. Laver and Budge (1992) selected 12 categories from the CMP codebook as left-leaning and 12 others as right-learning.8 The score is then computed as RILE = (R − L)/N, where R and L are counts of sentences from the right and left categories, respectively. Dividing by N, the manifesto length, results in a normalized score between -1 and 1. As our approach returns a distance matrix, we need to use dimensionality reduction to obtain a single estimate per party. For this purpose, we extract the first axis of the classical MDS algorithm applied to distance matrices - corresponding to the first principal component in PCA analysis. CMP-category salience. Given that the RILE index makes use of only 24 out of the 143 categories from the CMP codebook, we used another type of ground truth that takes into account all categories and corresponds to the traditional political science approach of comparing domain saliences, i.e. relative prominences of different policy categories in manifestos (Budge et al., 2001). 
## 4.6 Evaluation Metrics

We evaluate the results of the first principal component analysis against the RILE score with Pearson correlation in order to understand the extent to which our models capture the aggregated left-right dimension of the political spectrum through textual similarity. To check how well our method captures the more nuanced approach of measuring party-platform dissimilarities from category saliences, we use the Mantel test (Mantel, 1967). For both metrics, both by-domain and aggregate agreement scores can be computed. For experiments with unannotated manifestos, we predict the policy domain labels using the best-performing classifier and then repeat the evaluation in the same way using the predicted labels.

## 5 Results And Discussion

## 5.1 Annotated Setup

In the *annotated setup*, we use the ground truth of policy domains as annotated in the CMP dataset. We evaluate the party-positioning landscape extracted using our method, both in aggregate and for different policy domains, against the ground truths: the RILE scores and the distances computed using CMP-category saliences.

Aggregated similarity. Table 2 illustrates the correlation of the aggregated similarity computation with the ground truths. Correlations are very high in both ground truths with small differences across models. FastText, our baseline, performs best in predicting the RILE index (r = 0.94) and second in the CMP distance (r = 0.80). We believe that the excellent performance of this model is due to the similarity computation. The comparison between sentences from the same policy domain (theme) might help in capturing fine-grained differences in stances between parties. BERTGerman is the model that performs the worst, though by a slim margin - as previous research suggested, the quality of BERT sentence representations is low (Li et al., 2020). Finally, SBERTvanilla and SBERTdomain have comparable results. While the former performed the best on RILE (r = 0.91) in comparison with the latter (r = 0.87), the latter comes out first in the CMP distances (r = 0.84 vs. 0.80). This suggests that the non-fine-tuned model can still excel in the task of text similarity on out-of-domain data. Depending on the purpose, however, the fine-tuned version might be a better option, in line with previous results on representing political text (Ceron et al., 2022).

| Model | Annotated: Rile (r) | Annotated: CMP (Mantel) | Predicted: Rile (r) | Predicted: CMP (Mantel) |
|--------------|---------------------|-------------------------|---------------------|-------------------------|
| FastText | 0.94* | 0.80* | 0.67 | 0.76* |
| BERTGerman | 0.84* | 0.77* | 0.59 | 0.79* |
| SBERTvanilla | 0.91* | 0.80* | 0.56 | 0.71* |
| SBERTdomain | 0.87* | 0.84* | 0.79* | 0.80* |

![6_image_0.png](6_image_0.png)

Similarity by policy domains. We further analyze the output of the best model, namely SBERTdomain. Figure 2 shows the results of the application of MDS to the policy domain distance matrices. On the left-hand side of the plot lie the names of the policy domains and on the right-hand side the Pearson's r with respect to the RILE score. The higher the (absolute value of the) correlation coefficient, the more the scale in question follows the classic left-right scale as measured by RILE.
As expected, some policy domains yield high correlation whereas others do not. Importantly, this is not a measure of model quality. Rather, as has often been observed in the political-science literature, the left-right scale cannot explain the complete picture of party positioning (Heywood, 2021). Therefore, quantitative analysis has to be complemented by qualitative judgments about the appropriateness of the predictions. Indeed, the results mirror some well-known facts about German politics. For example, in *foreign relations, EU and protectionism* - which is only moderately correlated with the left-right scale at r = 0.47 - the AfD is an outlier compared to other parties, arguably because it is against being part of the European Union and has a different stance with regard to having ties with Russia as compared with the other parties, which all fall in the same region. Another case is *education and technology*, where AfD and Die Linke, which can generally be regarded as the opposite poles of the left-right spectrum, happen to share a lot of common ground in their stance toward the expansion of education and investment in technology and infrastructure (r = -0.38). On the other hand, in policy domains such as *military and peace* and *immigration and multiculturalism*, party positions align very well with the overall left-right scale (r > 0.85), with right-leaning parties being more militaristic and immigration averse. In sum, we take the results of this analysis as evidence that our workflow produces accurate fine-grained characterizations of party positions.

| Model | DEtrain | DACHtrain |
|------------------------|---------|-----------|
| Majority Baseline | 14.5% | 14.5% |
| SBERTfrozen+log. reg. | 55.3% | 56.7% |
| RoBERTaxlm+MLP | 62.5% | 64.5% |
| SBERTtune+MLP | 60.4% | 63.1% |

Table 3: Accuracy score of the classifier on the test set (same test set for both training datasets).

| Policy domain | Mantel | Acc. |
|--------------------------------------------------|--------|-------|
| culture & civic mindedness | 0.51 | 58.2% |
| democracy & constitutionalism | 0.92* | 62.8% |
| education & technology | 0.89* | 61.8% |
| equality | 0.94* | 70.7% |
| foreign relations, eu & protectionism | 0.96* | 70.5% |
| government admin, (de)centralization & econo... | 0.91* | 53.0% |
| immigration, multiculturalism & human rights | 0.96* | 53.8% |
| labour groups & welfare state | | |
| law and order & traditional morality | 0.78* | 71.8% |
| market regulation & nationalisation | 0.83* | 72.0% |
| military & peace | 0.88* | 86.9% |
| political authority, civic mindedness & anti... | 0.34 | 27.9% |
| sustainability & agriculture | 0.97* | 77.4% |

Table 4: Mantel correlation between the distance matrices of the annotated and the predicted setups. ∗ means p < 0.05. Acc.: accuracy of classifier within each policy domain.

## 5.2 Predicted Setup

In the predicted setup, we do not use the CMP annotations of policy domains but predict the policy domains instead.
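Before turning to the labeller's accuracy, the sketch below illustrates one way the bigram classifier of Section 4.3 could be implemented. The encoder checkpoint and the 14-way output follow Appendix A.3, while the pooling details, head width, and the German example pair are illustrative assumptions.

```python
# A simplified sketch of the bigram policy-domain labeller: consecutive manifesto
# sentences are concatenated and the model predicts the domain of the *second* one.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class BigramDomainLabeller(nn.Module):
    def __init__(self, model_name="xlm-roberta-large", hidden=1024, num_labels=14):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # Two-layer MLP with tanh activation as the classification head.
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                  nn.Linear(hidden, num_labels))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        mask = attention_mask.unsqueeze(-1).float()
        # Averaged token embeddings of the sentence pair as its representation.
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
        return self.head(pooled)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = BigramDomainLabeller()

# Hypothetical sentence pair: previous sentence + current sentence.
batch = tokenizer(["Wir senken die Steuern."], ["Dafür bauen wir Bürokratie ab."],
                  padding=True, truncation=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```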
Policy domain labeller. Table 3 shows the accuracy of the models and the majority baseline on the test set. Overall, the larger but more varied training set including all German-speaking countries (DACHtrain) performs better than DEtrain (data only from Germany) in all models, suggesting that it is not necessary to exclusively have data from the same country of analysis - given the similarity in the political scenario. As expected, SBERTfrozen, which is not fine-tuned for the task, performed the worst (55.3% and 56.7%), SBERT+MLP performed second best (60.4% and 63.1%), and the best model is XLM-RoBERTa-large+MLP (62.5% and 64.5%), whose bigger size likely won out over the additional pretraining of a smaller model. The results of the XLM-RoBERTa-large model fine-tuned on DACHtrain are used for the rest of this analysis.

Aggregated similarity. We evaluate how the predictions of our policy domain labeller perform in a scenario where there are new upcoming elections and no annotations are available. Table 2 shows that even though the results are not as strong as in the annotated scenario, the correlation scores are still high for CMP saliences. In terms of models, SBERTdomain is the best-performing model (Mantel r = 0.80), similarly to the annotated scenario; SBERTvanilla is the worst-performing encoder (r = 0.71), with FastText (r = 0.76) and BERTGerman (r = 0.79) in between. As for the RILE score, only SBERTdomain demonstrates a statistically significant correlation. These results confirm that the additive manifesto decomposition is dependent on the precision of the policy domain labels but can also provide interpretable results for unannotated data.

Similarity by policy domains. Our sources of ground truth do not provide us with gold measures of the similarity within each policy domain. Therefore, we cannot directly evaluate by-domain matrices produced with the predicted data. However, we can indirectly evaluate their usefulness by comparing them to the matrices produced using the gold annotations, which we already know to be highly meaningful. Table 4 shows the Mantel correlations between the distance matrices produced with the annotated setup and the one from the predicted setup for each policy domain. Mantel correlation is 0.78 or higher in 10 out of 13 policy domains. Negative outliers are *culture and civic mindedness*, *political authority* and *labour groups and welfare state*. We further investigate whether there is a correlation between the number of correctly labelled sentences by the classifier (measured by accuracy) and the Mantel correlation of the results. We find that there is a relatively strong correlation (Pearson r = 0.59, p = 0.03). This suggests that one can predict which policy domains will yield the most faithful results in an unsupervised scenario on the basis of their accuracy in the policy domain labeling part of the workflow.

## 6 Conclusion

In our first contribution, we introduce Additive Manifesto Decomposition, a workflow for efficient analysis of party platforms, both in aggregate and across a range of policy issues. It builds on state-of-the-art sentence-representation models, which it uses for three operations on policy domains: definition, prediction, and (cross-party) similarity computation.
In this manner, our workflow can incorporate advances on the representational level (Reimers and Gurevych, 2019; Ceron et al., 2022) but complements them with a crucial level of reflection and analysis at the informative level of policy domains. Our second contribution is a study of the political landscape in Germany using our workflow. The results we obtain match well with expert judgements, suggesting that our workflow yields a reliable technique to automatically study the similarity between parties across policy domains. In addition to analysing the implicit stance space, operationalized through distance matrices derived from text similarity, we show that our method makes it possible to recover the traditional scaling analyses of the political science literature: we can efficiently approximate the aggregate RILE (right-left) scores provided by experts in the aggregate settings, and when proceeding by domain, we see that our methods recover non-trivial policy configurations, e.g., the agreement of the far-right and far-left parties in Germany on the subject of EU and the expansion of education. Moreover, we show that classifiers substitute the annotations of these high-level domains and still yield similar results as compared to the fully annotated scenario. Germany provided an appropriate target for our case study, given both the large number of annotated manifestos and large body of expert analyses. Nevertheless, an important direction for future work is testing the applicability of our workflow to other countries, in particular regarding the training of policy domain labelers given the challenging concept drift between elections, and the possible cross-lingual application of our model components despite differences between political cultures (Braun and Schmitt, 2020). Lastly, our methodology does not only suit the identification of the positioning in the political domain, but more broadly it can be seen as a different way of identifying the stance of an entity (person, organization, group). It can be applied whenever there is some aggregation of texts with regard to a set of entities. The distinction lies in the more fine-grained identification of stances: we (a) take larger chunks of text as input and (b) position the entities on a scale rather than characterizing them as in favor, neutral or against a given topic. ## 7 Limitations The main limitation of the proposed study is the relatively small scale of the dataset it is based on. The proposed method is scalable and computationally undemanding (all of the analyzed models can be trained on a single GPU with 12G of memory), and it is feasible to apply it to other countries in the CMP dataset. However, in order to arrive at interpretable results that could be verified in terms of policy substance based on the experts' knowledge of the political spectrum, we had to focus the evaluation part on the materials of a single election cycle in one country. Potentially, the method can be applied to any country whose manifestos have CMP annotations, however, further investigation with data from other countries needs to be carried out to verify that. While most policies are recurrent in manifestos, there may be a few topics appearing in upcoming elections, adding some variability in debate across election years. The policy domain labeller might need to be updated every now and then with current topics of interest (e.g. Covid, a sudden expansion of the military). 
Therefore, the effect of news electoral programs in the classification step requires more investigation namely, the feasibility of further training with new topics of the current debate or the necessity to re-train the whole classifier with new manifestos over again. That being said, the CMP codebook has remained the same for over two decades now. We take this as evidence that the policy domains do not need to change, only the ability of the classifier to correctly identify sentences with unseen topics. ## Acknowledgements We are thankful for the insights on policy and party positioning contributed by Nils Düpont, Sebastian Haunss and Nico Blokker. We acknowledge funding by Deutsche Forschungsgemeinschaft (DFG) for project MARDY 2 (375875969) within the priority program RATIO. ## References Kenneth Benoit and Michael Laver. 2006. *Party policy* in modern democracies. Routledge. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146. Daniela Braun and Hermann Schmitt. 2020. Different emphases, same positions? The election manifestos of political parties in the EU multilevel electoral system compared. *Party Politics*, 26(5):640–650. Ian Budge. 2003. Validating the Manifesto Research Group approach: theoretical assumptions and empirical confirmations. In *Estimating the policy position* of political actors, pages 70–85. Routledge. Ian Budge, Hans-Dieter Klingemann, Andrea Volkens, Judith Bara, and Eric Tanenbaum, editors. 2001. Mapping Policy Preferences: Estimates for Parties, Electors, and Governments 1945-1998. Oxford University Press, Oxford, New York. Tobias Burst, Werner Krause, Pola Lehmann, Jirka Lewandowski, Theres Matthieß, Nicolas Merz, Sven Regel, and Lisa Zehnter. 2021. Manifesto corpus. version: 2021.1. *Berlin: WZB Berlin Social Science* Center. Tanise Ceron, Nico Blokker, and Sebastian Padó. 2022. Optimizing text representations to capture (dis)similarity between political parties. In *Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)*, pages 325–338, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Thomas Däubler and Kenneth Benoit. 2021. Scaling hand-coded political texts to learn more about left-right policy content. *Party Politics*, page 13540688211026076. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. 
Goran Glavaš, Federico Nanni, and Simone Paolo Ponzetto. 2017. Unsupervised cross-lingual scaling of political texts. In *Proceedings of the 15th Conference of the European Chapter of the Association* for Computational Linguistics: Volume 2, Short Papers, pages 688–693, Valencia, Spain. Association for Computational Linguistics. Andrew Heywood. 2021. *Political ideologies: An introduction*. Bloomsbury Publishing. Torben Iversen. 1994. Political leadership and representation in West European democracies: A test of three models of voting. American Journal of Political Science, 38(1):45–74. Seth Jolly, Ryan Bakker, Liesbet Hooghe, Gary Marks, Jonathan Polk, Jan Rovny, Marco Steenbergen, and Milada Anna Vachudova. 2022. Chapel Hill expert survey trend file, 1999–2019. *Electoral Studies*, 75:102420. Michael Laver, Kenneth Benoit, and John Garry. 2003. Extracting policy positions from political texts using words as data. *American political science review*, 97(2):311–331. Michael J Laver and Ian Budge. 1992. Measuring policy distances and modelling coalition formation. In Party policy and government coalitions, pages 15–40. Springer. Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119–9130, Online. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In Proceedings of the 7th International Conference on Learning Representations, New Orleans, 6-9 May 2019. Nathan Mantel. 1967. The detection of disease clustering and a generalized regression approach. Cancer research, 27(2):209–220. Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In *Proceedings of the* International Conference on Learning Representations. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Ludovic Rheault and Christopher Cochrane. 2020. Word embeddings for the analysis of ideological placement in parliamentary corpora. *Political Analysis*, 28(1):112–133. Jonathan B Slapin and Sven-Oliver Proksch. 2008. A scaling model for estimating time-series party positions from texts. American Journal of Political Science, 52(3):705–722. Jianlin Su, Jiarun Cao, Weijie Liu, and Yangyiwen Ou. 2021. Whitening sentence representations for better semantics and faster retrieval. *ArXiv*, abs/2103.15316. Annie Waldherr, Lars-Ole Wehden, Daniela Stoltenberg, Peter Miltner, Sophia Ostner, and Barbara Pfetsch. 2019. Inductive codebook development for content analysis: Combining automated and manual methods. *Forum Qualitative Sozialforschung / Forum:* Qualitative Social Research, 20(1). 
## A Data Statistics And Handling A.1 Data For The Party Positioning Analysis | Party | 2021 | |---------------------------------------------------------|--------| | The Left (Die Linke) | 4850 | | Social Democratic Party of Germany (SDP) | 1665 | | Alternative for Germany (AfD) | 1574 | | Christian Democratic Union/Christian Social Union (CDU) | 2775 | | Alliance'90/Greens (Grüne) | 3947 | | Free Democratic Party (FDP) | 2239 | Table 1: Number of sentences per party per year from the 2021 German elections. ## A.2 Data For Training The Policy Domain Classifiers Preprocessing. The CMP annotations contain the H and 0 labels for some sentences. While Hs are excluded from all the modelling because they represent the heading of a section. The 0 label is kept for the classifier in order to emulate a real world case scenario where there are labels that do not represent any policy domain/category. The "Germany" training regime with manifestos from Germany only contains 57,259 instances whereas the "German" regime with data from German-speaking countries has 106,724 instances in total. 10% of each of them is used as the validation set. 2017 2002 2005 2009 2013 Alliance'90/Greens 3826 1644 1860 3578 5382 Alternative for Germany 1003 0 0 0 72 The Left 3926 0 0 1660 2453 Free Democratic Party 2053 1971 1398 2230 2560 Party of Democratic Socialism 0 840 0 0 0 Christian Democratic Union/Christian Social Union 1340 1293 769 1975 2534 Social Democratic Party of Germany 2631 1591 880 2181 2873 The Left. Party of Democratic Socialism 0 0 572 0 0 Pirates 0 0 0 0 1755 Table 2: Number of sentences per party per year from the German elections. Table 3: Number of sentences per party per year from the Swiss elections. | Party | 2007 | 2019 | 2011 | 2015 | |----------------------------------------------------|--------|--------|--------|--------| | Christian Democratic People's Party of Switzerland | 125 | 313 | 148 | 278 | | FDP. The Liberals | 126 | 784 | 207 | 110 | | Swiss People's Party | 1035 | 1423 | 120 | 1329 | | Conservative Democratic Party of Switzerland | 0 | 974 | 72 | 329 | | Swiss Labour Party | 104 | 673 | 0 | 353 | | Green Liberal Party | 94 | 144 | 68 | 225 | | Christian Social Party | 172 | 0 | 270 | 0 | | Social Democratic Party of Switzerland | 1133 | 122 | 71 | 129 | | Federal Democratic Union | 40 | 637 | 0 | 0 | | Green Party of Switzerland | 800 | 571 | 411 | 506 | | Protestant People's Party of Switzerland | 89 | 129 | 25 | 553 | Party 2017 2019 2002 2006 2008 2013 The New Austria and Liberal Forum 126 1170 0 0 0 1006 Team Stronach for Austria 0 0 0 0 0 1195 Austrian Communist Party 0 0 0 0 113 0 Austrian People's Party 2793 719 2163 2051 602 1157 Austrian Freedom Party 452 220 2667 325 461 115 Peter Pilz List 71 0 0 0 0 0 Austrian Social Democratic Party 2722 1893 1139 714 1189 716 Alliance for the Future of Austria 0 0 0 551 342 0 The Greens 1084 2248 683 693 691 2369 Table 4: Number of sentences per party per year from the Austrian elections. 
Austria 3348 555 2301 2369 2181 976 1905 Germany 5462 1614 2784 3903 5182 2744 5094 Switzerland 779 403 431 1388 1218 763 1070 Total 9589 2572 5516 7660 8581 4483 8069 Country**labour groups** and welfare state sustainability and agriculture education and technology culture and civic mindedness government admin, (de)centralization and economic planning law and order and traditional morality**other** Austria 5222 3288 4238 1476 3450 3131 224 Germany 6386 4311 5999 1484 7865 4022 409 Switzerland 2022 2198 1377 285 1378 1380 109 Total 13630 9797 11614 3245 12693 8533 742 Country equality military and peace **democracy and** constitutionalism**foreign relations, eu** and protectionism**market regulation** and nationalisation political authority, civic mindedness and anti-imperialism immigration, multiculturalism and human rights Table 5: Number of sentences per label and country for training the policy domain labeller. ## A.3 Models' Hyperparameters And Libraries SBERT*frozen*+Logistic Regression: - No hyperparameter optimization for the logistic regression model - default parameters from the library Scikit-learn - Frozen SBERT model: paraphrase-multilingual-mpnet-base-v2 RoBERTaxlm + Multi-layer perception (MLP): - RoBERTa model: xlm-roberta-large - First linear layer's input size: R Nx1024 - One tahn activation layer - Second linear layer's input size: R Nx14 - 5 epochs - AdamW optimizer (Loshchilov and Hutter, 2019) - Learning rate: 10−5 - HuggingFace for implementation SBERT*tune* + Multi-layer perception (MLP): - SBERT model: paraphrase-multilingual-mpnet-base-v2 - First linear layer's input size: R Nx768 - One tahn activation layer - Second linear layer's input size: R Nx14 - 5 epochs - AdamW optimizer (Loshchilov and Hutter, 2019) - Learning rate: 10−5 - SBERT HuggingFace for implementation Hardware information for all experiments: - System CPU: 2 x Intel Xeon E5-2650 v4, 2,20GHz, 12 Core - 24 cores - 256 GByte of memory - GPU: 4 x Nvidia GeForce GTX 1080 Ti, 12 GB B Appendix B.1 Hierarchical clustering with CMP categories ![13_image_0.png](13_image_0.png) ## B.2 Cmp Categories Clustered Across Germany, Switzerland, And Austria | policy domain | Categories from CMP | |-------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | equality | Equality: Positive | | military and peace | Military: Negative, Peace, Military: Positive Political Corruption, Direct Democracy: Positive, Democracy General: Positive, | | democracy and | Constitutionalism: Negative, Representative Democracy: Positive, | | constitutionalism | Constitutionalism: Positive, Democracy General: Negative, Democracy | | foreign relations, eu | Internationalism: Negative, European Community/Union: Positive, Protectionism: Negative, | | and protectionism | Protectionism: Positive, Internationalism: Positive, European Community/Union: Negative | | market regulation and nationalisation | 
Nationalisation, Controlled Economy, Free Market Economy, Market Regulation Civic Mindedness: Bottom-Up Activism, Political Authority: Party Competence, Anti-Imperialism: State Centred Anti-Imperialism, Marxist Analysis, National Way of Life General: Negative, National Way of Life General: Positive, Transition: Rehabilitation and Compensation, Political Authority: Personal Competence, Political Authority, Political Authority: Strong government, Transition: Pre-Democratic Elites: Negative, Civic Mindedness: Positive, Anti-Imperialism, Anti-Imperialism: Foreign Financial Influence | | political authority, civic mindedness and anti-imperialism | National Way of Life: Immigration: Negative, Human Rights, Underprivileged Minority Groups, Multiculturalism General: Negative, Multiculturalism: Immigrants Assimilation, Foreign Special Relationships: Positive, Multiculturalism General: Positive, Multiculturalism: Immigrants Diversity, National Way of Life: Immigration: Positive, Freedom and Human Rights, Multiculturalism: Indigenous rights: Positive, Multiculturalism: Positive, National Way of Life: Positive, National Way of Life: Negative, Multiculturalism: Negative, Foreign Special Relationships: Negative | | labour groups | Welfare State Limitation, Middle Class and Professional Groups, Labour Groups: Positive, | | and welfare state | Labour Groups: Negative, Welfare State Expansion | | sustainability | Environmental Protection, Agriculture and Farmers: Positive, Sustainability: Positive, | | and agriculture | Agriculture and Farmers: Negative, Agriculture and Farmers: Positive | | education and technology | Technology and Infrastructure: Positive, Education Expansion, Education Limitation | | culture and civic mindedness | Culture: Positive, Civic Mindedness General: Positive | | government admin, (de)centralization and economic planning immigration, multiculturalism and human rights | Governmental and Administrative Efficiency, Corporatism/Mixed Economy, Anti-Growth Economy: Positive, Keynesian Demand Management, Centralisation, Economic Growth: Positive, Decentralization, Incentives: Positive, Economic Goals, Economic Planning, Economic Orthodoxy, Anti-Growth Economy: Positive Law and Order: Negative, Traditional Morality: Negative, Non-economic | | law and order and | Demographic Groups, Freedom, Law and Order: Positive, Traditional Morality: | | traditional morality | Positive, Law and Order: Positive | | Table 6: Categories of CMP in final policy domain clusters. The ones in blue are the results of the policy domain | | Table 6: Categories of CMP in final policy domain clusters. The ones in blue are the results of the policy domain grouping approach with SBERT whereas the ones in purple refer to the categories that occurred less than 10 times in the 2021 German manifestos, and therefore, are added manually in the clusters. The ones in black are also manually added because they were annotated in the manifestos used for the classification, but not for the analysis. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 (Limitations) ✗ A2. Did you discuss any potential risks of your work? Because there are no risks concerning this work, to the best of our knowledge. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and section 1 (Introduction) ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. 
B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Sections 3 (Methodology) and 4 (Experimental Setup) ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 (Experimental Setup) ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 (Experimental Setup) ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Sections 3 (Methodology) and 4 (Experimental Setup) and in the Appendix. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 
No response.
zhan-etal-2023-similarizing
Similarizing the Influence of Words with Contrastive Learning to Defend Word-level Adversarial Text Attack
https://aclanthology.org/2023.findings-acl.500
Neural language models are vulnerable to word-level adversarial text attacks, which generate adversarial examples by directly substituting discrete input words. Previous search methods for word-level attacks assume that the information in the important words is more influential on prediction than unimportant words. In this paper, motivated by this assumption, we propose a self-supervised regularization method for Similarizing the Influence of Words with Contrastive Learning (SIWCon) that encourages the model to learn sentence representations in which words of varying importance have a more uniform influence on prediction. Experiments show that SIWCon is compatible with various training methods and effectively improves model robustness against various unforeseen adversarial attacks. The effectiveness of SIWCon is also intuitively shown through qualitative analysis and visualization of the loss landscape, sentence representation, and changes in model confidence.
# Similarizing The Influence Of Words With Contrastive Learning To Defend Word-Level Adversarial Text Attack Pengwei Zhan§‡**, Jing Yang**§∗ , He Wang§, Chao Zheng§, Xiao Huang§**, Liming Wang**§ §Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China ‡School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China {zhanpengwei,yangjing,wanghe6029}@iie.ac.cn {zhengchao1135,huangxiao,wangliming}@iie.ac.cn ## Abstract Neural language models are vulnerable to word-level adversarial text attacks, which generate adversarial examples by directly substituting discrete input words. Previous search methods for word-level attacks assume that the information in the important words is more influential on prediction than unimportant words. In this paper, motivated by this assumption, we propose a self-supervised regularization method for Similarizing the Influence of Words with Contrastive Learning (SIWCon) that encourages the model to learn sentence representations in which words of varying importance have a more uniform influence on prediction. Experiments show that SIWCon is compatible with various training methods and effectively improves model robustness against various unforeseen adversarial attacks. The effectiveness of SIWCon is also intuitively shown through qualitative analysis and visualization of the loss landscape, sentence representation, and changes in model confidence. ## 1 Introduction Neural language models have achieved impressive performance in various natural language processing (NLP) tasks, but they are also proven vulnerable to adversarial examples, which induce incorrect model output by adding small perturbations to natural inputs (Szegedy et al., 2014; Jia and Liang, 2017). Unlike attacks on images, which are performed by directly adding imperceptible continuous noise to the input, adversarial text attacks are commonly performed by substituting input text due to the discrete and non-differentiable nature of text (Gao et al., 2018; Alzantot et al., 2018; Li et al., 2019; Zhan et al., 2022b; Garg and Ramakrishnan, 2020). Among the various granularities of adversarial text attacks, word-level attacks have been more focused on by recent works for their effectiveness in maintaining semantic similarity and grammatical ∗Corresponding Author. ![0_image_0.png](0_image_0.png) correctness. Unlike character-level and sentencelevel attacks, word-level attacks are less likely to be detected by spell checkers or to undermine the overall coherence of a sentence (Ebrahimi et al., 2018; Iyyer et al., 2018; Liang et al., 2018). Under a unified framework, word-level attacks can always be formulated as a combinatorial optimization problem (Yoo et al., 2020; Morris et al., 2020a,b), and various attack methods can be decomposed into *Search Space* and *Search Method*. The search space contains the possible substitutions for each word, while the search method determines the substitution order and strategy for selecting the optimal substitution from the search space. Since the search space may be model-agnostic, we should focus on the search method for the potential of improving the robustness against word-level attacks. 
Previous search methods for word-level attacks are based on the assumption: different *words* in a sentence contribute differently to model prediction, with the information in important words being more influential than the information in unimportant *words.* Therefore, following the word importance scores obtained through attribution methods, the attack can be seen as a process of *iteratively* substituting words in a sentence, with important words substituted first, followed by unimportant words. For example, the search methods Word Importance Ranking (WIR) (Gao et al., 2018; Jin et al., 2020; Li et al., 2020) and PWWS (Ren et al., 2019) obtain word importance using Occlusion (Zeiler and Fergus, 2014), then WIR performs substitution in descending order of word importance and PWWS formulates token scores that use word importance as weights to guide the attack. Following this assumption, the success of wordlevel attacks can be explained. The words in a sentence can be classified as important words, which contain more influential information for prediction, or unimportant words, which contain less influential information. Search methods that substitute important words first can perturb more influential information in each attack step, making the model more likely to be deceived. Therefore, it is natural to wonder: will the model be more robust when the information in both important and *unimportant* words has a similar degree of influence on *prediction?* Motivated by this question, we propose a selfsupervised regularization method for Similarizing the Influence of Words with Contrastive Learning (SIWCon) that improves the model robustness against word-level attacks. The motivation of our method is illustrated in Figure 1. We summarize our main contributions as follows: 1. We discuss the relationship between model robustness and the influence of information in words of different importance. 2. We propose SIWCon, a contrastive learning method that improves the robustness of language models by encouraging models to learn sentence representations that consider the information in words of different importance to have a more similar influence on prediction. 3. We evaluate SIWCon against several attack methods on three models of different architectures and on Movie Review (MR), SST2, and IMDB datasets. Results show that SIWCon improves the model robustness against unforeseen adversarial attacks *without learning from* any adversarial perturbation. 4. We provide qualitative analysis and visualization on loss landscape, sentence representation, and model confidence change, intuitively showing the effectiveness of SIWCon. ## 2 Related Works Robustness of Language Models. The current methods for improving the robustness of language models ignore the assumption discussed in §1. While some works attempt to detect or transform potential adversarial examples before the model (Zhou et al., 2019; Mozes et al., 2021), this does not actually improve the model's robustness. Other methods, such as performing certifiably robust training through interval bound propagation (IBP), can be computationally costly and difficult to scale to large models like BERT (Jia et al., 2019; Huang et al., 2019). Additionally, it has been reported that while IBP improves adversarial accuracy, it comes at the huge cost of reduced clean accuracy (Wang et al., 2021). 
Some works try to perform adversarial training by incorporating adversarial examples in the training set (Jin et al., 2020; Li et al., 2021), but this method can only improve the robustness against the adversarial perturbations that the model has seen. Moreover, generating adversarial examples is time-consuming, thus adversarial training is difficult to scale to a large dataset. In this paper, based on this ignored assumption, we discuss the model robustness from new perspectives, focusing on attribution and sentence representation.

Contrastive Learning. Contrastive learning was first proposed in computer vision tasks to help models learn better image representations (Chen et al., 2020a,b; He et al., 2020; Pan et al., 2021). This self-supervised learning method alleviates the dependence on costly labeled data. Recently, encouraged by the superior performance, various contrastive learning methods have been proposed for NLP tasks. Following the discrete nature of text, some previous works construct paired examples by augmenting the input sentence (Giorgi et al., 2021; Wu et al., 2020; Fang and Xie, 2020; Zhan et al., 2022a; Gao et al., 2021), e.g., by word deleting, reordering, substituting, and back-translating, or by augmenting the word embedding (Yan et al., 2021), e.g., by shuffling, cutting off, or dropping out the embedding matrix. Unlike the previous works that aim to improve the downstream performance, we focus on improving the model robustness.

## 3 Methodology

## 3.1 Preliminaries

Suppose we have the input text X ∈ X and the output labels Y ∈ Y = {1, . . . , C} that follow the data distribution D. A model fθ : *X → Y* that maps the input text to the output probability space is trained by minimizing Lce(X, Y ; θ):

$$\mathbb{E}_{(\mathbf{X},Y)\sim\mathcal{D}}\Big[-\log\frac{\exp(w_{Y}^{T}r_{\theta}(\mathbf{X}))}{\sum_{k=1}^{C}\exp(w_{k}^{T}r_{\theta}(\mathbf{X}))}\Big]\;,\tag{1}$$

where wY ∈ W denotes the model classification parameters toward class Y, W denotes the overall classification parameters, and rθ(·) denotes the latent sentence representation encoded by the model f with parameters θ. The well-trained model can learn the distribution of data and predict the input sentence based on the posterior probability:

$$P(Y_{true}|\mathbf{X})=\frac{\exp(w_{true}^{T}r_{\theta}(\mathbf{X}))}{\sum_{k=1}^{C}\exp(w_{k}^{T}r_{\theta}(\mathbf{X}))}\;,\tag{2}$$

where w*true* denotes the classification parameters toward the ground-truth class Y*true*. To attribute the prediction P(Y*true*|X), i.e., to identify the words that are most influential on the prediction (Li et al., 2016b; Ross et al., 2017; Sundararajan et al., 2017; Kim et al., 2020), we use the gradient-based attribution method (Feng et al., 2018; Li et al., 2016a; Arras et al., 2016; Situ et al., 2021). The influence score of word xi ∈ X can be formally defined as:

$$\mathit{Score}(x_{i})=\left\|\frac{\partial\,w_{true}^{T}r_{\theta}(\mathbf{X})}{\partial\,emb(x_{i})}\right\|_{2}\,,\tag{3}$$

where emb(·) denotes the word embedding, and ‖·‖2 denotes the L2 norm. The influence of a word is the norm of the influence score of every embedding dimension.
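A minimal sketch of how the influence score in (3) can be computed is given below; it assumes a PyTorch classifier that accepts pre-computed token embeddings (`inputs_embeds`) and directly returns class logits, which is an illustrative interface rather than the exact implementation used in this paper.

```python
# Sketch of the gradient-based word influence score of Eq. (3): take the gradient of the
# true-class logit with respect to each word embedding and report its L2 norm per token.
import torch

def word_influence_scores(model, embedding_layer, input_ids, true_label):
    embeds = embedding_layer(input_ids)            # (1, seq_len, emb_dim)
    embeds = embeds.detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds)           # assumed to return (1, num_classes)
    logits[0, true_label].backward()               # d(w_true^T r_theta(X)) / d emb(x_i)
    # L2 norm over the embedding dimensions of each token = Score(x_i).
    return embeds.grad.norm(p=2, dim=-1).squeeze(0)
```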
## 3.2 Word-Level Adversarial Attack

Following the analysis of word-level adversarial attacks in §1, an adversarial example Xadv generated by search methods from a normal example X = (xn)n∈{1,...,N} can be formulated as:

$$\begin{aligned}\mathbf{X}^{adv}&=\mathcal{O}(\mathbf{X})=o(x_{n})_{n\in\{1,\ldots,N\}}\;,\\ \text{s.t.}\quad&\forall n\in\{1,\ldots,N\},\ \Delta x_{n}<\delta,\ \text{and}\ \Delta\mathbf{X}<\varepsilon\;,\\ \text{and}\quad&\operatorname*{argmax}_{Y\in\mathcal{Y}}\ \mathcal{P}(Y|\mathbf{X}^{adv})\neq\operatorname*{argmax}_{Y\in\mathcal{Y}}\ \mathcal{P}(Y|\mathbf{X})\;,\end{aligned}\tag{4}$$

where O(X) denotes performing word-level substitution on sentence X, o(xn) denotes substituting the word xn with a new word from a finite search space that contains all qualified substitutions, if possible. ∆xn and δ respectively denote the difference and the maximum allowed difference between xn and o(xn), ∆X and ε respectively denote the difference and the maximum allowed difference between X and O(X). δ and ε are used to filter qualified substitutions in the search space, which may mainly focus on the semantics and the Lp norm of the embedding distance of each word and the entire sentence, ensuring the adversarial example is imperceptible to humans.

To generate adversarial examples more effectively, the search methods of current attacks, i.e., the strategies to perform o(·), follow the assumption that the information in important words is more influential than the information in unimportant words, and heavily rely on attribution results like (3). These methods attempt to substitute important words first to perturb more influential information in each attack step. Therefore, if different words in a sentence have a similar slight influence on prediction, the attacks should only slightly impact the model prediction in each attack step. To this end, we detail the SIWCon regularization method next.

## 3.3 The SIWCon Regularization

Recall that the goal of SIWCon is to similarize the influence of words. After regularization, the influence of different words on prediction should be similarly slight. To formally define this goal, we first define the 40% of words in a sentence with the highest and lowest influence scores as the important and unimportant words, respectively, following the attribution results of (3). We then propose two efficient non-deterministic data augmentation operations, t imp(·) and t ump(·), which respectively mean randomly removing important and unimportant words in a sentence. Therefore, under the training scenario of (1), the primary goal of SIWCon can now be formulated as:

$$\min_{\theta}\ \left\|\,Q_{imp}-Q_{ump}\,\right\|\;,\tag{5}$$

where Ximp is an augmentation sampled from t imp(X), and Xump is an augmentation sampled from t ump(X). Qimp and Qump measure the extent of model confidence decrease when a random part of information in the important and unimportant words is lost, indicating the *overall* influence of the information in words of different importance on prediction.
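For illustration, the sketch below shows one way the two augmentation operations t imp(·) and t ump(·) could be realized, given the per-word influence scores of (3). The 40% ratio follows the definition above, while the per-word drop probability is an illustrative assumption.

```python
# A minimal sketch of the non-deterministic augmentations t^imp and t^ump: words in the
# top (bottom) 40% by influence score are candidates, and a random subset of them is
# removed from the sentence.
import random

def _remove_random_subset(words, candidate_idx, drop_prob=0.5):
    dropped = {i for i in candidate_idx if random.random() < drop_prob}
    return [w for i, w in enumerate(words) if i not in dropped]

def t_imp(words, scores, ratio=0.4, drop_prob=0.5):
    """Randomly remove some of the most influential words (t^imp)."""
    k = max(1, int(len(words) * ratio))
    important = sorted(range(len(words)), key=lambda i: scores[i], reverse=True)[:k]
    return _remove_random_subset(words, important, drop_prob)

def t_ump(words, scores, ratio=0.4, drop_prob=0.5):
    """Randomly remove some of the least influential words (t^ump)."""
    k = max(1, int(len(words) * ratio))
    unimportant = sorted(range(len(words)), key=lambda i: scores[i])[:k]
    return _remove_random_subset(words, unimportant, drop_prob)
```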
To achieve **Objective 1** and **Objective 2**, and further the goal of SIWCon, we use a contrastive loss objective from the perspective of sentence representation. To define the contrastive loss objective, for convenience, we first define the calculation S: $${\mathcal{S}}_{(i,j)}^{(k,l)}=\exp(\mathrm{sim}[r_{\theta}(X_{i}^{k}),r_{\theta}(X_{j}^{l})]/\tau)\;,\quad0$$ where k, l ∈ {imp, ump, ·}, respectively indicate the augmentation sampled from t imp(·), the augmentation sampled from t ump(·), and the normal example, *i, j* are the example indices, sim[ri, rj ] = r T i rj/krikkrjk is the cosine similarity, τ is a temperature parameter similar to the NT-Xent loss (Chen et al., 2020a; van den Oord et al., 2018). Then the contrastive loss function for an example in a mini-batch Xi ∈ {Xi} B i=1 is defined as: [− log Spositive PB j=1(Snegative) ] , LSIWCon(Xi; θ) = E {Xi} B i=1∼D Xump i ∼t ump(Xi) Ximp i ∼t imp(Xi) (7) $${\mathrm{where}}$$ $$\begin{array}{l}{{{\mathcal S}_{p o s i t i v}={\mathcal S}_{(i,i)}^{(i m p,u m p)}+{\mathcal S}_{(i,i)}^{(\cdot,u m p)}+{\mathcal S}_{(i,i)}^{(\cdot,i m p)}\ ,}}\\ {{{\mathcal S}_{n e g a t i v}={\mathcal S}_{(i,j)}^{(\cdot,\cdot)}+1_{[i\neq j]}[{\mathcal S}_{(i,j)}^{(\cdot,u m p)}+{\mathcal S}_{(i,j)}^{(\cdot,i m p)}]\ ,}}\end{array}$$ B is the batch size, 1[·]is an indicator function that equals 1 if the condition [·] is true; otherwise, it equals 0. Specifically, to calculate the loss for each mini-batch, we first randomly sample the augmentations X ump ifrom t ump(Xi) and the augmentations X imp ifrom t imp(Xi) for each example in the mini-batch. The general framework of SIWCon is shown in Figure 2. ![3_image_0.png](3_image_0.png) To achieve Objective 1, we use the term S (imp,ump) (i,i)in the numerator. This constraint maximizes the similarity between the representations of the augmentations with important and unimportant words removed, making the different degrees of incomplete information in the augmentations have a similar impact on the prediction. To achieve Objective 2, we use the term S (·,ump) (i,i) and S (·,imp) (i,i)in the numerator. These constraints maximize the similarity between the original sentence and the two augmentations, making the incomplete information in the remaining words of the augmentations have a similar influence as the complete information in the normal sentence. Intuitively, the semantics of different examples should be different, and following the constraints in S*positive*, the semantics of the augmentations of different examples should also be different. Therefore, the three terms in S*negative* denote that, given an example within a mini-batch, we treat both the other examples and the augmentations derived from other examples as negative examples. The final loss of SIWCon regularization is computed across all examples in a mini-batch. When SIWCon is used in the normal training scenario (1), the overall objective is: $$\min_{\theta}\ {\cal L}_{ce}({\bf X},Y)+\alpha\ {\cal L}_{\it SIMCon}({\bf X})\,\tag{8}$$ where α is a parameter balancing the supervised part and the contrastive regularization part. | MR | SST2 | IMDB | | | | | | | | | | | | |-------------|------------|-------------|------------|-------------|------------|--------|--------|--------|--------|--------|--------|--------|--------| | DeepWordBug | TextFooler | DeepWordBug | TextFooler | DeepWordBug | TextFooler | | | | | | | | | | Model | Method | ACC. ↑ | AUA. ↑ | ACC. ↑ | AUA. ↑ | ACC. ↑ | AUA. ↑ | ACC. ↑ | AUA. ↑ | ACC. ↑ | AUA. ↑ | ACC. ↑ | AUA. 
↑ | | Normal | 77.01 | 3.66 | 77.01 | 0.33 | 80.96 | 4.67 | 80.96 | 0.33 | 77.38 | 0.30 | 77.38 | 0.00 | | | +SIWCon | 76.84 | 23.00 | 76.74 | 1.67 | 81.19 | 12.00 | 80.39 | 4.67 | 76.32 | 15.67 | 78.55 | 8.33 | | | LSTM | AT | 76.45 | 40.00 | 75.79 | 2.00 | 78.78 | 46.33 | 80.05 | 1.67 | 74.41 | 47.67 | 77.32 | 0.33 | | +SIWCon | 76.08 | 54.00 | 75.04 | 6.00 | 78.56 | 55.33 | 79.59 | 3.68 | 74.07 | 56.67 | 76.03 | 3.67 | | | Normal | 77.58 | 9.66 | 77.58 | 3.33 | 79.47 | 15.67 | 79.47 | 4.33 | 76.60 | 2.33 | 76.60 | 5.67 | | | +SIWCon | 77.67 | 15.67 | 76.64 | 6.33 | 78.73 | 19.33 | 80.73 | 8.00 | 76.01 | 22.00 | 75.24 | 9.67 | | | TextCNN | AT | 75.23 | 43.00 | 73.73 | 10.33 | 73.74 | 68.67 | 75.11 | 10.67 | 74.34 | 33.00 | 76.72 | 9.00 | | +SIWCon | 74.26 | 53.00 | 74.48 | 14.33 | 73.28 | 71.33 | 75.22 | 16.00 | 73.73 | 45.67 | 75.58 | 22.67 | | | Normal | 86.12 | 9.67 | 86.12 | 8.33 | 91.74 | 24.67 | 91.74 | 12.33 | 83.44 | 12.33 | 83.44 | 5.67 | | | +SIWCon | 85.46 | 60.33 | 84.31 | 30.67 | 90.94 | 32.00 | 90.83 | 19.33 | 83.93 | 22.33 | 83.92 | 10.33 | | | BERT | AT | 86.68 | 68.33 | 84.80 | 34.67 | 91.63 | 72.00 | 91.51 | 34.67 | 83.46 | 51.67 | 83.24 | 31.33 | | +SIWCon | 86.49 | 77.33 | 84.90 | 40.00 | 90.85 | 76.33 | 91.63 | 41.67 | 83.16 | 64.67 | 83.62 | 37.33 | | ## 4 Experiment 4.1 Metrics We measure the model performance with *Accuracy* (ACC.), the model robustness with *Accuracy Under Attack (AUA.)*, and the influence of words with three *Area Over the Perturbation Curve (AOPC)* metrics (DeYoung et al., 2020; Samek et al., 2017; Nguyen, 2018). AOPC*Comp.* and AOPC*Suff.* respectively measure the overall influence of the information in important and unimportant words on prediction. AOPC*Comp.* is formulated as: $$\frac{1}{K+1}\sum_{k=1}^{K}{\mathcal{P}}(Y_{t r u e}|\mathbf{X})-{\mathcal{P}}(Y_{t r u e}|t_{/k}^{i n p}(\mathbf{X})),\quad(9)$$ and AOPC*Suff.* is formulated as: $$\frac{1}{K+1}\sum\limits_{k=1}^{K}{\cal P}(Y_{\it true}|{\mathbf{X}})-{\cal P}(Y_{\it true}|t_{/k}^{\it unnp}({\mathbf{X}})),\tag{10}$$ where t imp /k and t ump /k are deterministic transformations that remove the k most and least important words in a sentence, respectively. We also use AOPC*Diff.* to indicate the difference between AOPC*Comp.* and AOPC*Suff.*, measuring how the goal of SIWCon is achieved. ## 4.2 Experiment Setup Setup. We conduct experiments on MR (Pang and Lee, 2005), SST2 (Socher et al., 2013), and IMDB (Maas et al., 2011) datasets. We use LSTM (Hochreiter and Schmidhuber, 1997), TextCNN (Kim, 2014), and the base version of BERT (Devlin et al., 2019) as models. More details of the datasets and models can be found in Appendix A.1 and A.2. We use Normal training (1) and Adversarial training (AT, detailed in Appendix A.3) as basic training methods. In the main experiment, we use DeepWordBug (Gao et al., 2018) and TextFooler (Jin et al., 2020) as attack methods. We also use BAE (Garg and Ramakrishnan, 2020), TextBugger (Li et al., 2019), and PWWS (Ren et al., 2019) in the analysis. Implementation Details. The K in (9) and (10) are set as 40% of each sentence's length. We use Adam (Kingma and Ba, 2015) as the optimizer. For LSTM and TextCNN, we use the average token embedding before the last dense layer as the sentence representation. For BERT, we use the [CLS] token embedding as the sentence representation. Unless otherwise specified, the batch size is set as 32, the learning rate/α/τ for LSTM, TextCNN, and BERT is 1e-3/1.2/0.01, 1e-3/1.2/0.05, and 3e-5/0.005/1.5. 
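As a concrete reference for the word-influence results reported below, the sketch shows how the AOPC metrics in (9) and (10) can be computed; `predict_prob` (returning P(Ytrue|·) for a list of words) and `scores` (the influence scores of (3)) are assumed helpers, and K follows the 40% setting stated in the implementation details.

```python
# Minimal sketch of AOPC_Comp. (Eq. 9) and AOPC_Suff. (Eq. 10): average confidence drop
# after deterministically removing the k most (least) important words, for k = 1..K.
def aopc(predict_prob, words, scores, true_label, remove_most_important=True):
    K = max(1, int(0.4 * len(words)))
    order = sorted(range(len(words)), key=lambda i: scores[i],
                   reverse=remove_most_important)
    base = predict_prob(words, true_label)
    total = 0.0
    for k in range(1, K + 1):
        kept = [w for i, w in enumerate(words) if i not in set(order[:k])]
        total += base - predict_prob(kept, true_label)   # confidence drop after removal
    return total / (K + 1)

# AOPC_Comp. removes the k *most* important words, AOPC_Suff. the k *least* important
# ones; AOPC_Diff. is simply their difference.
```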
The reported results are the average of five individual runs with randomly picked seeds. ## 4.3 Main Results In the main experiment, we train the models on three datasets with different training methods and then measure their robustness by attacking 600 examples randomly picked from the testing set. Following Jin et al. (2020) and Li et al. (2021), for adversarial training, we incorporate the adversarial examples of 10% randomly picked training data into the new training set, which are generated by the same attack method for measuring robustness. | DeepWordBug | TextFooler | | | | | | | |---------------|--------------|--------|--------------|-----------------|--------|----------|-------| | Model | Method | AComp. | ASuff. | ADiff. ↓ AComp. | ASuff. | ADiff. ↓ | | | MR | | | | | | | | | Normal | 0.096 | 0.046 | 0.050 | 0.096 | 0.046 | 0.050 | | | +SIWCon | 0.070 | 0.035 | 0.035 | 0.070 | 0.032 | 0.038 | | | LSTM | AT | 0.084 | 0.037 | 0.047 | 0.072 | 0.032 | 0.040 | | +SIWCon | 0.066 | 0.040 | 0.026 | 0.048 | 0.021 | 0.027 | | | Normal | 0.094 | 0.024 | 0.070 | 0.094 | 0.024 | 0.070 | | | +SIWCon | 0.090 | 0.028 | 0.062 | 0.058 | 0.004 | 0.054 | | | TextCNN AT | 0.096 | 0.037 | 0.059 | 0.087 | 0.031 | 0.056 | | | +SIWCon | 0.114 | 0.063 | 0.051 | 0.076 | 0.024 | 0.052 | | | Normal | 0.064 | 0.018 | 0.046 | 0.064 | 0.018 | 0.046 | | | +SIWCon | 0.030 | 0.015 | 0.015 | 0.038 | 0.022 | 0.016 | | | BERT | AT | 0.054 | 0.029 | 0.025 | 0.042 | 0.016 | 0.026 | | +SIWCon | 0.050 | 0.035 | 0.015 | 0.036 | 0.029 | 0.007 | | | SST2 | | | | | | | | | Normal | 0.083 | 0.022 | 0.061 | 0.083 | 0.022 | 0.061 | | | +SIWCon | 0.071 | 0.017 | 0.054 | 0.055 −0.004 | 0.059 | | | | LSTM | AT | 0.099 | 0.027 | 0.072 | 0.075 | 0.006 | 0.069 | | +SIWCon | 0.078 | 0.026 | 0.052 | 0.066 | 0.010 | 0.056 | | | Normal | 0.094 | 0.028 | 0.066 | 0.094 | 0.028 | 0.066 | | | +SIWCon | 0.078 | 0.016 | 0.062 | 0.087 | 0.026 | 0.061 | | | TextCNN AT | 0.031 −0.007 | 0.038 | 0.046 −0.006 | 0.052 | | | | | +SIWCon | 0.040 | 0.010 | 0.030 | 0.303 −0.018 | 0.048 | | | | Normal | 0.042 | 0.013 | 0.029 | 0.042 | 0.013 | 0.029 | | | +SIWCon | 0.038 | 0.020 | 0.018 | 0.045 | 0.025 | 0.020 | | | BERT | AT | 0.041 | 0.015 | 0.026 | 0.032 | 0.017 | 0.015 | | +SIWCon | 0.047 | 0.031 | 0.016 | 0.028 | 0.020 | 0.008 | | | IMDB | | | | | | | | | Normal | 0.070 | 0.006 | 0.064 | 0.070 | 0.006 | 0.064 | | | +SIWCon | 0.048 | 0.041 | 0.007 | 0.064 | 0.045 | 0.019 | | | LSTM | AT | 0.083 | 0.024 | 0.059 | 0.033 | 0.002 | 0.031 | | +SIWCon | 0.012 | 0.008 | 0.004 | 0.113 | 0.088 | 0.026 | | | Normal | 0.124 | 0.041 | 0.083 | 0.124 | 0.041 | 0.083 | | | +SIWCon | 0.077 | 0.023 | 0.054 | 0.065 | 0.018 | 0.047 | | | TextCNN AT | 0.108 | 0.040 | 0.068 | 0.078 | 0.024 | 0.054 | | | +SIWCon | 0.112 | 0.088 | 0.024 | 0.114 | 0.096 | 0.018 | | | Normal | 0.059 | 0.023 | 0.036 | 0.059 | 0.023 | 0.036 | | | +SIWCon | 0.042 | 0.027 | 0.015 | 0.057 | 0.026 | 0.031 | | | BERT | AT | 0.084 | 0.036 | 0.048 | 0.048 | 0.005 | 0.043 | | +SIWCon | 0.062 | 0.021 | 0.041 | 0.044 | 0.013 | 0.031 | | SIWCon has a slight impact on clean accuracy. The results of clean accuracy are illustrated in Table 1. SIWCon only slightly impacts the clean accuracy when combined with other training methods. Normal+SIWCon sometimes outperforms *Normal* method, and the average accuracy difference between the two methods is only 0.97%. 
SIWCon improves model robustness. The results of robustness are illustrated in Table 1. SIWCon is a self-supervised regularization method that relies solely on the training data (not including labels) and their augmentations generated by removing words, without learning from any adversarial perturbations. Nevertheless, SIWCon is effective in improving model robustness. Under the *unforeseen* scenario, the average AUA. of *Normal+SIWCon* is 10.60% higher than that of the *Normal* method (17.45% vs. 6.85%). Under the *foreseen* scenario, SIWCon can further improve the robustness of models, with the average AUA. of *AT+SIWCon* being 7.35% higher than that of AT (40.98% vs. 33.63%). These results demonstrate the effectiveness of SIWCon and its potential to be combined with other training methods as a plug-and-play regularization.

SIWCon makes words of different importance have a similar influence. The results of word influence are illustrated in Table 2. SIWCon makes the influence of the information in words of different importance more similar, as evidenced by the average AOPC*Diff.* of *Normal+SIWCon* being 0.017 lower than that of *Normal*, and that of *AT+SIWCon* being 0.016 lower than that of AT. Recalling the question raised in Section 1, the increased AUA. and decreased AOPC*Diff.* when SIWCon is used in training empirically give an affirmative answer.

## 4.4 Further Analysis on SIWCon

In this section, we conduct further analysis and an ablation study on BERT and the MR dataset.

Hyperparameter α. The influence of α is illustrated in Figure 3(a). The robustness of BERT is effectively improved for all tested values of α, as the AUA. of *Normal+SIWCon* is always higher than that of *Normal*. When α is small, BERT tends to be more robust. Different values of α have only a slight impact on the clean accuracy, as the ACC. of *Normal+SIWCon* is always close to that of *Normal*.

Temperature τ. The influence of τ is shown in Figure 3(b). Similar to α, when τ is set to various values, the robustness of the model is consistently improved, while the ACC. fluctuates around that of the normally trained BERT. However, τ has a greater impact on the clean accuracy than α.

Batch Size. The influence of the batch size is shown in Figure 3(c). SIWCon benefits from larger batch sizes: as the batch size increases, the gap in clean accuracy (ACC.) between the models trained with and without SIWCon decreases, while the gap in robustness (AUA.) tends to increase. We conjecture that this is due to the contrastive nature of the SIWCon regularization, as larger batch sizes provide more negative examples, thereby facilitating the regularization (Chen et al., 2020a).

Attack Methods and Examples Ratio. We test the performance of SIWCon with more attack methods under different adversarial training settings; the results are shown in Figure 4. We observe that SIWCon consistently outperforms the basic training method in terms of model robustness across the different attack methods. Additionally, we find that when a higher proportion of adversarial examples is incorporated into adversarial training, robustness may sometimes be reduced. However, SIWCon effectively mitigates this negative impact.
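Concretely, α only scales the SIWCon term in the overall training objective (cf. Eq. (11) in Appendix A.3 for the adversarial-training variant), while τ and the batch size act inside the contrastive regularizer itself. The PyTorch-style step below is a schematic sketch under that reading; `siwcon_loss` stands for the contrastive regularizer defined earlier in the paper (with the temperature τ inside it) and is not re-derived here.

```python
import torch.nn.functional as F


def train_step(model, optimizer, batch, alpha, siwcon_loss, adv_batch=None):
    """One update for Normal+SIWCon, or AT+SIWCon when adv_batch is provided (cf. Eq. 11)."""
    x, y = batch
    loss = F.cross_entropy(model(x), y)                      # L_ce(X, Y)
    if adv_batch is not None:                                # adversarial term used by AT
        x_adv, y_adv = adv_batch
        loss = loss + F.cross_entropy(model(x_adv), y_adv)   # L_ce(X^adv, Y)
    loss = loss + alpha * siwcon_loss(model, x)              # alpha * L_SIWCon(X)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```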
Ablation Study. To perform an ablation study, we replace the data augmentation operations $t^{imp}(\cdot)$ and $t^{unimp}(\cdot)$ in SIWCon with operations that randomly drop words in sentences. The results in Table 3 show that the influence-based data augmentation operations used in SIWCon help the model (i) improve robustness, as the AUA. of SIWCon is higher than that of the random variants, and (ii) make the influence of words of different importance on prediction more similar, as the AOPC*Diff.* of SIWCon is lower than that of the random variants.

| Attack | Method | ACC.↑ | AUA.↑ | AOPC*Comp.* | AOPC*Suff.* | AOPC*Diff.*↓ |
|---|---|---|---|---|---|---|
| DeepWordBug | SIWCon | 85.46 | 60.33 | 0.030 | 0.015 | 0.015 |
| | w/ random | 86.49 | 38.67 | 0.035 | 0.014 | 0.021 |
| TextFooler | SIWCon | 84.31 | 30.67 | 0.038 | 0.022 | 0.016 |
| | w/ random | 85.45 | 22.34 | 0.044 | 0.020 | 0.024 |

## 4.5 Further Analysis on Model Behavior

Loss Landscape. Following the filter normalization scheme proposed by Li et al. (2018), we fine-tune BERT on the MR training set and plot its loss landscape on the MR testing set, as shown in Figure 5. The loss landscape of *Normal+SIWCon* (b) is visibly smoother and changes more slowly than that of the normally trained BERT (a). Furthermore, adversarial training (c) makes the loss landscape smoother than the *Normal* method (a), and when it is combined with SIWCon (d), the loss landscape is smoothed further. According to the finding of Mok et al. (2021) that a robust model should have a smooth loss landscape, the visualization results demonstrate that SIWCon is effective for improving model robustness.

Sentence Representation. We fine-tune BERT on MR and then, for a normal sentence, generate two groups of sentences by *cumulatively* removing the 40% most and the 40% least important words in the sentence (one additional word at each step), following the gradient attribution (3). We also utilize PWWS (Ren et al., 2019) to generate adversarial examples from the normal sentence. The sentence representations visualized by t-SNE (van der Maaten and Hinton, 2008) and the reduction paths (Feng et al., 2018) are shown in Figure 6; more results can be found in Appendix B.1.

The representation of the normal sentence (•) can be seen as a point containing the complete information that supports the prediction; the bias of the incomplete sentences away from the normal sentence can be seen as the information loss caused by word removal; and the location of the adversarial example indicates how much information must be lost before the example can no longer maintain the original prediction. When unimportant words are removed, the representations of both models are steadily biased away from the normal sentence, and the removal does not drastically bias the representations towards the adversarial example, indicating that the information in unimportant words is not influential on prediction. However, the two models behave differently when important words are removed. For the *Normal* method, the representations are biased towards the adversarial example, and the prediction is drastically biased when a few important words are removed (indicated by the blue arrow). For SIWCon, the representations are steadily biased in a similar manner as when unimportant words are removed, and they do not fall into the neighborhood of the adversarial example, indicating that important words are less influential on prediction and that it is more difficult for attack methods to find adversarial examples.
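The reduction paths in Figure 6 and the confidence curves discussed next both come from cumulatively deleting words in order of importance. The sketch below illustrates this procedure; `word_importance` stands in for the gradient attribution of Eq. (3), and `encode` and `confidence` are hypothetical helpers returning the sentence representation and P(Y_true | sentence), respectively.

```python
def reduction_path(words, word_importance, encode, confidence,
                   ratio=0.4, most_important_first=True):
    """Cumulatively remove up to 40% of the words (most or least important first),
    recording the sentence representation and model confidence after each removal."""
    scores = word_importance(words)
    order = sorted(range(len(words)), key=lambda i: scores[i],
                   reverse=most_important_first)   # removal order by importance
    budget = max(1, int(ratio * len(words)))

    path = [(encode(words), confidence(words))]    # the intact sentence
    dropped = set()
    for idx in order[:budget]:
        dropped.add(idx)
        reduced = [w for i, w in enumerate(words) if i not in dropped]
        path.append((encode(reduced), confidence(reduced)))
    return path  # representations along the path are later projected with t-SNE
```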
Confidence Changing. We illustrate the change in model confidence as words are removed from a case instance in Figure 7; more results can be found in Appendix B.2. We cumulatively remove the most or least important words in a sentence, and the change in confidence can be seen as the influence of the information in the removed words. SIWCon reduces the influence of the information in important words, as more important words need to be removed to shift the model's prediction.

## 5 Conclusion

This paper presents SIWCon, a self-supervised regularization method based on contrastive learning. SIWCon improves the robustness of language models by encouraging words of different importance to have a more similar influence on prediction. Experiments show that SIWCon effectively improves model robustness without depending on adversarial perturbations. We hope the insights provided in this paper will inspire further research.

## Limitations

The loss objective of the proposed SIWCon regularization is computed on augmented data, which increases the time required for the model to complete training. We evaluate SIWCon on classification tasks, but it may be applied to various other tasks, such as reading comprehension and textual entailment; more evaluations are left to future work. The proposed SIWCon regularization is effective in defending against word-level adversarial attacks, as the basic elements of the augmentation methods are words. However, similar regularization techniques can also be applied to characters and sentences, and we leave evaluating the effectiveness of such variants to future work.

## Ethics Statement

In this paper, we propose a self-supervised regularization method for improving model robustness that does not need to learn from any adversarial examples. Since adversarial examples are difficult to generate for language models, our method can reduce the financial and environmental cost of robustness improvement. Furthermore, our method forces models to consider different words as having a similar degree of influence on prediction, potentially reducing the model's bias. All the datasets we use are publicly available, and we do not violate their licenses.

## Acknowledgements

The authors would like to thank the anonymous reviewers for their comprehensive and constructive comments. This research was supported by the National Research and Development Program of China (No. 2019YFB1005200).

## References

Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2890–2896.

Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller, and Wojciech Samek. 2016. Explaining predictions of non-linear classifiers in NLP. In *Proceedings of the 1st Workshop on Representation Learning for NLP, Rep4NLP@ACL 2016*, pages 1–7.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020a. A simple framework for contrastive learning of visual representations.
In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020*, volume 119 of *Proceedings of Machine Learning Research*, pages 1597–1607. Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E. Hinton. 2020b. Big self-supervised models are strong semi-supervised learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443–4458. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In *Proceedings of the* 56th Annual Meeting of the Association for Computational Linguistics, pages 31–36. Hongchao Fang and Pengtao Xie. 2020. CERT: contrastive self-supervised learning for language understanding. *ArXiv preprint*, abs/2005.12766. Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan L. Boyd-Graber. 2018. Pathologies of neural models make interpretation difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719–3728. Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In *2018 IEEE Security and Privacy Workshops, SP* Workshops 2018, pages 50–56. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 6894–6910. Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6174–6181. John Giorgi, Osvald Nitski, Bo Wang, and Gary Bader. 2021. DeCLUTR: Deep contrastive learning for unsupervised textual representations. In *Proceedings of the 59th Annual Meeting of the Association* for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 879–895. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In *2020* IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, pages 9726–9735. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Computation*, 9:1735– 1780. Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019,, pages 4081–4091. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. 
Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1875–1885. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 2021–2031. Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4129–4142. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pages 8018–8025. Siwon Kim, Jihun Yi, Eunji Kim, and Sungroh Yoon. 2020. Interpretation of NLP models through input marginalization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3154–3167. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In *Proceedings of the 2014* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, Conference Track Proceedings. Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2021. Contextualized perturbation for textual adversarial attack. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5053–5069. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. 2018. Visualizing the loss landscape of neural nets. In *Advances in Neural Information* Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, pages 6391–6401. Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. In *26th Annual Network and Distributed System Security Symposium, NDSS 2019*. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016a. Visualizing and understanding neural models in NLP. In *Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 681–691. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding neural networks through representation erasure. *ArXiv preprint*, abs/1612.08220. Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversarial attack against BERT using BERT. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 6193–6202. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classification can be fooled. In *Proceedings of* the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, pages 4208– 4215. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. 
In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 142–150. Jisoo Mok, Byunggook Na, Hyeokjun Choe, and Sungroh Yoon. 2021. Advrush: Searching for adversarially robust neural architectures. In *2021 IEEE/CVF* International Conference on Computer Vision, ICCV 2021, pages 12302–12312. John Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, and Yanjun Qi. 2020a. Reevaluating adversarial examples in natural language. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3829–3839. John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020b. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119–126. Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, and Lewis D. Griffin. 2021. Frequency-guided word substitutions for detecting textual adversarial examples. In *Proceedings of the 16th Conference of* the European Chapter of the Association for Computational Linguistics, pages 171–186. Dong Nguyen. 2018. Comparing automatic and human evaluation of local explanations for text classification. In *Proceedings of the 2018 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1069–1078. Tian Pan, Yibing Song, Tianyu Yang, Wenhao Jiang, and Wei Liu. 2021. Videomoco: Contrastive video representation learning with temporally adversarial examples. In *IEEE Conference on Computer Vision* and Pattern Recognition, 2021, pages 11205–11214. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In *Proceedings of the* 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115–124. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1532–1543. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In *Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019*, pages 1085–1097. Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. In *Proceedings of the Twenty-Sixth* International Joint Conference on Artificial Intelligence, IJCAI 2017, pages 2662–2670. Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. 2017. Evaluating the visualization of what a deep neural network has learned. *IEEE Trans. Neural Networks Learn. Syst.*, 28(11):2660–2673. Xuelin Situ, Ingrid Zukerman, Cécile Paris, Sameen Maruf, and Gholamreza Haffari. 2021. Learning to explain: Generating stable explanations fast. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language* Processing, ACL/IJCNLP 2021, pages 5340–5355. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. 
In *Proceedings of the 2013 Conference on* Empirical Methods in Natural Language Processing, pages 1631–1642. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In *Proceedings of the 34th International Conference on* Machine Learning, ICML 2017, Proceedings of Machine Learning Research, pages 3319–3328. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In *2nd International Conference on* Learning Representations, ICLR 2014, Conference Track Proceedings. Aäron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. *ArXiv preprint*, abs/1807.03748. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. *Journal of Machine* Learning Research, 9(86):2579–2605. Xiaosen Wang, Jin Hao, Yichen Yang, and Kun He. 2021. Natural language adversarial defense through synonym encoding. In *Proceedings of the ThirtySeventh Conference on Uncertainty in Artificial Intelligence, UAI 2021*, volume 161 of *Proceedings of* Machine Learning Research, pages 823–833. Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. CLEAR: contrastive learning for sentence representation. *ArXiv* preprint, abs/2012.15466. Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. ConSERT: A contrastive framework for self-supervised sentence representation transfer. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 5065– 5075. Jin Yong Yoo, John Morris, Eli Lifland, and Yanjun Qi. 2020. Searching for a search method: Benchmarking search algorithms for generating NLP adversarial examples. In *Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting* Neural Networks for NLP, pages 323–332. Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I, volume 8689 of Lecture Notes in Computer Science, pages 818–833. Pengwei Zhan, Yang Wu, Shaolei Zhou, Yunjian Zhang, and Liming Wang. 2022a. Mitigating the inconsistency between word saliency and model confidence with pathological contrastive training. In Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2226–2244. Pengwei Zhan, Chao Zheng, Jing Yang, Yuxiang Wang, Liming Wang, Yang Wu, and Yunjian Zhang. 2022b. PARSE: an efficient search method for black-box adversarial text attacks. In *Proceedings of the 29th* International Conference on Computational Linguistics, COLING 2022, pages 4776–4787. Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, and Wei Wang. 2019. Learning to discriminate perturbations for blocking adversarial attacks in text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, pages 4903–4912. ## A Additional Experimental Details A.1 Details On Dataset MR contains movie reviews from Rotten Tomatoes, and the examples are labeled as positive or negative, with 8,530 for training and 1,066 for testing. SST2 contains sentences labeled as positive or negative, with 67,349 for training and 1,821 for testing. 
IMDB contains binary polar movie reviews from the Internet Movie Database, which are also labeled as positive or negative, with 25,000 for training and 25,000 for testing.

## A.2 Details On Model

The experiments are conducted on three models with different architectures. The LSTM (Hochreiter and Schmidhuber, 1997) consists of a 300-dimensional GloVe embedding layer (Pennington et al., 2014), a Bi-LSTM layer with 150 hidden units, and a dense layer. The TextCNN is similar to the architecture in Kim (2014), with the embedding also replaced by the 300-dimensional GloVe embedding. The BERT (Devlin et al., 2019) used in our experiments is the base uncased version.

## A.3 Details On Baseline

When SIWCon is combined with adversarial training, the overall objective is formulated as:

$$\min_{\theta}\ \mathcal{L}_{ce}(\mathbf{X},Y)+\mathcal{L}_{ce}(\mathbf{X}^{adv},Y)+\alpha\,\mathcal{L}_{SIWCon}(\mathbf{X}).\tag{11}$$

This joint training objective helps the model learn both the normal and adversarial example distributions and simultaneously regularizes the model with respect to word influence.

## B Additional Experimental Results

## B.1 Analysis On Sentence Representation

We give more visualizations of sentence representations and reduction paths in Figures 8–13. The instance sentences are randomly picked from the MR dataset, the sentence representations are obtained on BERT, and the adversarial examples are generated by PWWS. As in the main text, darker examples indicate that more words are removed. Black and *orange* arrows respectively illustrate the reduction paths of unimportant and important words. The blue arrow highlights the reduction that drastically biases the prediction. Representations in the *pink* area belong to the neighborhood of the adversarial example.

## B.2 Analysis On Confidence Changing

We provide more results on the change in model confidence with the removal of words in Figures 14–17. The instance sentences are randomly picked from the MR dataset, and the results are obtained on BERT.

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? In Section Limitations.

✓ A2. Did you discuss any potential risks of your work? In Section Ethics Statement.

✓ A3. Do the abstract and introduction summarize the paper's main claims? In Abstract and Section 1.

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did you use or create scientific artifacts?** In Section 3 and Section 4.

✓ B1. Did you cite the creators of artifacts you used? In Section 1, Section 4.2, and Appendix A.

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? In Section Ethics Statement.

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? In Section Ethics Statement.
✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The datasets we used in the paper are widely used benchmark datasets.

✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? In Section 4.2, Appendix A.1, and Appendix A.2.

✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In Appendix A.1.

## C ✓ **Did you run computational experiments?** In Section 4.

✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In Section Limitations and Appendix A.2.

✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In Section 4.2, Appendix A.

✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In Section 4.2.

✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In Section 4.2.

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank.

D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response.

D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response.

D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response.

D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response.

D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.